From peridot.faceted at gmail.com Tue May 1 01:48:38 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 1 May 2007 01:48:38 -0400 Subject: [SciPy-user] ode/programming question In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB8@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB8@mrlnt6.mrl.uiuc.edu> Message-ID: On 30/04/07, Trevis Crane wrote: > I have a system of linear equations. Each equation has the exact same form: > > phi_dot = Ij - Ic*sin(phi) > > Each of the equations has a different value for Ij, Ic, and phi, and indeed > their coupling is through these parameters. Ij and Ic change dynamically > in time subject to various constraints, including the way in which > they're coupled. Whenever t changes, I have to recalculate several > parameters that are then used to determine Ij and Ic, but when the next time > step comes, I need to use the most recent values of Ij and Ic in order to > calculate the next values. Furthermore, this evolution needs to continue > until the energy versus time of the system flattens out. And the energy is > determined based upon these continually updating parameters. Does this make > sense? I have this simulation working in Matlab, but as I've mentioned I > want to try using Python in the future, so I thought I'd start with > something for which I already have a correct answer. That description - Ij and Ic need to be calculated based on previous values - is a bit vague. Is this a question of efficient computation, or one of definition? Are the Ij and Ic actually evolving according to a differential equation as well, perhaps? > Now here's another question -- I'm trying to pass an extra argument to > odeint like this > > y = odeint(y0,t,x) > > where x is the extra argument (a parameter I want to pass to the helper > function). But this returns an error that tells me extra arguments must be > in a tuple. I'm not sure what the appropriate syntax for this would be. 
> Any help is appreciated... If you want to make one item into a tuple, Python uses the odd-looking syntax (x,) (the parentheses are often but not always necessary): y = odeint(F,y0,ts,(x,)) You can avoid needing to use this by doing: y = odeint(lambda y, t: F(y,t,x), y0, ts) This also allows you to rearrange t, y, and x and supply keyword arguments, if you like. Anne From icy.flame.gm at gmail.com Tue May 1 04:58:56 2007 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Tue, 1 May 2007 09:58:56 +0100 Subject: [SciPy-user] Installation SciPy on FC5 In-Reply-To: References: Message-ID: You should really upgrade to the latest FC release if possible; FC has a rather short life cycle. The FC6 repo does contain a scipy package; however, I always build my own from the latest release. "yum install" the following: python-devel python-numarray lapack lapack-devel atlas atlas-devel # depends on what extensions your machine supports # cat /proc/cpuinfo will give you some ideas. atlas-3dnow atlas-3dnow-devel atlas-sse atlas-sse-devel atlas-sse2 atlas-sse2-devel fftw fftw-devel fftw2 fftw2-devel You will also need the standard development libs and tools, so you can do "yum groupinstall" for the following: "Development Libraries" "Development Tools" Yes, "yum remove" the following packages if you already have them installed: numpy scipy python-numeric gnuplot They tend to be rather outdated and don't make use of all your existing libs. python-numeric is the old Numeric lib; it should be replaced by numpy in new code, so use it only if you have some old code that needs it. Terminal support in the standard gnuplot package is terrible; it takes more effort to fix it than to rebuild it. If you are likely to use gnuplot, compile your own. Download the source code for numpy and scipy, then compile them yourself; it should go pretty smoothly. -- iCy-fLaME ------------------------------------------------ The body may be wounded, but it is the mind that hurts. 
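[Editorial note: to make the two odeint calling patterns in Anne's reply concrete, here is a small self-contained sketch. A toy forward-Euler stepper stands in for odeint so the example runs without SciPy; the names euler and F and the parameter values Ij=1.0, Ic=0.5 are illustrative assumptions, not code from the thread.]

```python
import math

def euler(f, y0, ts, args=()):
    """Toy forward-Euler stepper with an odeint-like signature:
    f(y, t, *extra) -> dy/dt.  Stands in for scipy's odeint here."""
    ys = [y0]
    for t0, t1 in zip(ts, ts[1:]):
        ys.append(ys[-1] + (t1 - t0) * f(ys[-1], t0, *args))
    return ys

def F(phi, t, Ij, Ic):
    # The right-hand side from the thread: phi_dot = Ij - Ic*sin(phi)
    return Ij - Ic * math.sin(phi)

ts = [0.01 * i for i in range(101)]

# Pattern 1: extra parameters go in a tuple -- a single parameter x
# would be written (x,); here there are two.
ys1 = euler(F, 0.0, ts, args=(1.0, 0.5))

# Pattern 2: a lambda closes over the parameters, so no tuple is needed.
ys2 = euler(lambda phi, t: F(phi, t, 1.0, 0.5), 0.0, ts)

assert ys1 == ys2  # both patterns feed identical parameters
```

With the real scipy.integrate.odeint the same two forms read odeint(F, y0, ts, (1.0, 0.5)) and odeint(lambda phi, t: F(phi, t, 1.0, 0.5), y0, ts).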
From kuantiko at escomposlinux.org Tue May 1 08:58:13 2007 From: kuantiko at escomposlinux.org (Jesús Carrete Montaña) Date: Tue, 1 May 2007 14:58:13 +0200 Subject: [SciPy-user] Question about scipy.stats Message-ID: <200705011458.14151.kuantiko@escomposlinux.org> Hi to all, I get the following error when trying to evaluate the probability density function for a Poisson distribution: from scipy import * d2=stats.poisson(loc=1.,scale=2.) x=arange(1,10,1) d2.pdf(x) --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/kuantiko/latex/2006-2007/lFNyP/memoria/ /usr/lib/python2.4/site-packages/scipy/stats/distributions.py in pdf(self, x) 104 self.dist = dist 105 def pdf(self,x): --> 106 return self.dist.pdf(x,*self.args,**self.kwds) 107 def cdf(self,x): 108 return self.dist.cdf(x,*self.args,**self.kwds) AttributeError: poisson_gen instance has no attribute 'pdf' Perhaps I'm using this module incorrectly, but I've noticed that the same code works with a gaussian distribution: d1=stats.norm(loc=1.,scale=2.) d1.pdf(x) Out[12]: array([ ... ]) Is it a bug or am I missing something? I am using SciPy version 0.5.2 on Debian Sid. Thanks, Jesús From rhc28 at cornell.edu Tue May 1 10:09:49 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Tue, 1 May 2007 10:09:49 -0400 Subject: [SciPy-user] ode/programming question In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB8@mrlnt6.mrl.uiuc.edu> Message-ID: On 01/05/07, Anne Archibald wrote: > On 30/04/07, Trevis Crane wrote: > > > I have a system of linear equations. Each equation has the exact same form: > > > > phi_dot = Ij - Ic*sin(phi) > > > > Each of the equations has a different value for Ij, Ic, and phi, and indeed > > their coupling is through these parameters. Ij and Ic change dynamically > > in time subject to various constraints, including the way in which > > they're coupled. 
Whenever t changes, I have to recalculate several > > parameters that are then used to determine Ij and Ic, but when the next time > > step comes, I need to use the most recent values of Ij and Ic in order to > > calculate the next values. Furthermore, this evolution needs to continue > > until the energy versus time of the system flattens out. And the energy is > > determined based upon these continually updating parameters. Does this make > > sense? I have this simulation working in Matlab, but as I've mentioned I > > want to try using Python in the future, so I thought I'd start with > > something for which I already have a correct answer. > > That description - Ij and Ic need to be calculated based on previous > values - is a bit vague. Is this a question of efficient computation, > or one of definition? Are the Ij and Ic actually evolving according to > a differential equation as well, perhaps? It sounds like you might want to define auxiliary functions of your variables and parameters that return the necessary values of Ij and Ic. That is a much cleaner way to do what you're probably trying to do. I expect you would have to pass in the names of these functions explicitly as additional arguments (in the form Anne has described) in order for your right-hand side function F to "see" the functions. In order to stop a simulation when a certain condition has become true you will need to use a more sophisticated integrator such as those provided by PyDSTool or SloppyCell that allow user-defined events. Also, those typically run significantly faster. -Rob From lorenzo.isella at gmail.com Tue May 1 12:09:33 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Tue, 1 May 2007 18:09:33 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy Message-ID: Dear All, I was wondering whether it makes sense or not to think about an online centralized repository containing numerical software relying on SciPy. 
For instance, I am using integrate.odeint to solve a large system of nonlinear coupled equations. If I succeed in my project, this code could be interesting for those studying Bose-Einstein condensation or coagulation of aerosols, to mention only a few. My purpose is not to advertise my work; on the contrary, I think that it would have helped me a lot to know whether someone else had already tried their hand at integrate.odeint on a large number of equations, to have a benchmark and so on. It should be clear by now that I am thinking about an online "box" containing a number of Python codes with a description of what they do/solve and possibly a benchmark. I'd like to hear the opinions from those on this very collaborative mailing list. Apologies if such a thing already exists, but my online searches did not find it. Kind Regards Lorenzo From lbolla at gmail.com Tue May 1 12:22:48 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 1 May 2007 18:22:48 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: Message-ID: <80c99e790705010922r57f1f6fbg672ff5698da0c32b@mail.gmail.com> I'd be very happy to share my codes about electromagnetic problems. I was thinking about sharing them on empython.org, but that website doesn't seem to be very actively supported at the moment. lorenzo. On 5/1/07, Lorenzo Isella wrote: > > Dear All, > I was wondering whether it makes sense or not to think about an online > centralized repository containing numerical software relying on SciPy. > For instance, I am using integrate.odeint to solve a large system of > nonlinear coupled equations. If I succeed in my project, this code > could be interesting for those studying Bose-Einstein condensation or > coagulation of aerosols, to mention only a few. 
> My purpose is not to advertise my work; on the contrary, I think that it > would have helped me a lot to know whether someone else had already > tried their hand at integrate.odeint on a large number of equations, to > have a benchmark and so on. > It should be clear by now that I am thinking about an online "box" > containing a number of Python codes with a description of what they > do/solve and possibly a benchmark. > I'd like to hear the opinions from those on this very collaborative > mailing list. > Apologies if such a thing already exists, but my online searches did > not find it. > Kind Regards > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From David.L.Goldsmith at noaa.gov Tue May 1 12:31:28 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Tue, 01 May 2007 09:31:28 -0700 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: Message-ID: <46376B60.3020008@noaa.gov> Don't we have a wiki that could host this sort of thing, i.e., so that individuals who want to submit such packages can just do so @ their leisure? DG Lorenzo Isella wrote: > Dear All, > I was wondering whether it makes sense or not to think about an online > centralized repository containing numerical software relying on SciPy. > For instance, I am using integrate.odeint to solve a large system of > nonlinear coupled equations. If I succeed in my project, this code > could be interesting for those studying Bose-Einstein condensation or > coagulation of aerosols, to mention only a few. > My purpose is not to advertise my work; on the contrary, I think that it > would have helped me a lot to know whether someone else had already > tried their hand at integrate.odeint on a large number of equations, to > have a benchmark and so on. 
> It should be clear by now that I am thinking about an online "box" > containing a number of Python codes with a description of what they > do/solve and possibly a benchmark. > I'd like to hear the opinions from those on this very collaborative > mailing list. > Apologies if such a thing already exists, but my online searches did > not find it. > Kind Regards > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From t_crane at mrl.uiuc.edu Tue May 1 12:33:13 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Tue, 1 May 2007 11:33:13 -0500 Subject: [SciPy-user] ode/programming question Message-ID: <9EADC1E53F9C70479BF6559370369114134409@mrlnt6.mrl.uiuc.edu> > In order to stop a simulation when a certain condition has become true > you will need to use a more sophisticated integrator such as those > provided by PyDSTool or SloppyCell that allow user-defined events. > Also, those typically run significantly faster. [Trevis Crane] yeah, I still have to play with the event detection, but it's on my list. thanks, trevis > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fred.jen at web.de Tue May 1 12:57:34 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Tue, 01 May 2007 18:57:34 +0200 Subject: [SciPy-user] Timeseries Message-ID: <1178038654.5149.10.camel@muli> Here are some more details about the problems working with the timeseries module: I have a Feisty Fawn amd64, so Python 2.5 is installed, and there is no problem compiling the maskedarrays. Now I installed a build-essential package and the build process works, but it gives me the warnings in the enclosure. I can compile it too. 
But in the interactive shell import timeseries produces: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.5/site-packages/timeseries/__init__.py", line 22, in import tseries File "/usr/lib/python2.5/site-packages/timeseries/tseries.py", line 961, in tsmasked = TimeSeries(masked,dates=DateArray(Date('D',1))) ValueError: invalid frequency specification Any ideas? Fred Jendrzejewski -------------- next part -------------- Warning: Assuming default configuration (./lib/{setup_lib,setup}.py was not found) Appending timeseries.lib configuration to timeseries Ignoring attempt to set 'name' (from 'timeseries' to 'timeseries.lib') Warning: Assuming default configuration (./io/{setup_io,setup}.py was not found) Appending timeseries.io configuration to timeseries Ignoring attempt to set 'name' (from 'timeseries' to 'timeseries.io') Warning: Assuming default configuration (./plotlib/{setup_plotlib,setup}.py was not found) Appending timeseries.plotlib configuration to timeseries Ignoring attempt to set 'name' (from 'timeseries' to 'timeseries.plotlib') Warning: Assuming default configuration (./tests/{setup_tests,setup}.py was not found) Appending timeseries.tests configuration to timeseries Ignoring attempt to set 'name' (from 'timeseries' to 'timeseries.tests') running build running config_fc running build_src building extension "timeseries.cseries" sources running build_py running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'timeseries.cseries' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes -fPIC compile options: '-I/usr/lib/python2.5/site-packages/numpy/core/include/numpy -I/usr/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' gcc: ./src/cseries.c ./src/cseries.c: In function 'reverse_dict': ./src/cseries.c:1458: warning: passing argument 2 of 'PyDict_Next' from incompatible pointer type ./src/cseries.c:1464: warning: suggest parentheses around assignment used as truth value ./src/cseries.c: At top level: ./src/cseries.c:1473: warning: function declaration isn't a prototype ./src/cseries.c:1674: warning: function declaration isn't a prototype ./src/cseries.c: In function 'DateObject_strfmt': ./src/cseries.c:1992: warning: unused variable 'special_found' ./src/cseries.c: In function 'check_mov_args': ./src/cseries.c:3047: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c: In function 'calc_mov_sum': ./src/cseries.c:3090: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c:3097: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c:3107: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c:3118: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c: In function 'calc_mov_median': ./src/cseries.c:3253: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c:3363: warning: passing argument 2 of '*(PyArray_API + 1280u)' from incompatible pointer type ./src/cseries.c:3736:2: warning: no newline at end of file ./src/cseries.c: In function 'check_mov_args': ./src/cseries.c:3070: warning: control reaches end of non-void function ./src/cseries.c: In function 'DateObject___compare__': ./src/cseries.c:2279: warning: control reaches end of non-void function ./src/cseries.c: In function 'DateObject_New': ./src/cseries.c:1676: warning: 'dummy' is used uninitialized in this function ./src/cseries.c: In function 'TimeSeries_convert': ./src/cseries.c:2726: warning: 'currPerLen' may be used uninitialized in this function ./src/cseries.c: In function 'DateObject___str__': ./src/cseries.c:2114: warning: 'string_arg' may be used uninitialized in this function gcc -pthread -shared -Wl,-O1 build/temp.linux-x86_64-2.5/src/cseries.o -o build/lib.linux-x86_64-2.5/timeseries/cseries.so From rhc28 at cornell.edu Tue May 1 13:06:56 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Tue, 1 May 2007 13:06:56 -0400 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <46376B60.3020008@noaa.gov> References: <46376B60.3020008@noaa.gov> Message-ID: On 01/05/07, David Goldsmith wrote: > Don't we have a wiki that could host this sort of thing, i.e., so that > individuals who want to submit such packages can just do so @ their leisure? > > DG I think the OP was talking about a repository for the software itself, not just a link. But in my opinion that's what free hosters such as sourceforge are for, and a wiki page such as http://www.scipy.org/Topical_Software is the place to encourage people to share their wares. For instance, that wiki page has a section on electromagnetic problems. -Rob From fredmfp at gmail.com Tue May 1 13:20:39 2007 From: fredmfp at gmail.com (fred) Date: Tue, 01 May 2007 19:20:39 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <80c99e790705010922r57f1f6fbg672ff5698da0c32b@mail.gmail.com> References: <80c99e790705010922r57f1f6fbg672ff5698da0c32b@mail.gmail.com> Message-ID: <463776E7.1070401@gmail.com> lorenzo bolla wrote: > I'd be very happy to share my codes about electromagnetic problems. > I was thinking about sharing them on empython.org, but that website doesn't seem to be very > actively supported at the moment. Did you contact Rob? 
I'm sure he'll be happy to host your stuff (as he is hosting mine, an FDTD viz', though I haven't started to upload my stuff there yet ;-) -- http://scipy.org/FredericPetit From pgmdevlist at gmail.com Tue May 1 13:21:59 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 1 May 2007 13:21:59 -0400 Subject: [SciPy-user] Timeseries In-Reply-To: <1178038654.5149.10.camel@muli> References: <1178038654.5149.10.camel@muli> Message-ID: <200705011322.00236.pgmdevlist@gmail.com> On Tuesday 01 May 2007 12:57:34 Fred Jendrzejewski wrote: Fred, I'm aware of the warnings being raised during the installation; I'll work on that. There's obviously something wrong with the compilation/installation of the C part of the module. The error message you see is raised by the C part. Please contact me off list for the moment; we'll post the solution on the board when we've figured it out. > Here are some more details about the problems working with the > timeseries module: > I have a Feisty Fawn amd64, > so Python 2.5 is installed, and there is no problem compiling the > maskedarrays. > Now I installed a build-essential package and the build process works, > but it gives me the warnings in the enclosure. I can compile it too. > > But in the interactive shell > import timeseries produces: > > Traceback (most recent call last): > File "", line 1, in > File > "/usr/lib/python2.5/site-packages/timeseries/__init__.py", > line 22, in import tseries > File > "/usr/lib/python2.5/site-packages/timeseries/tseries.py", > line 961, in tsmasked = > TimeSeries(masked,dates=DateArray(Date('D',1))) > ValueError: invalid frequency specification > > Any ideas? 
> > Fred Jendrzejewski From lbolla at gmail.com Tue May 1 13:39:04 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 1 May 2007 19:39:04 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <463776E7.1070401@gmail.com> References: <80c99e790705010922r57f1f6fbg672ff5698da0c32b@mail.gmail.com> <463776E7.1070401@gmail.com> Message-ID: <80c99e790705011039n137aecd1u4c5b1150ab9c1dc8@mail.gmail.com> yes, I did. he told me he is having problems with the website and he'll give me access as soon as possible. thank you! lorenzo. On 5/1/07, fred wrote: > > lorenzo bolla wrote: > > I'd be very happy to share my codes about electromagnetic problems. > > I was thinking about sharing them on empython.org, but that website doesn't seem to be very > > actively supported at the moment. > Did you contact Rob? > > I'm sure he'll be happy to host your stuff (as he is hosting mine, an FDTD > viz', though I haven't started to upload my stuff there yet ;-) > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Tue May 1 13:49:51 2007 From: fredmfp at gmail.com (fred) Date: Tue, 01 May 2007 19:49:51 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <80c99e790705011039n137aecd1u4c5b1150ab9c1dc8@mail.gmail.com> References: <80c99e790705010922r57f1f6fbg672ff5698da0c32b@mail.gmail.com> <463776E7.1070401@gmail.com> <80c99e790705011039n137aecd1u4c5b1150ab9c1dc8@mail.gmail.com> Message-ID: <46377DBF.7000706@gmail.com> lorenzo bolla wrote: > yes, I did. > he told me he is having problems with the website and he'll give me > access as soon as possible. > thank you! 
So I think it's ok now, as he gave me the account I wanted ;-) Cheers, -- http://scipy.org/FredericPetit From David.L.Goldsmith at noaa.gov Tue May 1 13:55:02 2007 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Tue, 01 May 2007 10:55:02 -0700 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: <46376B60.3020008@noaa.gov> Message-ID: <46377EF6.5050205@noaa.gov> Rob Clewley wrote: > On 01/05/07, David Goldsmith wrote: > >> Don't we have a wiki that could host this sort of thing, i.e., so that >> individuals who want to submit such packages can just do so @ their leisure? >> >> DG >> > > I think the OP was talking about a repository for the software itself, > not just a link. But in my opinion that's what free hosters such as > sourceforge are for, and a wiki page such as > > http://www.scipy.org/Topical_Software > > is the place to encourage people to share their wares. For instance, > that wiki page has a section on electromagnetic problems. > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I agree that SF could be considered for hosting the software, but since we've started using Trac where I work, I've begun to think of wiki pages as more than just html containers. FWIW, DG -- ERD/ORR/NOS/NOAA From Karl.Young at ucsf.edu Tue May 1 15:32:14 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 01 May 2007 12:32:14 -0700 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: Message-ID: <463795BE.6070805@ucsf.edu> It seems to me like the CRAN repository for R based statistics packages (http://cran.r-project.org/) provides a pretty good model for something like this, though maybe there's something I'm missing re. SciPy that makes CRAN a bad model. 
Having CRAN seems to moderate the size of the basic R package (and install) which is convenient, and things that get heavily tested and used often end up eventually making it in to the core R package. >Dear All, >I was wondering whether it makes sense or not to think about an online >centralized repository containing numerical software relying on SciPy. >For instance, I am using integrate.odeint to solve a large system of >nonlinear coupled equations. If I succeed in my project, this code >could be interesting for those studying Bose-Einstein condensation or >coagulation of aerosols, to mention only a few. >My purpose is not to advertise my work; on the contrary, I think that it >would have helped me a lot to know whether someone else had already >tried their hand at integrate.odeint on a large number of equations, to >have a benchmark and so on. >It should be clear by now that I am thinking about an online "box" >containing a number of Python codes with a description of what they >do/solve and possibly a benchmark. >I'd like to hear the opinions from those on this very collaborative >mailing list. >Apologies if such a thing already exists, but my online searches did >not find it. 
>Kind Regards > >Lorenzo >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From robert.kern at gmail.com Tue May 1 16:45:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 01 May 2007 15:45:17 -0500 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <463795BE.6070805@ucsf.edu> References: <463795BE.6070805@ucsf.edu> Message-ID: <4637A6DD.4060707@gmail.com> Karl Young wrote: > It seems to me like the CRAN repository for R based statistics packages > (http://cran.r-project.org/) provides a pretty good model for something > like this, though maybe there's something I'm missing re. SciPy that > makes CRAN a bad model. Having CRAN seems to moderate the size of the > basic R package (and install) which is convenient, and things that get > heavily tested and used often end up eventually making it in to the core > R package. We essentially have the CRAN model using the Python Package Index. 1. Write your code. 2. Package it using distutils. 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your tarballs and eggs and wininst installers, so you don't have to scrounge for hosting. 4. Write up a web page about it if the description text that PyPI allows isn't enough for your code. Feel free to use the www.scipy.org wiki if you like. 5. Announce your package here and other places of interest. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From rlratzel at enthought.com Tue May 1 17:03:04 2007 From: rlratzel at enthought.com (Rick Ratzel) Date: Tue, 1 May 2007 16:03:04 -0500 (CDT) Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <4637A6DD.4060707@gmail.com> (message from Robert Kern on Tue, 01 May 2007 15:45:17 -0500) References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> Message-ID: <20070501210304.717121DF4FE@mail.enthought.com> > Date: Tue, 01 May 2007 15:45:17 -0500 > From: Robert Kern > > Karl Young wrote: > > It seems to me like the CRAN repository for R based statistics packages > > (http://cran.r-project.org/) provides a pretty good model for something > > like this, though maybe there's something I'm missing re. SciPy that > > makes CRAN a bad model. Having CRAN seems to moderate the size of the > > basic R package (and install) which is convenient, and things that get > > heavily tested and used often end up eventually making it in to the core > > R package. > > We essentially have the CRAN model using the Python Package Index. > > 1. Write your code. > 2. Package it using distutils. > 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your > tarballs and eggs and wininst installers, so you don't have to scrounge for > hosting. > 4. Write up a web page about it if the description text that PyPI allows isn't > enough for your code. Feel free to use the www.scipy.org wiki if you like. > 5. Announce your package here and other places of interest. > I hope I'm not stating the obvious, but another advantage of PyPI is that all easy_install users will have access by default. -- Rick Ratzel - Enthought, Inc. 
515 Congress Avenue, Suite 2100 - Austin, Texas 78701 512-536-1057 x229 - Fax: 512-536-1059 http://www.enthought.com From gael.varoquaux at normalesup.org Wed May 2 02:19:56 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 2 May 2007 08:19:56 +0200 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <4637A6DD.4060707@gmail.com> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> Message-ID: <20070502061956.GB21178@clipper.ens.fr> On Tue, May 01, 2007 at 03:45:17PM -0500, Robert Kern wrote: > We essentially have the CRAN model using the Python Package Index. > 1. Write your code. > 2. Package it using distutils. > 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your > tarballs and eggs and wininst installers, so you don't have to scrounge for > hosting. > 4. Write up a web page about it if the description text that PyPI allows isn't > enough for your code. Feel free to use the www.scipy.org wiki if you like. > 5. Announce your package here and other places of interest. Well this is quite a complicated procedure if you just want to share a little bit of code. Most people don't want to learn about distutils. I think it would also be useful to have something where it is dead easy to contribute code. The wiki is indeed a good starting point, but I don't think it will scale up terribly well. Gaël From david at ar.media.kyoto-u.ac.jp Wed May 2 02:42:43 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 02 May 2007 15:42:43 +0900 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <20070502061956.GB21178@clipper.ens.fr> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> Message-ID: <463832E3.1030206@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > On Tue, May 01, 2007 at 03:45:17PM -0500, Robert Kern wrote: >> We essentially have the CRAN model using the Python Package Index. > >> 1. 
Write your code. >> 2. Package it using distutils. >> 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your >> tarballs and eggs and wininst installers, so you don't have to scrounge for >> hosting. >> 4. Write up a web page about it if the description text that PyPI allows isn't >> enough for your code. Feel free to use the www.scipy.org wiki if you like. >> 5. Announce your package here and other places of interest. > > Well this is quite a complicated procedure if you just want to share a > little bit of code. Most people don't want to learn about distutils. I > think it would also be useful to have something where it is dead easy to > contribute code. The wiki is indeed a good starting point, but I don't > think it will scale up terribly well. If you want to share anything non-trivial, you will need distutils, I think, or something as complicated. I don't know much about R, but it looks like it uses autoconf when you have some code written in C or Fortran (and compared to autoconf, almost anything is a pleasant experience, including debugging someone else's Perl code). Now, someone could create something like a system to create a skeleton project, with a distutils setup.py already written, etc., to make things a bit more automated. A page could also be set up on the wiki with instructions on how to do it for someone new to scipy. But generally, when you want to share some code which is a bit more than one python file, it will require some work, and I don't see how it can be really different from what Robert described. 
David From peridot.faceted at gmail.com Wed May 2 03:08:38 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 2 May 2007 03:08:38 -0400 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <463832E3.1030206@ar.media.kyoto-u.ac.jp> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> Message-ID: On 02/05/07, David Cournapeau wrote: > If you want to share anything non trivial, you will need distutils, I > think, or something as complicated. I don't know much about R, but it > looks like it uses autoconf when you have some code written in C or > Fortran (and compared to autoconf, almost anything is a pleasant > experience, including debugging someone's else perl code). Now, someone > could create something like a system to create a skeleton project, with > distutils setup.py already written, etc... To make things a bit more > automated. A page could also be set up om the wiki for instructions how > to do it for someone new to scipy, etc.. > > But generally, when you want to share some code which is a bit more than > one python file, it will require some work, and I don't see how it can > be really different than what Robert described. I made a distutils package for a library with a few python files the other day. I'd never done it before and it took me about ten minutes. It's really not hard. You write one file, setup.py, you make sure you have an __init__.py containing __all__, and it just works. If you have C files, it becomes more complicated - but getting C files to compile on any computer but your own basically requires something like distutils. I really don't think there's much of a gap between projects small enough to go in the wiki and projects for which distutils is worth the trouble. 
Anne From david at ar.media.kyoto-u.ac.jp Wed May 2 03:45:23 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 02 May 2007 16:45:23 +0900 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> Message-ID: <46384193.3060006@ar.media.kyoto-u.ac.jp> Anne Archibald wrote: > On 02/05/07, David Cournapeau wrote: > >> If you want to share anything non-trivial, you will need distutils, I >> think, or something as complicated. I don't know much about R, but it >> looks like it uses autoconf when you have some code written in C or >> Fortran (and compared to autoconf, almost anything is a pleasant >> experience, including debugging someone else's Perl code). Now, someone >> could create something like a system to create a skeleton project, with >> distutils setup.py already written, etc., to make things a bit more >> automated. A page could also be set up on the wiki with instructions on how >> to do it for someone new to scipy, etc. >> >> But generally, when you want to share some code which is a bit more than >> one python file, it will require some work, and I don't see how it can >> be really different from what Robert described. > > I made a distutils package for a library with a few python files the > other day. I'd never done it before and it took me about ten minutes. > It's really not hard. You write one file, setup.py, you make sure you > have an __init__.py containing __all__, and it just works. Exactly. What may be useful for newcomers though is a small script in scipy to do the above for you.
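No such skeleton-creating script ships with scipy; a rough sketch of what one could look like (the template contents and the package name "mylib" are invented for illustration, not taken from any of the mails above) might be:

```python
import os
import tempfile

# Hypothetical sketch of the skeleton generator discussed above.
SETUP_TEMPLATE = """\
from distutils.core import setup

setup(
    name=%(name)r,
    version="0.1",
    description="FIXME: one-line description",
    packages=[%(name)r],
)
"""

def make_skeleton(root, name):
    """Create a minimal distutils project: a setup.py plus a package
    directory containing an (almost empty) __init__.py."""
    pkg_dir = os.path.join(root, name)
    os.makedirs(pkg_dir)
    setup_py = open(os.path.join(root, "setup.py"), "w")
    setup_py.write(SETUP_TEMPLATE % {"name": name})
    setup_py.close()
    init_py = open(os.path.join(pkg_dir, "__init__.py"), "w")
    init_py.write("__all__ = []\n")
    init_py.close()

root = tempfile.mkdtemp()
make_skeleton(root, "mylib")
print(sorted(os.listdir(root)))  # ['mylib', 'setup.py']
```

After that, filling in the description and real module files is all that is left before `python setup.py sdist` works.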
David From peridot.faceted at gmail.com Wed May 2 04:05:21 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 2 May 2007 04:05:21 -0400 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <46384193.3060006@ar.media.kyoto-u.ac.jp> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> <46384193.3060006@ar.media.kyoto-u.ac.jp> Message-ID: On 02/05/07, David Cournapeau wrote: > Anne Archibald wrote: > > I made a distutils package for a library with a few python files the > > other day. I'd never done it before and it took me about ten minutes. > > It's really not hard. You write one file, setup.py, you make sure you > > have an __init__.py containing __all__, and it just works. > Exactly. What may be useful for newcommers though is a small script in > scipy to do the above for you. Well, judge for yourself, at http://docs.python.org/dist/dist.html . But from my experience I would say that if you wrote a script, it would be just as much work trying to figure out where and how to put all the information that is needed as it is to figure out where to put it in a proper distutils setup. So why not just use distutils? Anne From lorenzo.isella at gmail.com Wed May 2 04:36:39 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Wed, 2 May 2007 10:36:39 +0200 Subject: [SciPy-user] Numerical Achievements in Scipy Message-ID: Hello, I tend to agree with the opinion expressed below (sharing the code is something, being required to package it up is quite a different matter). Bottom line (sorry if I am not getting this straight right now and I do not want to start a flamewar, but I think only a repository / wiki should emerge from this discussion): are we thinking about http://scipy.org/FredericPetit only for electromagnetism and http://www.scipy.org/Topical_Software for everything else? 
My point is that it would be so much better not to have important/useful code spread around the whole net. Cheers Lorenzo Message: 10 Date: Wed, 2 May 2007 08:19:56 +0200 From: Gael Varoquaux Subject: Re: [SciPy-user] Numerical Achievements in SciPy To: SciPy Users List Message-ID: <20070502061956.GB21178 at clipper.ens.fr> Content-Type: text/plain; charset=iso-8859-1 On Tue, May 01, 2007 at 03:45:17PM -0500, Robert Kern wrote: > We essentially have the CRAN model using the Python Package Index. > 1. Write your code. > 2. Package it using distutils. > 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your > tarballs and eggs and wininst installers, so you don't have to scrounge for > hosting. > 4. Write up a web page about it if the description text that PyPI allows isn't > enough for your code. Feel free to use the www.scipy.org wiki if you like. > 5. Announce your package here and other places of interest. Well, this is quite a complicated procedure if you just want to share a little bit of code. Most people don't want to learn about distutils. I think it would also be useful to have something where it is dead easy to contribute code. The wiki is indeed a good starting point, but I don't think it will scale up terribly well. Gaël From fred.jen at web.de Wed May 2 05:09:05 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 02 May 2007 11:09:05 +0200 Subject: [SciPy-user] Timeseries In-Reply-To: <200705011322.00236.pgmdevlist@gmail.com> References: <1178038654.5149.10.camel@muli> <200705011322.00236.pgmdevlist@gmail.com> Message-ID: <1178096945.5158.2.camel@muli> On Tuesday, 01 May 2007, 13:21 -0400, Pierre GM wrote: > On Tuesday 01 May 2007 12:57:34 Fred Jendrzejewski wrote: > > Fred, > I'm aware of the warnings being raised during the installation, I'll work on > that. There's obviously something wrong with the compilation/installation of > the C part of the module. The error message you see is raised by the C part.
> Please contact me off list for the moment, we'll post the solution on the > board when we've figured it out. > > > Here is some more information about the problems working with the > > timeseries: > > I have a feisty fawn amd64, > > so python 2.5 is installed and there is no problem compiling the > > maskedarrays. > > Now I installed the build-essential package and the build process works, > > but it gives me the warnings in the enclosure. I can compile it too. > > > > But in the interactive shell > > import timeseries produces: > >
> > Traceback (most recent call last):
> >   File "", line 1, in
> >   File "/usr/lib/python2.5/site-packages/timeseries/__init__.py", line 22, in
> >     import tseries
> >   File "/usr/lib/python2.5/site-packages/timeseries/tseries.py", line 961, in
> >     tsmasked = TimeSeries(masked,dates=DateArray(Date('D',1)))
> > ValueError: invalid frequency specification
> >
> > Any ideas? > > > > Fred Jendrzejewski > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user The svn is updated now and the problem is solved. It was caused by a syntax change in Python 2.5: int was changed to Py_ssize_t for better support of 64-bit machines. From fred.jen at web.de Wed May 2 05:27:39 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 02 May 2007 11:27:39 +0200 Subject: [SciPy-user] Numerical Achievements in Scipy In-Reply-To: References: Message-ID: <1178098059.5288.3.camel@muli> On Wednesday, 02 May 2007, 10:36 +0200, Lorenzo Isella wrote: > Hello, > I tend to agree with the opinion expressed below (sharing the code is > something, being required to package it up is quite a different > matter).
> Bottom line (sorry if I am not getting this straight right now and I > do not want to start a flamewar, but I think only a repository / wiki > should emerge from this discussion): are we thinking about > http://scipy.org/FredericPetit only for electromagnetism and > http://www.scipy.org/Topical_Software for everything else? My point is > that it would be so much better not to have important/useful code > spread around the whole net. > Cheers > > Lorenzo > > > > Message: 10 > Date: Wed, 2 May 2007 08:19:56 +0200 > From: Gael Varoquaux > Subject: Re: [SciPy-user] Numerical Achievements in SciPy > To: SciPy Users List > Message-ID: <20070502061956.GB21178 at clipper.ens.fr> > Content-Type: text/plain; charset=iso-8859-1 > > On Tue, May 01, 2007 at 03:45:17PM -0500, Robert Kern wrote: > > We essentially have the CRAN model using the Python Package Index. > > > 1. Write your code. > > 2. Package it using distutils. > > 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your > > tarballs and eggs and wininst installers, so you don't have to scrounge for > > hosting. > > 4. Write up a web page about it if the description text that PyPI allows isn't > > enough for your code. Feel free to use the www.scipy.org wiki if you like. > > 5. Announce your package here and other places of interest. > > Well, this is quite a complicated procedure if you just want to share a > little bit of code. Most people don't want to learn about distutils. I > think it would also be useful to have something where it is dead easy to > contribute code. The wiki is indeed a good starting point, but I don't > think it will scale up terribly well. > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user I really like the idea of better support too. Something like a portal for showing code, with a forum and documentation, would be a dream.
The problem is time, or am I wrong? How many people maintain the website and manage the content? It would be a great support for beginners to have an all-in-one website, and scipy could maybe become more popular, but the time.... From matt.halstead at auckland.ac.nz Wed May 2 06:54:05 2007 From: matt.halstead at auckland.ac.nz (Matt ) Date: Wed, 2 May 2007 22:54:05 +1200 Subject: [SciPy-user] Numerical Achievements in Scipy In-Reply-To: References: Message-ID: <60d4509f0705020354n33643017v15907542c8472e5c@mail.gmail.com> Fragments of code make sense in a Wiki, but not really entire modules above some x number of lines. I think if the idea is for people to upload packages or modules as attachments, then I would suggest creating a subversion collective with write access for contributors and public read access, and using that as the referenced repository. cheers Matt On 5/2/07, Lorenzo Isella wrote: > Hello, > I tend to agree with the opinion expressed below (sharing the code is > something, being required to package it up is quite a different > matter). > Bottom line (sorry if I am not getting this straight right now and I > do not want to start a flamewar, but I think only a repository / wiki > should emerge from this discussion): are we thinking about > http://scipy.org/FredericPetit only for electromagnetism and > http://www.scipy.org/Topical_Software for everything else? My point is > that it would be so much better not to have important/useful code > spread around the whole net. > Cheers > > Lorenzo > > > > Message: 10 > Date: Wed, 2 May 2007 08:19:56 +0200 > From: Gael Varoquaux > Subject: Re: [SciPy-user] Numerical Achievements in SciPy > To: SciPy Users List > Message-ID: <20070502061956.GB21178 at clipper.ens.fr> > Content-Type: text/plain; charset=iso-8859-1 > > On Tue, May 01, 2007 at 03:45:17PM -0500, Robert Kern wrote: > > We essentially have the CRAN model using the Python Package Index. > > > 1. Write your code. > > 2. Package it using distutils.
> > 3. Submit the package to the PyPI http://www.python.org/pypi/. It will host your > > tarballs and eggs and wininst installers, so you don't have to scrounge for > > hosting. > > 4. Write up a web page about it if the description text that PyPI allows isn't > > enough for your code. Feel free to use the www.scipy.org wiki if you like. > > 5. Announce your package here and other places of interest. > > Well, this is quite a complicated procedure if you just want to share a > little bit of code. Most people don't want to learn about distutils. I > think it would also be useful to have something where it is dead easy to > contribute code. The wiki is indeed a good starting point, but I don't > think it will scale up terribly well. > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Wed May 2 07:10:50 2007 From: fredmfp at gmail.com (fred) Date: Wed, 02 May 2007 13:10:50 +0200 Subject: [SciPy-user] Numerical Achievements in Scipy In-Reply-To: References: Message-ID: <463871BA.1080303@gmail.com> Lorenzo Isella a écrit : > Hello, > I tend to agree with the opinion expressed below (sharing the code is > something, being required to package it up is quite a different > matter). > Bottom line (sorry if I am not getting this straight right now and I > do not want to start a flamewar, but I think only a repository / wiki > should emerge from this discussion): are we thinking about > http://scipy.org/FredericPetit only for electromagnetism and > My 2 cents. I did not want to put any python code for electromagnetic cavities on the wiki, because it would be too... boring and cumbersome, I guess. So I just put pictures. (and I even wonder if it is interesting for somebody; I just did it for fun ;-) I think that empython.org is more appropriate to put this stuff... For spherical harmonics, the python code is rather short, so I put it.
But I'm now wondering whether it is worth putting up x lines of python code, even if it is "short"... Cheers, -- http://scipy.org/FredericPetit From v-nijs at kellogg.northwestern.edu Wed May 2 09:03:27 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Wed, 02 May 2007 08:03:27 -0500 Subject: [SciPy-user] Numerical Achievements in Scipy Message-ID: Perhaps scipy could use something like the tips and scripts posting set-up on the vim site (see links below). On this site tips/scripts get posted/updated to a database. There is an interface to search and browse for scripts. You can also list tips/scripts by number of views/downloads or ratings. http://www.vim.org/tips/index.php http://www.vim.org/scripts/index.php Vincent On 5/2/07 6:10 AM, "fred" wrote: > Lorenzo Isella a écrit : >> Hello, >> I tend to agree with the opinion expressed below (sharing the code is >> something, being required to package it up is quite a different >> matter). >> Bottom line (sorry if I am not getting this straight right now and I >> do not want to start a flamewar, but I think only a repository / wiki >> should emerge from this discussion): are we thinking about >> http://scipy.org/FredericPetit only for electromagnetism and >> > My 2 cents. > I did not want to put any python code for electromagnetic cavities on > the wiki, > because it would be too... boring and cumbersome, I guess. So I just put > pictures. > (and I even wonder if it is interesting for somebody; I just did it for > fun ;-) > I think that empython.org is more appropriate to put this stuff... > > For spherical harmonics, the python code is rather short, so I put it. > But I'm now wondering whether it is worth putting up x lines of > python code, > even if it is "short"...
> > Cheers, From jdc at uwo.ca Wed May 2 10:13:07 2007 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 02 May 2007 10:13:07 -0400 Subject: [SciPy-user] Numerical Achievements in Scipy References: Message-ID: <87hcqvpb4c.fsf@uwo.ca> The python cookbook might be a reasonable place for short snippets that aren't in need of distutils: http://aspn.activestate.com/ASPN/Python/Cookbook/ Dan From fred.jen at web.de Wed May 2 11:52:28 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 02 May 2007 17:52:28 +0200 Subject: [SciPy-user] TimeSeries Message-ID: <1178121148.11017.6.camel@muli> The following causes the same problems:

import numpy as N
import maskedarray as MA
import datetime
import timeseries as TS

rows=[ u'1991-01-02', u'1990-12-27', u'1990-12-17',
       u'1990-12-10', u'1990-12-03', u'1990-11-26']
daten=TS.date_array(dlist=rows)

The output is:

File "times.py", line 11, in
  daten=TS.date_array(dlist=rows)
File "timeseries/tdates.py", line 554, in date_array
File "timeseries/tdates.py", line 533, in _listparser
UnboundLocalError: local variable 'dates' referenced before assignment

I think it is because dlist.dtype.kind gives back 'U', but creating a normal Date object works: D=TS.Date(freq='W', string=rows[0]) From pgmdevlist at gmail.com Wed May 2 12:13:31 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 2 May 2007 12:13:31 -0400 Subject: [SciPy-user] TimeSeries In-Reply-To: <1178121148.11017.6.camel@muli> References: <1178121148.11017.6.camel@muli> Message-ID: <200705021213.31322.pgmdevlist@gmail.com> On Wednesday 02 May 2007 11:52:28 Fred Jendrzejewski wrote: > The following causes the same problems: > I think it is because dlist.dtype.kind gives back 'U' > > but creating a normal Date object works > D=TS.Date(freq='W', string=rows[0]) Fred, Thanks a lot. Series w/ undefined frequencies are always tricky to work with, as David and yourself had the misfortune to realize. I'm glad you were able to find a workaround.
However, the bug comes from the fact that you used unicode characters. The array you get then is of kind 'U', not of kind 'S' (string), as expected. Before I update the SVN, you can apply the following fix: in tdates.py, change line 494 to

if dlist.dtype.kind in 'SU':

That solves your problem. Your feedback is invaluable. Please do not hesitate to keep on posting your comments/error logs. I'll check where the problems are and will address them as soon as possible. This week however, please expect a bit of delay, as I'm very busy with some current projects. From fredmfp at gmail.com Wed May 2 14:56:46 2007 From: fredmfp at gmail.com (fred) Date: Wed, 02 May 2007 20:56:46 +0200 Subject: [SciPy-user] filling array without loop... In-Reply-To: References: <462B8DF7.9010500@gmail.com> <462BADC5.4090803@gmail.com> <462BBF2A.7000103@gmail.com> Message-ID: <4638DEEE.8030508@gmail.com> On 23/04/07, fred wrote: > Anne Archibald a écrit : >> Uh, maybe I'm confused - > Ok, so let me explain a little more... Hi, Does nobody have any idea how to do this dot product without loops? Ok, I successfully computed it with weave and f2py. That's fast, yes. But just for curiosity and comparison, I would like to have the "pure python" version, if possible ;-) Any suggestion? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From ggellner at uoguelph.ca Wed May 2 15:24:09 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 2 May 2007 15:24:09 -0400 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> <46384193.3060006@ar.media.kyoto-u.ac.jp> Message-ID: <20070502192409.GA22703@encolpuis> I haven't really used distutils on its own; rather, I like setuptools (http://peak.telecommunity.com/DevCenter/setuptools).
Using the find_packages function, it makes it trivial to create easy installers for simple projects like I mostly write (I use it to give my supervisor access to what I am doing, and it works like a charm). I also really like the develop option, so that as I am making some library, I can use it while still working in the source area. If there aren't any problems with this (I hear some grumblings about setuptools, though it seems to be mostly about ez_setup, which I never use, as I hate programs installing themselves) then the template becomes:

from setuptools import setup, find_packages

setup(name = "",
      version = "",
      packages = find_packages())

And that is all! (I have never gone further than this, as I have very, very simple needs.) Using the above with python setup.py develop --prefix=~/lib/python is, like I said, a beautiful thing. It makes setuptools worth using, even if I don't distribute to anybody but myself. Furthermore, I read that setuptools will automatically send things to PyPI as well. Finally, are people talking about distributing specialty libraries or simply the scripts that solve particular scientific problems (like what I use for my thesis . . .)? If it is the latter, then this seems different from what I understand CRAN provides. That being said, if anyone sets up an area (on the wiki or otherwise) I would be happy to add some scripts I have written for theoretical ecology (reproducing figures in some papers, etc). Along these lines we could potentially use the Python Cookbook site (http://aspn.activestate.com/ASPN/Python/Cookbook/) in the Image and scientific section. That way people can write comments on other people's code etc, so that it becomes more of a discussion of best practices, not just a final document of what someone has done.
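As a sketch of what find_packages does (this example is not from the original mail; the throwaway layout below is invented): it returns the dotted name of every directory under the given root that contains an __init__.py, which is exactly what the packages= argument of setup() wants:

```python
import os
import tempfile

from setuptools import find_packages

# Throwaway layout: pkg/ and pkg/sub/ are real packages because they
# contain an __init__.py; data/ is just a plain directory.
root = tempfile.mkdtemp()
for d in ("pkg", os.path.join("pkg", "sub"), "data"):
    os.makedirs(os.path.join(root, d))
for d in ("pkg", os.path.join("pkg", "sub")):
    open(os.path.join(root, d, "__init__.py"), "w").close()

# Directories without an __init__.py are skipped.
print(sorted(find_packages(where=root)))  # ['pkg', 'pkg.sub']
```

So the two-line setup() template above keeps working unchanged as subpackages are added to the source tree.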
Gabriel On Wed, May 02, 2007 at 04:05:21AM -0400, Anne Archibald wrote: > On 02/05/07, David Cournapeau wrote: > > Anne Archibald wrote: > > > I made a distutils package for a library with a few python files the > > > other day. I'd never done it before and it took me about ten minutes. > > > It's really not hard. You write one file, setup.py, you make sure you > > > have an __init__.py containing __all__, and it just works. > > Exactly. What may be useful for newcomers though is a small script in > > scipy to do the above for you. > > Well, judge for yourself, at http://docs.python.org/dist/dist.html . > > But from my experience I would say that if you wrote a script, it > would be just as much work trying to figure out where and how to put > all the information that is needed as it is to figure out where to put > it in a proper distutils setup. So why not just use distutils? > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fred.jen at web.de Wed May 2 16:16:49 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 02 May 2007 22:16:49 +0200 Subject: [SciPy-user] TimeSeries Message-ID: <1178137009.12768.2.camel@muli> Hello, if I plot with this code, the area under the curve is filled. It is almost the code of the tutorial. But why?

dates=TS.date_array(dlist=nrows[:,0],
                    freq='W').asfreq('BUSINESS')
raw_series=TS.time_series(price,

Message-ID: Fred Jendrzejewski web.de> writes:
> dates=TS.date_array(dlist=nrows[:,0],
> freq='W').asfreq('BUSINESS')
> raw_series=TS.time_series(price,
> series = TS.fill_missing_dates(raw_series)
> fig = TPL.tsfigure()
> fsp = fig.add_tsplot(111)
> fsp.tsplot(series, ls='--')
> fsp.format_dateaxis()
> dates = series.dates
> quarter_starts = dates[dates.quarter != (dates-1).quarter]
> fsp.set_xticks(quarter_starts.tovalue())
> fsp.set_xlim(int(series.start_date), int(series.end_date))
> pylab.show()

Are you referring to the example in section 7.2.1? I just tried running that example exactly as it is, and it works fine for me. What version of matplotlib are you using? Can you post the full script you ran to generate the plot? - Matt From cclarke at chrisdev.com Wed May 2 18:12:15 2007 From: cclarke at chrisdev.com (Christopher Clarke) Date: Wed, 2 May 2007 18:12:15 -0400 Subject: [SciPy-user] Error with examples/tagbold.spi? Message-ID: Hi, The documentation refers to _content as the stuff enclosed in the tag; the example on the website is

------------
[[.begin name=bold buffers=True]]
[[=_contents]]
[[.end]]
------------

This fails, while _content works. Also, calling the tag by itself as in the example does not make much sense. The example should include the tag in a .spy:

----------------------------------
[[.taglib as='bb' from='examples/tagbold.spi']]
this is a test
----------------------------------

Regards Chris -------------- next part -------------- An HTML attachment was scrubbed...
URL: From strawman at astraw.com Wed May 2 19:23:49 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 02 May 2007 16:23:49 -0700 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <20070502192409.GA22703@encolpuis> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> <46384193.3060006@ar.media.kyoto-u.ac.jp> <20070502192409.GA22703@encolpuis> Message-ID: <46391D85.2090303@astraw.com> Gabriel Gellner wrote: > Finally, are people talking about distributing specialty libraries or > simply the scripts that solve particular scientific problems (like > what I use for my thesis . . .)? If it is the latter, then this seems > different from what I understand CRAN provides. That being said if > anyone sets up an area (on the wiki or otherwise) I would be happy to > add some scripts I have written for theoretical ecology (reproducing > figures in some papers, etc). > I'm not sure that everyone participating in this thread is aware of the existence of http://www.scipy.org/Cookbook . Please, feel free to add your sample code, short scripts, longer scripts, or whatever there! This is not to exclude the other suggestions, just to make sure that people are aware of that page. -Andrew (Of course, with my luck, I'm getting the following error at the moment on that page.) The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
Apache/2.0.54 (Fedora) Server at www.scipy.org Port 80 From robert.kern at gmail.com Wed May 2 19:28:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 May 2007 18:28:02 -0500 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <46391D85.2090303@astraw.com> References: <463795BE.6070805@ucsf.edu> <4637A6DD.4060707@gmail.com> <20070502061956.GB21178@clipper.ens.fr> <463832E3.1030206@ar.media.kyoto-u.ac.jp> <46384193.3060006@ar.media.kyoto-u.ac.jp> <20070502192409.GA22703@encolpuis> <46391D85.2090303@astraw.com> Message-ID: <46391E82.3080201@gmail.com> Andrew Straw wrote: > (Of course, with my luck, I'm getting the following error at the moment > on that page.) It's working for me both from Enthought's office and UGCS. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From v-nijs at kellogg.northwestern.edu Wed May 2 21:15:38 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Wed, 02 May 2007 20:15:38 -0500 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: <46391D85.2090303@astraw.com> Message-ID: The cookbook on Scipy.org is indeed very nice. In its current form it is a great alternative to a tutorial. I wonder, however, whether it will remain easy to use if many people start to add information to these pages. Also, it may be intimidating to some to edit the front page of the cookbook and move others' links around. Uploading a script, with a few lines of comments and installation instructions, to a database that can then be searched seems a more practical solution in the long run. Vincent On 5/2/07 6:23 PM, "Andrew Straw" wrote: > Gabriel Gellner wrote: >> Finally, are people talking about distributing specialty libraries or >> simply the scripts that solve particular scientific problems (like >> what I use for my thesis . . .)?
If it is the latter, then this seems >> different from what I understand CRAN provides. That being said if >> anyone sets up an area (on the wiki or otherwise) I would be happy to >> add some scripts I have written for theoretical ecology (reproducing >> figures in some papers, etc). >> > I'm not sure that everyone participating in this thread is aware of the > existence of http://www.scipy.org/Cookbook . Please, feel free to add > your sample code, short scripts, longer scripts, or whatever there! This > is not to exclude the other suggestions, just to make sure that people > are aware of that page. > > -Andrew > > (Of course, with my luck, I'm getting the following error at the moment > on that page.) > > The server encountered an internal error or misconfiguration and was > unable to complete your request. > > Please contact the server administrator, root at localhost and inform them > of the time the error occurred, and anything you might have done that > may have caused the error. > > More information about this error may be available in the server error log. > Apache/2.0.54 (Fedora) Server at www.scipy.org Port 80 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Vincent R. Nijs Assistant Professor of Marketing Kellogg School of Management, Northwestern University 2001 Sheridan Road, Evanston, IL 60208-2001 Phone: +1-847-491-4574 Fax: +1-847-491-2498 E-mail: v-nijs at kellogg.northwestern.edu Skype: vincentnijs From robert.kern at gmail.com Wed May 2 22:00:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 May 2007 21:00:54 -0500 Subject: [SciPy-user] Numerical Achievements in SciPy In-Reply-To: References: Message-ID: <46394256.3010003@gmail.com> Vincent Nijs wrote: > The cookbook on Scipy.org is indeed very nice. In its current form it is a > great alternative to a tutorial.
I wonder, however, whether it will > remain easy to use if many people start to add information to these pages. > > Also it may be intimidating to some to edit the front page of the cookbook > and move others' links around. Uploading a script and adding a few lines of > comments and installation instructions for it to a database that can then be > searched seems a more practical solution in the long run. That's essentially what the Wiki is. If you name the page Cookbook/SomethingOrOther, it will automatically show up on the Cookbook page. No muss, no fuss. But if you do really want something like the interface of the ASPN Python Cookbook site, then please use that site. Personally, I'm not going to bother trying to reimplement such a thing just for our use unless the ASPN Cookbook has so many scipy-specific recipes that it becomes hard to use. I'm not convinced that we currently have enough volume to justify making our own system. So please prove me wrong by writing as many ASPN recipes or scipy.org/Cookbook pages as you can! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fred.jen at web.de Thu May 3 02:55:26 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Thu, 03 May 2007 08:55:26 +0200 Subject: [SciPy-user] TimeSeries In-Reply-To: References: <1178137009.12768.2.camel@muli> Message-ID: <1178175327.5462.2.camel@muli> On Wednesday, 02 May 2007, 20:41 +0000, Matt Knox wrote: > Fred Jendrzejewski web.de> writes: > > > > > Hello, if I plot with this code, the area under the curve is filled. It is > > almost the code of the tutorial. But why?
> > > > dates=TS.date_array(dlist=nrows[:,0], > > freq='W').asfreq('BUSINESS') > > raw_series=TS.time_series(price, > > > series = TS.fill_missing_dates(raw_series) > > fig = TPL.tsfigure() > > fsp = fig.add_tsplot(111) > > fsp.tsplot(series, ls='--') > > fsp.format_dateaxis() > > dates = series.dates > > quarter_starts = dates[dates.quarter != (dates-1).quarter] > > fsp.set_xticks(quarter_starts.tovalue()) > > fsp.set_xlim(int(series.start_date), int(series.end_date)) > > pylab.show() > > > > > Are you referring to the example in section 7.2.1 ? I just tried running that > example exactly as it is, and it works fine for me. What version of matplotlib > are you using? Can you post the full script you ran to generate the plot? > > - Matt > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user import pysqlite2.dbapi2 import pylab from numpy import * import datetime from matplotlib.dates import YearLocator, MonthLocator, DateFormatter import timeseries as TS from timeseries import plotlib as TPL rows=[(u'2007-04-23', u'7348.01'),(u'2007-04-16', u'7229.06'),(u'2007-04-10', u'7102.47'),(u'2007-04-02', u'6911.13'), (u'2007-03-26', u'6899.33'),(u'2007-03-19', u'6622.76'), (u'2007-03-12', u'6727.57'),(u'2007-03-05', u'6510.95'), (u'2007-02-26', u'6992.12'),(u'2007-02-19', u'6967.53'), (u'2007-02-12', u'6892.03'),(u'2007-02-05', u'6878.67'), (u'2007-01-29', u'6702.45'),(u'2007-01-22', u'6753.73'), (u'2007-01-15', u'6719.53'),(u'2007-01-08', u'6603.55'), (u'2007-01-02', u'6614.73'),(u'2006-12-27', u'6525.99'), (u'2006-12-18', u'6588.25'),(u'2006-12-11', u'6449.76'), (u'2006-12-04', u'6258.91'),(u'2006-11-27', u'6400.49'), (u'2006-11-20', u'6394.08'),(u'2006-11-13', u'6350.65'), (u'2006-11-06', u'6249.92'),(u'2006-10-30', u'6230.86'), (u'2006-10-23', u'6216.46'),(u'2006-10-16', u'6178.36'), (u'2006-10-09', u'6064.88'),(u'2006-10-02', u'6019.34'), (u'2006-09-25', 
u'5898.90'),(u'2006-09-18', u'5932.51'), (u'2006-09-11', u'5774.31'),(u'2006-09-04', u'5890.01'), (u'2006-08-28', u'5808.37'),(u'2006-08-21', u'5804.66'), (u'2006-08-14', u'5661.07'),(u'2006-08-07', u'5685.46'), (u'2006-07-31', u'5704.56'),(u'2006-07-24', u'5460.12'), (u'2006-07-17', u'5437.08'),(u'2006-07-10', u'5678.11'), (u'2006-07-03', u'5688.00'),(u'2006-06-26', u'5542.74'), (u'2006-06-19', u'5388.83'),(u'2006-06-12', u'5458.38'), (u'2006-06-05', u'5685.56'),(u'2006-05-29', u'5782.49'), (u'2006-05-22', u'5653.85'),(u'2006-05-15', u'5893.00'), ] nrows=array(rows) print nrows price=nrows[:,1].astype(float) dates=TS.date_array(dlist=nrows[:,0], freq='W').asfreq('BUSINESS') raw_series=TS.time_series(price, dates) series = TS.fill_missing_dates(raw_series) fig = TPL.tsfigure() fsp = fig.add_tsplot(111) fsp.tsplot(series, ls='-') fsp.format_dateaxis() dates = series.dates quarter_starts = dates[dates.quarter != (dates-1).quarter] fsp.set_xticks(quarter_starts.tovalue()) fsp.set_xlim(int(series.start_date), int(series.end_date)) pylab.show() He is always connecting the xaxis with a point of the graph. 
My version of matplotlib is: 0.87.7-0.3ubuntu1 From mattknox_ca at hotmail.com Thu May 3 10:57:43 2007 From: mattknox_ca at hotmail.com (Matt Knox) Date: Thu, 3 May 2007 14:57:43 +0000 (UTC) Subject: [SciPy-user] TimeSeries References: <1178137009.12768.2.camel@muli> <1178175327.5462.2.camel@muli> Message-ID: > import pysqlite2.dbapi2 > import pylab > from numpy import * > import datetime > from matplotlib.dates import YearLocator, MonthLocator, DateFormatter > import timeseries as TS > from timeseries import plotlib as TPL > > rows=[(u'2007-04-23', u'7348.01'),(u'2007-04-16', > u'7229.06'),(u'2007-04-10', u'7102.47'),(u'2007-04-02', u'6911.13'), > (u'2007-03-26', u'6899.33'),(u'2007-03-19', u'6622.76'), > (u'2007-03-12', u'6727.57'),(u'2007-03-05', u'6510.95'), > (u'2007-02-26', u'6992.12'),(u'2007-02-19', u'6967.53'), > (u'2007-02-12', u'6892.03'),(u'2007-02-05', u'6878.67'), > (u'2007-01-29', u'6702.45'),(u'2007-01-22', u'6753.73'), > (u'2007-01-15', u'6719.53'),(u'2007-01-08', u'6603.55'), > (u'2007-01-02', u'6614.73'),(u'2006-12-27', u'6525.99'), > (u'2006-12-18', u'6588.25'),(u'2006-12-11', u'6449.76'), > (u'2006-12-04', u'6258.91'),(u'2006-11-27', u'6400.49'), > (u'2006-11-20', u'6394.08'),(u'2006-11-13', u'6350.65'), > (u'2006-11-06', u'6249.92'),(u'2006-10-30', u'6230.86'), > (u'2006-10-23', u'6216.46'),(u'2006-10-16', u'6178.36'), > (u'2006-10-09', u'6064.88'),(u'2006-10-02', u'6019.34'), > (u'2006-09-25', u'5898.90'),(u'2006-09-18', u'5932.51'), > (u'2006-09-11', u'5774.31'),(u'2006-09-04', u'5890.01'), > (u'2006-08-28', u'5808.37'),(u'2006-08-21', u'5804.66'), > (u'2006-08-14', u'5661.07'),(u'2006-08-07', u'5685.46'), > (u'2006-07-31', u'5704.56'),(u'2006-07-24', u'5460.12'), > (u'2006-07-17', u'5437.08'),(u'2006-07-10', u'5678.11'), > (u'2006-07-03', u'5688.00'),(u'2006-06-26', u'5542.74'), > (u'2006-06-19', u'5388.83'),(u'2006-06-12', u'5458.38'), > (u'2006-06-05', u'5685.56'),(u'2006-05-29', u'5782.49'), > (u'2006-05-22', u'5653.85'),(u'2006-05-15', 
u'5893.00'), > ] > nrows=array(rows) > print nrows > price=nrows[:,1].astype(float) > dates=TS.date_array(dlist=nrows[:,0], freq='W').asfreq('BUSINESS') > raw_series=TS.time_series(price, dates) > series = TS.fill_missing_dates(raw_series) > fig = TPL.tsfigure() > fsp = fig.add_tsplot(111) > fsp.tsplot(series, ls='-') > fsp.format_dateaxis() > dates = series.dates > quarter_starts = dates[dates.quarter != (dates-1).quarter] > fsp.set_xticks(quarter_starts.tovalue()) > fsp.set_xlim(int(series.start_date), int(series.end_date)) > pylab.show() > > He is always connecting the xaxis with a point of the graph. My version > of matplotlib is: > 0.87.7-0.3ubuntu1 > Hi Fred, first thing you should check is that you have correctly applied the change outlined in the "WARNING" at the beginning of the plotting section of the TimeSeriesPackage wiki. You have to tell matplotlib to use the version of maskedarray from the scipy sandbox. If things still look strange, I would suggest upgrading to the latest version of matplotlib just to eliminate that as a possible source of problems. Secondly, using fill_missing_dates is not really appropriate in this case. fill_missing_dates inserts masked values in the series wherever the date portion of the TimeSeries is not continuous. For example, let's say I have the following TimeSeries at daily frequency: 15-mar-2007, 55.5 17-mar-2007, 45.1 19-mar-2007, 46.2 after applying fill_missing_dates, the result is: 15-mar-2007, 55.5 16-mar-2007, -- 17-mar-2007, 45.1 18-mar-2007, -- 19-mar-2007, 46.2 (as a side note, the Report class is useful for inspecting such things. try "TS.Report(series)()") So if you try to plot a line for the series that had fill_missing_dates applied to it, you won't actually see anything since there are no consecutive data points to plot. If you plot the original series in your example (raw_series), matplotlib will actually perform linear interpolation between the points and that may look quite acceptable for your purposes.
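Matt's before/after example can be sketched in a few lines of plain Python (a hypothetical stand-in for illustration only, not the actual sandbox timeseries API), with None playing the role of the masked value:

```python
import datetime

def fill_missing_dates_sketch(series):
    """series: sorted list of (date, value) pairs at daily frequency.
    Return a continuous daily series with None marking missing days."""
    values = dict(series)
    day = datetime.timedelta(days=1)
    filled, current = [], series[0][0]
    while current <= series[-1][0]:
        filled.append((current, values.get(current)))  # None acts as the mask
        current += day
    return filled

raw = [(datetime.date(2007, 3, 15), 55.5),
       (datetime.date(2007, 3, 17), 45.1),
       (datetime.date(2007, 3, 19), 46.2)]
filled = fill_missing_dates_sketch(raw)
# 16-mar and 18-mar are now present but masked, so no two consecutive
# points are unmasked and a line plot draws no segments.
```

Plotted as a line, such a series has no two consecutive unmasked points, which is why nothing shows up; filling the masked gaps (e.g. by forward-filling) restores drawable segments.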
Another possibility is to use fill_missing_dates and then use the forward_fill function (or other interpolation methods) from the interpolate sub-module to fill in the resulting masked values. Let me know if that works for you. Also, I just committed to SVN a fix to recognize unicode strings (which I think Pierre alluded to earlier), but I assume you have already fixed that in the code for your own copy. One other thing I should point out is that when you specify your DateArray/TimeSeries as weekly frequency ('W'), the data points are assumed to be on Sundays. So in your example, all your dates get converted to Weekly-Sunday dates (so 2006-05-15 would turn into the week ending 2006-05-21), and then when you do ".asfreq("BUSINESS")" the default relation for asfreq happens to be "BEFORE" so they get converted back to Monday dates. But I would suggest just specifying the original DateArray as business frequency and skipping the call to asfreq. I'm actually going to change the default relation for the DateArray asfreq method to "AFTER" to match up with what is used for the Date class and that would impact your code here. As you may have noticed, weekly frequencies aren't properly supported in the plotting code yet (I may get around to that in the near future). Also, there are 7 different weekly frequencies you can use if you want more explicit control over what day the week ends on. There is a section on the wiki about that. - Matt From gnchen at cortechs.net Thu May 3 12:37:32 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 3 May 2007 09:37:32 -0700 Subject: [SciPy-user] ndimage crash on Rocks 4.0 References: <3F34907A-0112-4831-82C8-C3A81F6E4060@cortechs.net> Message-ID: <1F83138B-CD88-4240-B603-2D2D477A4591@cortechs.net> Hi! All, anyone found out what went wrong there?? 
Gen Begin forwarded message: > From: Gennan Chen > Date: April 30, 2007 5:45:19 PM PDT > To: SciPy Users List > Subject: Re: [SciPy-user] ndimage crash on Rocks 4.0 > > Here is the debug info: > > In [4]: scipy.ndimage.test(verbosity=2) > Found 398 tests for scipy.ndimage > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module from '...ges/scipy/ndimage/_nd_image.so'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module from '.../scipy/ndimage/_ni_support.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module '...ages/scipy/ndimage/filters.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module '...ages/scipy/ndimage/fourier.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module '...ackages/scipy/ndimage/info.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module 'scipy.ndimage.interpolation' from '...cipy/ndimage/ > interpolation.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module from '...scipy/ndimage/measurements.pyc'> > Warning: No test file found in /usr/lib/python2.3/site-packages/ > scipy/ndimage/tests for module from '...s/scipy/ndimage/morphology.pyc'> > Found 0 tests for __main__ > affine_transform 1 ... ok > affine transform 2 ... ok > affine transform 3 ... ok > affine transform 4 ... ok > affine transform 5 ... ok > affine transform 6 ... ok > affine transform 7 ... ok > affine transform 8 ... ok > affine transform 9 ... ok > affine transform 10 ... ok > affine transform 11 ... ok > affine transform 12 ... ok > affine transform 13 ... ok > affine transform 14 ... ok > affine transform 15 ... ok > affine transform 16 ... ok > affine transform 17 ... ok > affine transform 18 ... ok > affine transform 19 ... 
ok > affine transform 20 ... ok > affine transform 21 ... ok > binary closing 1 ... ok > binary closing 2 ... ok > binary dilation 1 ... ok > binary dilation 2 ... ok > binary dilation 3 ... ok > binary dilation 4 ... ok > binary dilation 5 ... ok > binary dilation 6 ... ok > binary dilation 7 ... ok > binary dilation 8 ... ok > binary dilation 9 ... ok > binary dilation 10 ... ok > binary dilation 11 ... ok > binary dilation 12 ... ok > binary dilation 13 ... ok > binary dilation 14 ... ok > binary dilation 15 ... ok > binary dilation 16 ... ok > binary dilation 17 ... ok > binary dilation 18 ... ok > binary dilation 19 ... ok > binary dilation 20 ... ok > binary dilation 21 ... ok > binary dilation 22 ... ok > binary dilation 23 ... ok > binary dilation 24 ... ok > binary dilation 25 ... ok > binary dilation 26 ... ok > binary dilation 27 ... ok > binary dilation 28 ... ok > binary dilation 29 ... ok > binary dilation 30 ... ok > binary dilation 31 ... ok > binary dilation 32 ... ok > binary dilation 33 ... ok > binary dilation 34 ... ok > binary dilation 35 ... ok > binary erosion 1 ... ok > binary erosion 2 ... ok > binary erosion 3 ... ok > binary erosion 4 ... ok > binary erosion 5 ... ok > binary erosion 6 ... ok > binary erosion 7 ... ok > binary erosion 8 ... ok > binary erosion 9 ... ok > binary erosion 10 ... ok > binary erosion 11 ... ok > binary erosion 12 ... ok > binary erosion 13 ... ok > binary erosion 14 ... ok > binary erosion 15 ... ok > binary erosion 16 ... ok > binary erosion 17 ... ok > binary erosion 18 ... ok > binary erosion 19 ... ok > binary erosion 20 ... ok > binary erosion 21 ... ok > binary erosion 22 ... ok > binary erosion 23 ... ok > binary erosion 24 ... ok > binary erosion 25 ... ok > binary erosion 26 ... ok > binary erosion 27 ... ok > binary erosion 28 ... ok > binary erosion 29 ... ok > binary erosion 30 ... ok > binary erosion 31 ... ok > binary erosion 32 ... ok > binary erosion 33 ... ok > binary erosion 34 ... 
ok > binary erosion 35 ... ok > binary erosion 36 ... ok > binary fill holes 1 ... ok > binary fill holes 2 ... ok > binary fill holes 3 ... ok > binary opening 1 ... ok > binary opening 2 ... ok > binary propagation 1 ... ok > binary propagation 2 ... ok > black tophat 1 ... ok > black tophat 2 ... ok > boundary modes/usr/lib/python2.3/site-packages/scipy/ndimage/ > interpolation.py:41: UserWarning: Mode "reflect" may yield > incorrect results on boundaries. Please use "mirror" instead. > warnings.warn('Mode "reflect" may yield incorrect results on ' > ... ok > center of mass 1 ... ok > center of mass 2 ... ok > center of mass 3 ... ok > center of mass 4 ... ok > center of mass 5 ... ok > center of mass 6 ... ok > center of mass 7 ... ok > center of mass 8 ... ok > center of mass 9 ... ok > correlation 1 ... ok > correlation 2 ... ok > correlation 3 ... ok > correlation 4 ... ok > correlation 5 ... ok > correlation 6 ... ok > correlation 7 ... ok > correlation 8 ... ok > correlation 9 ... ok > correlation 10 ... ok > correlation 11 ... ok > correlation 12 ... ok > correlation 13 ... ok > correlation 14 ... ok > correlation 15 ... ok > correlation 16 ... ok > correlation 17 ... ok > correlation 18 ... ok > correlation 19 ... ok > correlation 20 ... ok > correlation 21 ... ok > correlation 22 ... ok > correlation 23 ... ok > correlation 24 ... ok > correlation 25 ... ok > brute force distance transform 1 ... ok > brute force distance transform 2 ... ok > brute force distance transform 3 ... ok > brute force distance transform 4 ... ok > brute force distance transform 5 ... ok > brute force distance transform 6 ... ok > chamfer type distance transform 1 ... ok > chamfer type distance transform 2 ... ok > chamfer type distance transform 3 ... ok > euclidean distance transform 1 ... ok > euclidean distance transform 2 ... ok > euclidean distance transform 3 ... ok > euclidean distance transform 4 ... ok > line extension 1 ... ok > line extension 2 ... 
ok > line extension 3 ... ok > line extension 4 ... ok > line extension 5 ... ok > line extension 6 ... ok > line extension 7 ... ok > line extension 8 ... ok > line extension 9 ... ok > line extension 10 ... ok > extrema 1 ... ok > extrema 2 ... ok > extrema 3 ... ok > extrema 4 ... ok > find_objects 1 ... ok > find_objects 2 ... ok > find_objects 3 ... ok > find_objects 4 ... ok > find_objects 5 ... ok > find_objects 6 ... ok > find_objects 7 ... ok > find_objects 8 ... ok > find_objects 9 ... ok > ellipsoid fourier filter for complex transforms 1 ... ok > ellipsoid fourier filter for real transforms 1 ... ok > gaussian fourier filter for complex transforms 1 ... ok > gaussian fourier filter for real transforms 1 ... ok > shift filter for complex transforms 1 ... ok > shift filter for real transforms 1 ... ok > uniform fourier filter for complex transforms 1 ... ok > uniform fourier filter for real transforms 1 ... ok > gaussian filter 1 ... ok > gaussian filter 2 ... ok > gaussian filter 3 ... ok > gaussian filter 4 ... ok > gaussian filter 5 ... ok > gaussian filter 6 ... ok > gaussian gradient magnitude filter 1 ... ok > gaussian gradient magnitude filter 2 ... ok > gaussian laplace filter 1 ... ok > gaussian laplace filter 2 ... ok > generation of a binary structure 1 ... ok > generation of a binary structure 2 ... ok > generation of a binary structure 3 ... ok > generation of a binary structure 4 ... ok > generic filter 1Segmentation fault > > And Stefan is right. The problem is generic_filter. > > Gen > > On Apr 30, 2007, at 4:09 PM, Robert Kern wrote: > >> Gennan Chen wrote: >>> Hi! All, >>> >>> scipy.ndimage gave me seg fault on Rocks 4.0, python 2.3.4. >>> Anyone has a >>> solution? >> >> Can you rerun the tests with scipy.ndimage.test(verbosity=2)? That >> will put the >> test framework into verbose mode and print out the name of the >> test before it's >> run. That way, we can know which test failed. 
>> >> A gdb backtrace would also be helpful if you know how to get one. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a >> harmless enigma >> that is made terrible by our own mad attempt to interpret it as >> though it had >> an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Thu May 3 13:42:56 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 May 2007 13:42:56 -0400 Subject: [SciPy-user] TimeSeries In-Reply-To: References: <1178137009.12768.2.camel@muli> <1178175327.5462.2.camel@muli> Message-ID: <200705031342.58429.pgmdevlist@gmail.com> In addition to Matt's answer: Note that I had a strange result myself when trying to plot your graph: the masked data were no longer masked (which explained the lines at y=0). Turned out that there was something wrong in my version of matplotlib: in matplotlib.lines, on lines 320 and 321: x = asarray(self.convert_xunits(self._xorig), Float) y = asarray(self.convert_yunits(self._yorig), Float) should be x = ma.asarray(self.convert_xunits(self._xorig), Float) y = ma.asarray(self.convert_yunits(self._yorig), Float) (which has been corrected on SVN). The initial two lines were converting a maskedarray to a regular ndarray, therefore dropping the mask. So, please update matplotlib.lines and let us know how it goes. From t_crane at mrl.uiuc.edu Thu May 3 16:01:29 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Thu, 3 May 2007 15:01:29 -0500 Subject: [SciPy-user] trying with numpy when trying to install PyDSTool? Message-ID: <9EADC1E53F9C70479BF6559370369114142EBB@mrlnt6.mrl.uiuc.edu> Hi, OK, so I'm trying to install PyDSTool in order to use their ODE solver which has event detection.
However, when I import it this is the result I get: c:\PyDSTool\common.py 63 array, swapaxes, asarray, zeros, ones, \ 64 take, less_equal, putmask --> 65 from numpy import int, int8, int16, int32, int64, float, float32, float64, complex, complex64 66 67 import time ImportError: cannot import name int So, does anyone know what's up with this? Obviously trying to import int, float, or complex from numpy all result in the same import error. I don't know if this is a problem with the version of numpy I have or if it's a problem with PyDSTool. I used the Enthought install, so my numpy is version 0.9.9.2706. Any help is appreciated. thanks, trevis ________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Illinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu May 3 16:05:06 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 3 May 2007 14:05:06 -0600 Subject: [SciPy-user] trying with numpy when trying to install PyDSTool? In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EBB@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EBB@mrlnt6.mrl.uiuc.edu> Message-ID: On 5/3/07, Trevis Crane wrote: > problem with PyDSTool. I used the Enthought install, so my numpy is version > 0.9.9.2706. That's too old, you need a more current numpy. cheers, f From t_crane at mrl.uiuc.edu Thu May 3 16:08:22 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Thu, 3 May 2007 15:08:22 -0500 Subject: [SciPy-user] trying with numpy when trying to install PyDSTool? Message-ID: <9EADC1E53F9C70479BF6559370369114134415@mrlnt6.mrl.uiuc.edu> huh. OK thanks.
> -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Fernando Perez > Sent: Thursday, May 03, 2007 3:05 PM > To: SciPy Users List > Subject: Re: [SciPy-user] trying with numpy when trying to install PyDSTool? > > On 5/3/07, Trevis Crane wrote: > > > problem PyDSTool. I used the Enthought install, so my numpy is version > > 0.9.9.2706. > > That's too old, you need a more current numpy. > > cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From dominique.orban at gmail.com Thu May 3 17:32:15 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Thu, 3 May 2007 17:32:15 -0400 Subject: [SciPy-user] flapack.so: undefined symbol: cblas_strsm Message-ID: <8793ae6e0705031432p4d2d7270md29ae565d4302345@mail.gmail.com> Hello, I compiled SciPy 0.5.2 against Lapack and Atlas 3.6.0 following the directions on the SciPy website. I had previously installed NumPy 1.0.2 which passed all the tests. In SciPy, I get the following error message: >>> import scipy.linalg Traceback (most recent call last): File "", line 1, in ? File "/home/orban/local/Python/lib/python/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/home/orban/local/Python/lib/python/scipy/linalg/basic.py", line 17, in ? from lapack import get_lapack_funcs File "/home/orban/local/Python/lib/python/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: /home/orban/local/Python/lib/python/scipy/linalg/flapack.so: undefined symbol: cblas_strsm I interpreted this message as saying that flapack.so wasn't (correctly) compiled against libcblas.so. The libcblas.so library is however present in the same directory as libatlas, libf77blas and others, and is coming from Atlas. Any help would be welcome as long web searches didn't help me much. 
I paste below the result of 'python setup.py config' in the SciPy source directory. For some reason, liblapack appears twice in each compiling command. Thanks in advance. Dominique -------------Result of python setup.y config mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'f77blas', 'atlas'] library_dirs = ['/home/orban/lib'] language = c Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/home/orban/lib -llapack -lf77blas -latlas -o _configtest ATLAS version 3.6.0 built by orban on Thu Feb 1 16:14:41 EST 2007: UNAME : Linux p1121.gerad.ca 2.6.9-22.0.1.ELsmp #1 SMP Thu Oct 27 13:14:25 CDT 2005 i686 i686 i386 GNU/Linux INSTFLG : MMDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ 
-DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'f77blas', 'atlas'] library_dirs = ['/home/orban/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] ATLAS version 3.6.0 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /home/orban/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'atlas'] library_dirs = ['/home/orban/lib'] language = f77 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/home/orban/lib -llapack -llapack -lf77blas -latlas -o _configtest ATLAS version 3.6.0 built by orban on Thu Feb 1 16:14:41 EST 2007: UNAME : Linux p1121.gerad.ca 2.6.9-22.0.1.ELsmp #1 SMP Thu Oct 27 13:14:25 CDT 2005 i686 i686 i386 GNU/Linux INSTFLG : MMDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc, version gcc 
(GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'atlas'] library_dirs = ['/home/orban/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] ATLAS version 3.6.0 ATLAS version 3.6.0 non-existing path in 'Lib/linsolve': 'tests' umfpack_info: libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /home/orban/local/Python/lib/python/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE Warning: Subpackage 'Lib' configuration returned as 'scipy' non-existing path in 'Lib/maxentropy': 'doc' running config From fperez.net at gmail.com Thu May 3 19:01:54 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 3 May 2007 17:01:54 -0600 Subject: [SciPy-user] Distributed Array Library? In-Reply-To: <6ce0ac130704270847k59bbd522tad40102209e677d0@mail.gmail.com> References: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> <6ce0ac130704270847k59bbd522tad40102209e677d0@mail.gmail.com> Message-ID: Hi all, On 4/27/07, Brian Granger wrote: > 3. Global arrays > > Robert Harrison at ORNL has python bindings to this. They probably > need updating, and I am not sure if/where they can be downloaded. > This could be very nice. It also might make sense to do a simple > ctypes wrapper for the global array library. I would be interested in > this. Well, here it is, straight from the horse's mouths. Many thanks to Manoj et al for the info/code. 
I personally, while interested, am way too swamped already to handle this particular topic. But it would be a great project for someone to inherit and collaborate with the PNL team on, providing as good integration with numpy as feasible. As Brian mentioned, if the facilities of IPython can contribute to the communications parts with the distributed backend, we'll be happy to participate, obviously. Cheers, f ---------- Forwarded message ---------- From: Krishnan, Manojkumar Date: May 3, 2007 4:40 PM Subject: RE: Global arrays To: "Nieplocha, Jarek" , Fernando Perez Cc: "Harrison, Robert J." Fernando, Here is some information about pyGA. Please let me know if you need more information regarding source code. As Robert/Jarek mentioned, this is an ancient code. If there is enough interest, we can update/support it for the latest Global Arrays release GA 4.0.4/4.0.5. http://www.emsl.pnl.gov/docs/global/pyGA/ Thanks, -Manoj:) From stefan at sun.ac.za Fri May 4 06:11:03 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 4 May 2007 12:11:03 +0200 Subject: [SciPy-user] ndimage crash on Rocks 4.0 In-Reply-To: <1F83138B-CD88-4240-B603-2D2D477A4591@cortechs.net> References: <3F34907A-0112-4831-82C8-C3A81F6E4060@cortechs.net> <1F83138B-CD88-4240-B603-2D2D477A4591@cortechs.net> Message-ID: <20070504101103.GD23778@mentat.za.net> Hi Gennan On Thu, May 03, 2007 at 09:37:32AM -0700, Gennan Chen wrote: > anyone found out what went wrong there?? Which version of gcc are you using? Please also send me a gdb traceback if you can. I'd like to see if this is the same problem Nils reported. May be related to some over-zealous optimization.
Cheers Stéfan From issa at aims.ac.za Fri May 4 07:47:45 2007 From: issa at aims.ac.za (Issa Karambal) Date: Fri, 04 May 2007 13:47:45 +0200 Subject: [SciPy-user] spline wavelet decomposition Message-ID: <463B1D61.2050504@aims.ac.za> Hi, is there anyone who knows how to simulate 'spline wavelet decomposition and reconstruction' for a given signal 'f'. regards, issa From stefan at sun.ac.za Fri May 4 09:14:52 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 4 May 2007 15:14:52 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <463B1D61.2050504@aims.ac.za> References: <463B1D61.2050504@aims.ac.za> Message-ID: <20070504131452.GF23778@mentat.za.net> Hi Issa On Fri, May 04, 2007 at 01:47:45PM +0200, Issa Karambal wrote: > is there anyone who knows how to simulate 'spline wavelet decomposition > and reconstruction' for a given signal 'f'. Take a look at http://wavelets.scipy.org which provides different wavelet families, including Haar, Daubechies, Symlets, Coiflets and more. 'demo/dwt_signal_decomposition.py' shows how a signal is decomposed. Regards Stéfan From fred.jen at web.de Fri May 4 12:04:41 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Fri, 04 May 2007 18:04:41 +0200 Subject: [SciPy-user] TimeSeries Message-ID: <1178294681.5148.10.camel@muli> Thank you for all these hints. I am so sorry, it was my own stupidity. The problem was the fill_missing_dates(raw_series). This created points on the x-axis. But there is another small problem. import datetime from numpy import * import timeseries as TS from timeseries import plotlib as TPL rows=[(u'2007-04-23', u'7348.01'),(u'2007-04-16', u'7229.06'), (u'2007-04-10', u'7102.47') ] nrows=array(rows) datum=TS.Date('W-MON', string=nrows[0][0]) dates=TS.date_array(dlist=nrows[:,0], freq='Business') series=TS.time_series(nrows[:,1].astype(float), dates) print series print series.dates The dates are inverted, but the data are not.
From Laurent.Perrinet at incm.cnrs-mrs.fr Fri May 4 12:26:59 2007 From: Laurent.Perrinet at incm.cnrs-mrs.fr (Laurent Perrinet) Date: Fri, 4 May 2007 18:26:59 +0200 Subject: [SciPy-user] loking for lookfor Message-ID: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> Dear list, hope this is not off-topic. I'm sometimes lost in the richness of scipy possibilities and I am looking for a function that could search in the docstrings for particular keywords... (i'm sure that as many other users, we're googling that). there is an equivalent in matlab called lookfor which searches in the whole path all functions containing the given keyword. does a similar function exist for scipy? and more generally (i)python? like a magic: %lookfor convol* would return all functions etc... containing "convol" in its docstring thanks, Laurent ps: only reference I found was a missing entry in a matlab/pylab comparison: http://37mm.no/mpy/matlab-numpy.html Laurent Perrinet ------------ http://incm.cnrs-mrs.fr/LaurentPerrinet/ContactInformation From pgmdevlist at gmail.com Fri May 4 12:50:15 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 4 May 2007 12:50:15 -0400 Subject: [SciPy-user] TimeSeries In-Reply-To: <1178294681.5148.10.camel@muli> References: <1178294681.5148.10.camel@muli> Message-ID: <200705041250.16184.pgmdevlist@gmail.com> On Friday 04 May 2007 12:04:41 Fred Jendrzejewski wrote: > The dates are inverted but the datas not. Fred, that's a genuine bug. We sort the dates, but don't keep track of the initial order. I'm working on that, and will let you know when the SVN will be updated. Thanks again for the feedback! 
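[Editorial note: Laurent's wished-for lookfor can be approximated in a few lines of plain Python — scan a namespace and report every public name whose docstring mentions the keyword. A hedged sketch: the name `lookfor` and its exact behaviour are illustrative, not a scipy API of the time.]

```python
def lookfor(keyword, module):
    """Return public names in `module` whose docstring mentions `keyword`."""
    keyword = keyword.lower()
    hits = []
    for name in dir(module):
        if name.startswith('_'):
            continue
        try:
            obj = getattr(module, name)
        except AttributeError:
            continue  # some names are lazily resolved or deprecated
        if keyword in (getattr(obj, '__doc__', None) or '').lower():
            hits.append(name)
    return hits

import numpy
# Names in the numpy namespace whose docstrings mention "convol":
print(lookfor('convol', numpy))
```

This only searches one flat namespace; recursing into subpackages is the harder part discussed in the replies below.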
From elcorto at gmx.net Fri May 4 13:07:32 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 04 May 2007 19:07:32 +0200 Subject: [SciPy-user] loking for lookfor In-Reply-To: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> References: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> Message-ID: <463B6854.2090705@gmx.net> Laurent Perrinet wrote: > Dear list, > > hope this is not off-topic. I'm sometimes lost in the richness of > scipy possibilities and I am looking for a function that could search > in the docstrings for particular keywords... (i'm sure that as many > other users, we're googling that). there is an equivalent in matlab > called lookfor which searches in the whole path all functions > containing the given keyword. > > does a similar function exist for scipy? and more generally (i)python? > > like a magic: > %lookfor convol* > > would return all functions etc... containing "convol" in its docstring > I don't know if scipy has such a function to search in docstrings. But although not for the docstring, the cool ipython magic function %psearch lets you search a certain namespace for function names (e.g. all functions related to fft): In [4]: scipy.*fft*? scipy.fft scipy.fft2 scipy.fftfreq scipy.fftn scipy.fftpack scipy.fftshift scipy.ifft scipy.ifft2 scipy.ifftn scipy.ifftshift See %psearch? for details and options. If you want to search the docstrings itself the only method that comes into my mind is to simply grep for it (if you are on a *nix system, e.g. grep -ir fft /path/to/scipy-installation/). But this also gives you matches of function calls, comments etc. and is likely to be rather useless. -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From steve at shrogers.com Fri May 4 13:13:55 2007 From: steve at shrogers.com (Steven H. 
Rogers) Date: Fri, 4 May 2007 11:13:55 -0600 (MDT) Subject: [SciPy-user] looking for lookfor In-Reply-To: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> References: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> Message-ID: <58474.192.55.4.36.1178298835.squirrel@mail2.webfaction.com> On Fri, May 4, 2007 10:26, Laurent Perrinet wrote: > Dear list, > > hope this is not off-topic. I'm sometimes lost in the richness of > scipy possibilities and I am looking for a function that could search > in the docstrings for particular keywords... (i'm sure that as many > other users, we're googling that). there is an equivalent in matlab > called lookfor which searches in the whole path all functions > containing the given keyword. > > does a similar function exist for scipy? and more generally (i)python? > We started looking into this at the IPython1 Sprint last month. We don't yet have anything to show other than a wiki page. http://ipython.scipy.org/moin/Developer_Zone/SearchDocs Ideas and other help are welcome. # Steve From fperez.net at gmail.com Fri May 4 13:20:51 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 4 May 2007 11:20:51 -0600 Subject: [SciPy-user] loking for lookfor In-Reply-To: <463B6854.2090705@gmx.net> References: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> <463B6854.2090705@gmx.net> Message-ID: On 5/4/07, Steve Schmerler wrote: > Laurent Perrinet wrote: > > Dear list, > > > > hope this is not off-topic. I'm sometimes lost in the richness of > > scipy possibilities and I am looking for a function that could search > > in the docstrings for particular keywords... (i'm sure that as many > > other users, we're googling that). there is an equivalent in matlab > > called lookfor which searches in the whole path all functions > > containing the given keyword. > > > > does a similar function exist for scipy? and more generally (i)python? > > > > like a magic: > > %lookfor convol* > > > > would return all functions etc... 
containing "convol" in its docstring > > > > I don't know if scipy has such a function to search in docstrings. But > although not for the docstring, the cool ipython magic function %psearch > lets you search a certain namespace for function names (e.g. all > functions related to fft): In addition to this, we really need a better integrated help search system. At last weekend's ipython sprint, some Boulder developers started to look in this direction: http://ipython.scipy.org/moin/Developer_Zone/SearchDocs This is sorely needed, so I'm hoping they'll continue to be interested in developing this (I'm too swamped with other things to work on this for now). In the meantime, you can try to use pydoc -k or pydoc -g if you want a little GUI. It's slow as molasses because it does a real-time search of sys.path instead of having any kind of database-backed fast index. But it kinda works... Cheers, f From peridot.faceted at gmail.com Fri May 4 13:39:25 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 4 May 2007 13:39:25 -0400 Subject: [SciPy-user] loking for lookfor In-Reply-To: <463B6854.2090705@gmx.net> References: <987F5DF9-F350-4D89-BC43-6F2CC0BECA37@incm.cnrs-mrs.fr> <463B6854.2090705@gmx.net> Message-ID: On 04/05/07, Steve Schmerler wrote: > I don't know if scipy has such a function to search in docstrings. But > although not for the docstring, the cool ipython magic function %psearch > lets you search a certain namespace for function names (e.g. all > functions related to fft): It should be perfectly possible to write a script that, given a couple of top-level packages, or just everything currently imported, grovels through all the docstrings. It would have to import not-yet-imported subpackages - that is, if you have imported scipy, it would have to be smart enough to import and then look through all subpackages of scipy (which ought to be listed in __all__, after all) recursively. 
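[Editorial note: the recursive grovel Anne describes can be sketched with the standard library's pkgutil — walk a package's submodules, import each one guardedly, and grep the module docstrings. Demonstrated here on the stdlib json package rather than scipy, and written for modern Python; purely an illustration, not scipy code.]

```python
import importlib
import pkgutil

def grep_docstrings(package, keyword):
    """Return names of `package` and its submodules whose docstrings mention keyword."""
    keyword = keyword.lower()
    modules = [package]
    for _, name, _ in pkgutil.walk_packages(package.__path__,
                                            package.__name__ + '.'):
        try:
            modules.append(importlib.import_module(name))
        except Exception:
            # Anne's caveat: not everything that looks like a module
            # is safe (or even possible) to import.
            continue
    return [m.__name__ for m in modules
            if keyword in (m.__doc__ or '').lower()]

import json
print(grep_docstrings(json, 'decoder'))
```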
Fundamentally, this is rather difficult for python because modules can be strewn all over the filesystem, but not everything that looks like it might be a module is safe to import - for example, there may be testing code in a directory that also contains modules. But for at least the standard library and imported packages it should be doable. Yes, I know, "show me some code". Anne From pgmdevlist at gmail.com Fri May 4 14:02:22 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 4 May 2007 14:02:22 -0400 Subject: [SciPy-user] TimeSeries In-Reply-To: <1178294681.5148.10.camel@muli> References: <1178294681.5148.10.camel@muli> Message-ID: <200705041402.24413.pgmdevlist@gmail.com> On Friday 04 May 2007 12:04:41 Fred Jendrzejewski wrote: > The dates are inverted but the datas not. Fred, that should be fixed on the SVN. Would you mind giving another try? Thanks again for your inputs! From bnuttall at uky.edu Fri May 4 17:32:40 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Fri, 04 May 2007 17:32:40 -0400 Subject: [SciPy-user] ARIMA In-Reply-To: <200705041402.24413.pgmdevlist@gmail.com> References: <1178294681.5148.10.camel@muli> <200705041402.24413.pgmdevlist@gmail.com> Message-ID: <6.0.1.1.2.20070504172946.021d4978@pop.uky.edu> Folks, Speaking of both Timeseries and Lookfor, is there an autoregressive integrated moving average implementation in SciPy/Python for fitting and projecting time series data? Thanks. Brandon Brandon C.
Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From pgmdevlist at gmail.com Fri May 4 17:37:06 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 4 May 2007 17:37:06 -0400 Subject: [SciPy-user] ARIMA In-Reply-To: <6.0.1.1.2.20070504172946.021d4978@pop.uky.edu> References: <1178294681.5148.10.camel@muli> <200705041402.24413.pgmdevlist@gmail.com> <6.0.1.1.2.20070504172946.021d4978@pop.uky.edu> Message-ID: <200705041737.06933.pgmdevlist@gmail.com> On Friday 04 May 2007 17:32:40 Brandon Nuttall wrote: > Folks, > > Speaking of both Timeseries and Lookfor, is there an autoregressive > integrated moving average implementation in SciPy/Python for fitting and > projecting time series data? Not yet. That's in the plans for our TimeSeries package. We already have basic fitting such as moving windows and centered windows, but not yet a full ARIMA model. For fitting, you may wanna give a look to pyloess on the SVN server, that implements the loess/lowess/STL methods. From cookedm at physics.mcmaster.ca Fri May 4 17:52:00 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 4 May 2007 17:52:00 -0400 Subject: [SciPy-user] best way of finding a function In-Reply-To: <4632281B.5070705@gmail.com> References: <9EADC1E53F9C70479BF6559370369114142EB3@mrlnt6.mrl.uiuc.edu> <4632281B.5070705@gmail.com> Message-ID: <20070504215200.GA30385@arbutus.physics.mcmaster.ca> On Fri, Apr 27, 2007 at 11:43:07AM -0500, Robert Kern wrote: > Trevis Crane wrote: > > Hi, > > > > One thing that makes it hard to get into using SciPy and Python is the > > decentralized nature of the documentation. My problem is that I want to > > use the arc-hyperbolic sine function. I have no idea where this is in > > order to import it (and in all likelihood I could import it from any > > number of sources). 
I can't seem to find it looking through the > documentation on scipy.org. This is a specific example, but in general > what's the best way of finding where some given function is in order to > import it? > > Well, it is numpy.arcsinh(). Googling for "numpy arcsinh" brings up numerous > hits including this: > > http://www.scipy.org/Numpy_Example_List > > The problem is that you had to know it was called "arcsinh" rather than > searching for it in a (nonunique) expanded form "arc-hyperbolic sine." Or, for that matter, that it's arcsinh and not asinh. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Fri May 4 18:07:54 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 4 May 2007 18:07:54 -0400 Subject: [SciPy-user] ode/programming question In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB6@mrlnt6.mrl.uiuc.edu> Message-ID: <20070504220754.GA30413@arbutus.physics.mcmaster.ca> On Mon, Apr 30, 2007 at 01:48:04PM -0400, Anne Archibald wrote: > On 30/04/07, Trevis Crane wrote: > > > When using one of the ODE solvers, you can pass it a list of arguments. > > These arguments are used in the function that defines the system of linear > > equations that you're solving. What if I want to modify an argument every > > iteration and re-pass this modified argument to the helper function in the > > next iteration? In Matlab (what I'm most familiar with), this is easy to do, > > using "nested" functions, because they share the same scope/namespace as the > > function they're nested in. This is not the case in Python, however, so I'm > > curious what the best way of doing this would be. I assume I define a > > global variable, but I'm wondering if there's another, perhaps better, way > > of doing it.
> > I would use a class that implements the __callable__ method if I > wanted to store some state. But be warned that the ODE solver is going > to assume that your function always returns the same value for the > same inputs, and it's unlikely to call the function in t order. > > Incidentally, I've never understood why python has all those "args" > arguments. It makes the code and signature for functions with function > arguments complicated and confusing, and it's not usually enough: for > example, if you want to use the minimization functions to maximize, > you either have to write a function apurpose or you have to feed it a > lambda; in either case you can easily curry the function arbitrarily. > So I never ever use the "args" arguments. Why are they there? Probably because it's old code :) I agree, it's simple enough to say lambda x: f(x, other_args) if you need to pass more arguments. Or, if you're averse to lambda, you can define a local function, or use functools.partial. I'd rip them out if it wasn't for backward compatibility. scipy.optimize and scipy.integrate could both use a hefty dose of "modernisation", like using function wrappers, not using print, consistent return semantics for different but related routines (the fmin_* routines with full_output=1, especially), etc. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ryanlists at gmail.com Fri May 4 21:46:58 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 4 May 2007 20:46:58 -0500 Subject: [SciPy-user] Ubuntu Feisty Source Install Message-ID: I have been dragged into windows against my will for some time and have almost made my peace with windows+cygwin as a reasonable operating system. But I really want to be one of the cool guys again (i.e. Linux users). I am trying to get scipy installed from source and have some issues.
Numpy 1.0.2 is installed and passes all the tests. I think I have all the prereq's and have followed the instructions in INSTALL.txt. Here is the content of my site.cfg: [atlas] library_dirs = /usr/lib/atlas/3dnow/ atlas_libs = lapack, blas And here is the output of scipy.test() - apparently something is quite wrong with loadmat: In [3]: scipy.test() Warning: FAILURE importing tests for /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) Found 4 tests for scipy.io.array_import Found 1 tests for scipy.cluster.vq Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 98 tests for scipy.stats.stats Found 53 tests for scipy.linalg.decomp Found 3 tests for scipy.integrate.quadrature Found 96 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 6 tests for scipy.interpolate.fitpack Found 6 tests for scipy.interpolate Found 70 tests for scipy.stats.distributions Found 12 tests for scipy.io.mmio Found 10 tests for scipy.stats.morestats Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) Found 4 tests for scipy.optimize.zeros Found 28 tests for scipy.io.mio Found 4 tests for scipy.fftpack.helper Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 1 tests for scipy.integrate Found 14 tests for scipy.linalg.blas Found 4 tests for 
scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ........caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Took 13 points. ............Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ........................Use minimum degree ordering on A'+A. ...................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. .................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ......................................................................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. 
....................................EEEE.E.EE.E.E.EE.E.E.E.EEEEE........................................................................................................................................................................................................................................................................................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 1.09253706841e-08 ...Result may be inaccurate, approximate err = 1.38604496001e-10 ..............................................................Residual: 1.05006950608e-07 ................... 
====================================================================== ERROR: check loadmat case 3dmatrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case cell ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case double ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = 
loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case emptycell ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables 
mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case minus ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case multi ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 
'strip' ====================================================================== ERROR: check loadmat case object ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case onechar ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case sparsecomplex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case string ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict 
= loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case stringarray ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case struct ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables 
mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case structarr ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case structnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no 
attribute 'strip'

======================================================================
ERROR: check loadmat case unicode
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case
    matdict = loadmat(file_name)
  File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat
    matfile_dict = MR.get_variables()
  File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables
    mdict = self.file_header()
  File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header
    hdict['__header__'] = hdr['description'].strip(' \t\n\000')
AttributeError: 'numpy.ndarray' object has no attribute 'strip'

----------------------------------------------------------------------
Ran 1596 tests in 2.628s

FAILED (errors=19)
Out[3]:

Thanks for any help,

Ryan

From ryanlists at gmail.com Fri May 4 22:03:18 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 4 May 2007 21:03:18 -0500
Subject: [SciPy-user] Ubuntu Feisty Source Install
In-Reply-To: 
References: 
Message-ID: 

I found another thread with loadmat issues that said to build from
svn.  Here is the scipy.test() result after rebuilding numpy and scipy
from svn.  Only 1 failure and a few warnings.
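The 19 loadmat errors above all trace to the same line in mio5.py: indexing the `description` field of the header record gives back a 0-d numpy array rather than a string, and `ndarray` has no `.strip()` method. A minimal sketch of the failure mode and one possible workaround — note the `'S116'` dtype and the header bytes below are illustrative placeholders, not scipy's real MAT-file header layout:

```python
import numpy as np

# Indexing a field of a 0-d structured array returns a 0-d ndarray view,
# not a string scalar -- hence the AttributeError in the tracebacks.
hdr = np.zeros((), dtype=[('description', 'S116')])  # hypothetical layout
hdr['description'] = b'MATLAB 5.0 MAT-file \t\n\x00'

field = hdr['description']  # 0-d ndarray; field.strip(...) would raise
                            # AttributeError: no attribute 'strip'

# One possible workaround: extract the scalar first, then strip it.
description = field.item().strip(b' \t\n\x00')
print(description)
```

Here `.item()` pulls the underlying bytes object out of the 0-d array, and bytes objects do support `.strip()`.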
Any thoughts?:

In [3]: scipy.test()
Found 7 tests for scipy.cluster.vq
Found 18 tests for scipy.fftpack.basic
Found 4 tests for scipy.fftpack.helper
Found 20 tests for scipy.fftpack.pseudo_diffs
Found 1 tests for scipy.integrate
Found 10 tests for scipy.integrate.quadpack
Found 3 tests for scipy.integrate.quadrature
Found 6 tests for scipy.interpolate
Found 6 tests for scipy.interpolate.fitpack
Found 4 tests for scipy.io.array_import
Found 28 tests for scipy.io.mio
Found 12 tests for scipy.io.mmio
Found 5 tests for scipy.io.npfile
Found 4 tests for scipy.io.recaster
Found 16 tests for scipy.lib.blas
Found 128 tests for scipy.lib.blas.fblas
Found 42 tests for scipy.lib.lapack
Found 41 tests for scipy.linalg.basic
Found 14 tests for scipy.linalg.blas
Found 53 tests for scipy.linalg.decomp
Found 128 tests for scipy.linalg.fblas
Found 6 tests for scipy.linalg.iterative
Found 4 tests for scipy.linalg.lapack
Found 7 tests for scipy.linalg.matfuncs
Warning: FAILURE importing tests for /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in )
Warning: FAILURE importing tests for /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in )
Found 2 tests for scipy.maxentropy
Found 398 tests for scipy.ndimage
Found 5 tests for scipy.odr
Found 6 tests for scipy.optimize
Found 1 tests for scipy.optimize.cobyla
Found 4 tests for scipy.optimize.zeros
Found 4 tests for scipy.signal.signaltools
Found 105 tests for scipy.sparse
Found 358 tests for scipy.special.basic
Found 98 tests for scipy.stats
Found 70 tests for scipy.stats.distributions
Found 10 tests for scipy.stats.morestats
Found 0 tests for __main__
......== Error while importing _vq, not testing C imp of vq ==
...........................................Residual: 1.05006950608e-07
...........Took 13 points.
...............
Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...........................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..||A.x - b|| = 0.718813025587 ||A.x - b|| = 0.107998540679 ||A.x - b|| = 0.0146543385345 ||A.x - b|| = 0.00169266301427 ||A.x - b|| = 0.000165008853207 .||A.x - b|| = 0.176628639181 ||A.x - b|| = 0.00206540732124 ||A.x - b|| = 0.000148607249368 .||A.x - b|| = 0.355451784869 ||A.x - b|| = 0.0514561779702 ||A.x - b|| = 0.00595662848756 ||A.x - b|| = 0.00118367288655 ||A.x - b|| = 0.000142372468837 ||A.x - b|| = 1.73694086643e-05 .||A.x - b|| = 0.199383828481 ||A.x - b|| = 0.00618598930234 ||A.x - b|| = 0.000123249125388 ..||A.x - b|| = 0.480871354795 ||A.x - b|| = 0.0576541097581 ||A.x - b|| = 0.00740254888902 ||A.x - b|| = 0.00120870585967 ||A.x - b|| = 0.000119225563203 ......Result may be inaccurate, approximate err = 1.09253706841e-08 ...Result may be inaccurate, approximate err = 1.38604496001e-10 
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................Use minimum degree ordering on A'+A. ...............sorting CSC indices in (7, 0) 0.0 (2, 0) 1.0 (1, 0) 2.0 (5, 1) 3.0 (4, 1) 4.0 out (1, 0) 2.0 (2, 0) 1.0 (7, 0) 0.0 (4, 1) 4.0 (5, 1) 3.0 ............Use minimum degree ordering on A'+A. ...............sorting CSR indices in (0, 7) 0.0 (0, 2) 1.0 (0, 1) 2.0 (1, 5) 3.0 (1, 4) 4.0 out (0, 1) 2.0 (0, 2) 1.0 (0, 7) 0.0 (1, 4) 4.0 (1, 5) 3.0 ............Use minimum degree ordering on A'+A. ......................Use minimum degree ordering on A'+A. ............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................FTies preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... 
======================================================================
FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/stats/tests/test_morestats.py", line 51, in check_normal
    assert_array_less(A, crit[-2:])
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 235, in assert_array_less
    header='Arrays are not less-ordered')
  File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not less-ordered

(mismatch 100.0%)
 x: array(1.0864174631371526)
 y: array([ 0.858, 1.021])

----------------------------------------------------------------------
Ran 1628 tests in 3.440s

FAILED (failures=1)
Out[3]:

On 5/4/07, Ryan Krauss wrote:
> I have been dragged into windows against my will for some time and
> have almost made my peace with windows+cygwin as a reasonable
> operating system.  But I really want to be one of the cool guys again
> (i.e. Linux users).  I am trying to get scipy installed from source
> and have some issues.  Numpy 1.0.2 is installed and passes all the
> tests.  I think I have all the prereq's and have followed the
> instructions in INSTALL.txt.
> > Here is the content of my site.cfg: > [atlas] > library_dirs = /usr/lib/atlas/3dnow/ > atlas_libs = lapack, blas > > > And here is the output of scipy.test() - apparently something is quite > wrong with loadmat: > > In [3]: scipy.test() > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from > '...y/linsolve/umfpack/umfpack.pyc'> > /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in > ) > Found 4 tests for scipy.io.array_import > Found 1 tests for scipy.cluster.vq > Found 128 tests for scipy.linalg.fblas > Found 397 tests for scipy.ndimage > Found 10 tests for scipy.integrate.quadpack > Found 98 tests for scipy.stats.stats > Found 53 tests for scipy.linalg.decomp > Found 3 tests for scipy.integrate.quadrature > Found 96 tests for scipy.sparse.sparse > Found 20 tests for scipy.fftpack.pseudo_diffs > Found 6 tests for scipy.optimize.optimize > Found 6 tests for scipy.interpolate.fitpack > Found 6 tests for scipy.interpolate > Found 70 tests for scipy.stats.distributions > Found 12 tests for scipy.io.mmio > Found 10 tests for scipy.stats.morestats > Found 4 tests for scipy.linalg.lapack > Found 18 tests for scipy.fftpack.basic > Found 4 tests for scipy.io.recaster > Warning: FAILURE importing tests for from '.../linsolve/umfpack/__init__.pyc'> > /usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in > ) > Found 4 tests for scipy.optimize.zeros > Found 28 tests for scipy.io.mio > Found 4 tests for scipy.fftpack.helper > Found 41 tests for scipy.linalg.basic > Found 2 tests for scipy.maxentropy.maxentropy > Found 358 tests for scipy.special.basic > Found 128 tests for scipy.lib.blas.fblas > Found 7 tests for scipy.linalg.matfuncs > Found 42 tests for scipy.lib.lapack > Found 1 tests for scipy.optimize.cobyla > Found 16 tests for scipy.lib.blas > Found 1 tests 
for scipy.integrate > Found 14 tests for scipy.linalg.blas > Found 4 tests for scipy.signal.signaltools > Found 0 tests for __main__ > > Don't worry about a warning regarding the number of bytes read. > Warning: 1000000 bytes requested, 20 bytes read. > ........caxpy:n=4 > ..caxpy:n=3 > ....ccopy:n=4 > ..ccopy:n=3 > .............cscal:n=4 > ....cswap:n=4 > ..cswap:n=3 > .....daxpy:n=4 > ..daxpy:n=3 > ....dcopy:n=4 > ..dcopy:n=3 > .............dscal:n=4 > ....dswap:n=4 > ..dswap:n=3 > .....saxpy:n=4 > ..saxpy:n=3 > ....scopy:n=4 > ..scopy:n=3 > .............sscal:n=4 > ....sswap:n=4 > ..sswap:n=3 > .....zaxpy:n=4 > ..zaxpy:n=3 > ....zcopy:n=4 > ..zcopy:n=3 > .............zscal:n=4 > ....zswap:n=4 > ..zswap:n=3 > ................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Took > 13 points. > ............Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > ........................Use minimum degree ordering on A'+A. > ...................Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > .................Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. 
> ......................................................................................................................................Ties > preclude use of exact statistic. > ..Ties preclude use of exact statistic. > ....................................EEEE.E.EE.E.E.EE.E.E.E.EEEEE........................................................................................................................................................................................................................................................................................................................................................................................................................caxpy:n=4 > ..caxpy:n=3 > ....ccopy:n=4 > ..ccopy:n=3 > .............cscal:n=4 > ....cswap:n=4 > ..cswap:n=3 > .....daxpy:n=4 > ..daxpy:n=3 > ....dcopy:n=4 > ..dcopy:n=3 > .............dscal:n=4 > ....dswap:n=4 > ..dswap:n=3 > .....saxpy:n=4 > ..saxpy:n=3 > ....scopy:n=4 > ..scopy:n=3 > .............sscal:n=4 > ....sswap:n=4 > ..sswap:n=3 > .....zaxpy:n=4 > ..zaxpy:n=3 > ....zcopy:n=4 > ..zcopy:n=3 > .............zscal:n=4 > ....zswap:n=4 > ..zswap:n=3 > ...Result may be inaccurate, approximate err = 1.09253706841e-08 > ...Result may be inaccurate, approximate err = 1.38604496001e-10 > ..............................................................Residual: > 1.05006950608e-07 > ................... 
> ====================================================================== > ERROR: check loadmat case 3dmatrix > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case cell > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case cellnest > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case complex > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case double > ---------------------------------------------------------------------- > Traceback (most recent call last): > File 
"/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case emptycell > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case matrix > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File 
"/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case minus > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case multi > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File 
"/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case object > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case onechar > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File 
"/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case sparse > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case sparsecomplex > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File 
"/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case string > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case stringarray > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' 
\t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case struct > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case structarr > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > 
====================================================================== > ERROR: check loadmat case structnest > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ====================================================================== > ERROR: check loadmat case unicode > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 75, in _check_case > matdict = loadmat(file_name) > File "/usr/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat > matfile_dict = MR.get_variables() > File "/usr/lib/python2.5/site-packages/scipy/io/miobase.py", line > 269, in get_variables > mdict = self.file_header() > File "/usr/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, > in file_header > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > ---------------------------------------------------------------------- > Ran 1596 tests in 2.628s > > FAILED (errors=19) > Out[3]: > > > > 
Thanks for any help, > > Ryan > From robert.kern at gmail.com Fri May 4 23:16:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 04 May 2007 22:16:17 -0500 Subject: [SciPy-user] Ubuntu Feisty Source Install In-Reply-To: References: Message-ID: <463BF701.4050405@gmail.com> Ryan Krauss wrote: > I found another thread with loadmat issues that said to build from > svn. Here is the scipy.test() result after rebuilding numpy and scipy > from svn. Only 1 failure and a few warnings. Any thoughts?: > ====================================================================== > FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.5/site-packages/scipy/stats/tests/test_morestats.py", > line 51, in check_normal > assert_array_less(A, crit[-2:]) > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 235, in assert_array_less > header='Arrays are not less-ordered') > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not less-ordered > > (mismatch 100.0%) > x: array(1.0864174631371526) > y: array([ 0.858, 1.021]) > > ---------------------------------------------------------------------- > Ran 1628 tests in 3.440s Does it fail repeatedly? Unfortunately, it is only a statistical test of a statistical test, if you follow my drift. It is almost guaranteed to fail eventually if you run it enough times. Anyways, I've modified it to run deterministically now, with a chosen PRNG seed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ryanlists at gmail.com Sat May 5 00:19:58 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 4 May 2007 23:19:58 -0500 Subject: [SciPy-user] Ubuntu Feisty Source Install In-Reply-To: <463BF701.4050405@gmail.com> References: <463BF701.4050405@gmail.com> Message-ID: I could not get a repeatable failure. After rebuilding from svn again I now get: ............Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... ---------------------------------------------------------------------- Ran 1628 tests in 3.670s OK Thanks, Ryan On 5/4/07, Robert Kern wrote: > Ryan Krauss wrote: > > I found another thread with loadmat issues that said to build from > > svn. Here is the scipy.test() result after rebuilding numpy and scipy > > from svn. Only 1 failure and a few warnings. Any thoughts?: > > > ====================================================================== > > FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/usr/lib/python2.5/site-packages/scipy/stats/tests/test_morestats.py", > > line 51, in check_normal > > assert_array_less(A, crit[-2:]) > > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > > 235, in assert_array_less > > header='Arrays are not less-ordered') > > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > > 215, in assert_array_compare > > assert cond, msg > > AssertionError: > > Arrays are not less-ordered > > > > (mismatch 100.0%) > > x: array(1.0864174631371526) > > y: array([ 0.858, 1.021]) > > > > ---------------------------------------------------------------------- > > Ran 1628 tests in 3.440s > > Does it fail repeatedly? Unfortunately, it is only a statistical test of a > statistical test, if you follow my drift. It is almost guaranteed to fail > eventually if you run it enough times. 
> > Anyways, I've modified it to run deterministically now, with a chosen PRNG seed. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From dominique.orban at gmail.com Fri May 4 23:59:35 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 04 May 2007 23:59:35 -0400 Subject: [SciPy-user] flapack.so: undefined symbol: cblas_strsm In-Reply-To: <8793ae6e0705031432p4d2d7270md29ae565d4302345@mail.gmail.com> References: <8793ae6e0705031432p4d2d7270md29ae565d4302345@mail.gmail.com> Message-ID: <463C0127.2080500@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Issue resolved by creating a site.cfg file containing [atlas] library_dirs = /path/to/lib atlas_libs = lapack, f77blas, cblas, atlas Sorry for the noise. Dominique Orban wrote: > Hello, > > I compiled SciPy 0.5.2 against Lapack and Atlas 3.6.0 following the > directions on the SciPy website. I had previously installed NumPy > 1.0.2 which passed all the tests. In SciPy, I get the following error > message: > >>>> import scipy.linalg > Traceback (most recent call last): > File "", line 1, in ? > File "/home/orban/local/Python/lib/python/scipy/linalg/__init__.py", > line 8, in ? > from basic import * > File "/home/orban/local/Python/lib/python/scipy/linalg/basic.py", > line 17, in ? > from lapack import get_lapack_funcs > File "/home/orban/local/Python/lib/python/scipy/linalg/lapack.py", > line 17, in ? > from scipy.linalg import flapack > ImportError: /home/orban/local/Python/lib/python/scipy/linalg/flapack.so: > undefined symbol: cblas_strsm > > I interpreted this message as saying that flapack.so wasn't > (correctly) compiled against libcblas.so. 
The libcblas.so library is > however present in the same directory as libatlas, libf77blas and > others, and is coming from Atlas. > > Any help would be welcome as long web searches didn't help me much. I > paste below the result of 'python setup.py config' in the SciPy source > directory. For some reason, liblapack appears twice in each compiling > command. > > Thanks in advance. > Dominique > > > -------------Result of python setup.y config > > mkl_info: > libraries mkl,vml,guide not found in /usr/local/lib > libraries mkl,vml,guide not found in /usr/lib > NOT AVAILABLE > > fftw3_info: > libraries fftw3 not found in /usr/local/lib > libraries fftw3 not found in /usr/lib > fftw3 not found > NOT AVAILABLE > > fftw2_info: > libraries rfftw,fftw not found in /usr/local/lib > libraries rfftw,fftw not found in /usr/lib > fftw2 not found > NOT AVAILABLE > > dfftw_info: > libraries drfftw,dfftw not found in /usr/local/lib > libraries drfftw,dfftw not found in /usr/lib > dfftw not found > NOT AVAILABLE > > djbfft_info: > NOT AVAILABLE > > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in /usr/local/lib > libraries mkl,vml,guide not found in /usr/lib > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > Setting PTATLAS=ATLAS > Setting PTATLAS=ATLAS > FOUND: > libraries = ['lapack', 'f77blas', 'atlas'] > library_dirs = ['/home/orban/lib'] > language = c > > Could not locate executable f95 > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using config > compiling '_configtest.c': > > /* This file is generated from numpy_distutils/system_info.py */ > void ATL_buildinfo(void); > int main(void) { > ATL_buildinfo(); > return 0; > } > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -pipe -m32 > -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC > > compile options: '-c' > gcc: _configtest.c > gcc -pthread _configtest.o -L/home/orban/lib -llapack -lf77blas > -latlas -o _configtest > ATLAS version 3.6.0 
built by orban on Thu Feb 1 16:14:41 EST 2007: > UNAME : Linux p1121.gerad.ca 2.6.9-22.0.1.ELsmp #1 SMP Thu Oct > 27 13:14:25 CDT 2005 i686 i686 i386 GNU/Linux > INSTFLG : > MMDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm > ARCHDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc > F2CDEFS : -DAdd__ -DStringSunStyle > CACHEEDGE: 1048576 > F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.4.4 20050721 > (Red Hat 3.4.4-2) > F77FLAGS : -fomit-frame-pointer -O > CC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) > CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops > MCC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) > MCCFLAGS : -fomit-frame-pointer -O > success! > removing: _configtest.c _configtest.o _configtest > FOUND: > libraries = ['lapack', 'f77blas', 'atlas'] > library_dirs = ['/home/orban/lib'] > language = c > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > > ATLAS version 3.6.0 > lapack_opt_info: > lapack_mkl_info: > NOT AVAILABLE > > atlas_threads_info: > Setting PTATLAS=ATLAS > libraries lapack_atlas not found in /home/orban/lib > numpy.distutils.system_info.atlas_threads_info > Setting PTATLAS=ATLAS > Setting PTATLAS=ATLAS > FOUND: > libraries = ['lapack', 'lapack', 'f77blas', 'atlas'] > library_dirs = ['/home/orban/lib'] > language = f77 > > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using config > compiling '_configtest.c': > > /* This file is generated from numpy_distutils/system_info.py */ > void ATL_buildinfo(void); > int main(void) { > ATL_buildinfo(); > return 0; > } > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -pipe -m32 > -march=i386 -mtune=pentium4 -D_GNU_SOURCE -fPIC -fPIC > > compile options: '-c' > gcc: _configtest.c > gcc -pthread _configtest.o -L/home/orban/lib -llapack -llapack > -lf77blas -latlas -o _configtest > ATLAS version 3.6.0 built by orban on Thu Feb 1 16:14:41 EST 2007: > UNAME : Linux 
p1121.gerad.ca 2.6.9-22.0.1.ELsmp #1 SMP Thu Oct > 27 13:14:25 CDT 2005 i686 i686 i386 GNU/Linux > INSTFLG : > MMDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/gemm > ARCHDEF : /home/orban/local/LinearAlgebra/ATLAS/CONFIG/ARCHS/P4SSE2/gcc/misc > F2CDEFS : -DAdd__ -DStringSunStyle > CACHEEDGE: 1048576 > F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.4.4 20050721 > (Red Hat 3.4.4-2) > F77FLAGS : -fomit-frame-pointer -O > CC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) > CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops > MCC : /usr/bin/gcc, version gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2) > MCCFLAGS : -fomit-frame-pointer -O > success! > removing: _configtest.c _configtest.o _configtest > FOUND: > libraries = ['lapack', 'lapack', 'f77blas', 'atlas'] > library_dirs = ['/home/orban/lib'] > language = f77 > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > > ATLAS version 3.6.0 > ATLAS version 3.6.0 > non-existing path in 'Lib/linsolve': 'tests' > umfpack_info: > libraries umfpack not found in /usr/local/lib > libraries umfpack not found in /usr/lib > /home/orban/local/Python/lib/python/numpy/distutils/system_info.py:401: > UserWarning: > UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. 
> warnings.warn(self.notfounderror.__doc__) > NOT AVAILABLE > > Warning: Subpackage 'Lib' configuration returned as 'scipy' > non-existing path in 'Lib/maxentropy': 'doc' > running config > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGPAEn2vhdTNgbn8wRAtVZAJ95P1nSrPDaeXvPps0u0K04Y3BaMQCgid1D vrRa/fQCGUouKL3zoahxy54= =PLko -----END PGP SIGNATURE----- From nwagner at iam.uni-stuttgart.de Sat May 5 06:04:51 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 05 May 2007 12:04:51 +0200 Subject: [SciPy-user] License issue Message-ID: Hi, Is the Common Public License (CPL) compatible with scipy ? Nils https://projects.coin-or.org/Ipopt/wiki From stefan at sun.ac.za Sat May 5 08:39:43 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 5 May 2007 14:39:43 +0200 Subject: [SciPy-user] License issue In-Reply-To: References: Message-ID: <20070505123943.GM23778@mentat.za.net> On Sat, May 05, 2007 at 12:04:51PM +0200, Nils Wagner wrote: > Is the Common Public License (CPL) compatible with scipy ? No, it's not a BSD compatible license. Also see http://www.gnu.org/philosophy/license-list.html """ Common Public License Version 1.0 This is a free software license but it is incompatible with the GPL. The Common Public License is incompatible with the GPL because it has various specific requirements that are not in the GPL. For example, it requires certain patent licenses be given that the GPL does not require. (We don't think those patent license requirements are inherently a bad idea, but nonetheless they are incompatible with the GNU GPL.) 
""" Regards St?fan From drfredkin at ucsd.edu Sat May 5 16:14:14 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Sat, 5 May 2007 20:14:14 +0000 (UTC) Subject: [SciPy-user] Question about scipy.stats References: <200705011458.14151.kuantiko@escomposlinux.org> Message-ID: Jes?s Carrete Monta?a wrote: > I get the following error when trying to evaluate the probability > density function for a Poisson distribution: > > from scipy import * > > d2=stats.poisson(loc=1.,scale=2.) > x=arange(1,10,1) > d2.pdf(x) What is a probability density function (pdf) for a Poisson distribution? and what does scale mean? Both notions are fine for a Gaussian distribution, which is continuous, but the Poisson distribution is discrete and contains only one parameter. -- From robert.kern at gmail.com Sat May 5 16:43:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 May 2007 15:43:14 -0500 Subject: [SciPy-user] Question about scipy.stats In-Reply-To: References: <200705011458.14151.kuantiko@escomposlinux.org> Message-ID: <463CEC62.9040207@gmail.com> Donald Fredkin wrote: > Jes?s Carrete Monta?a wrote: > >> I get the following error when trying to evaluate the probability >> density function for a Poisson distribution: >> >> from scipy import * >> >> d2=stats.poisson(loc=1.,scale=2.) >> x=arange(1,10,1) >> d2.pdf(x) > > What is a probability density function (pdf) for a Poisson > distribution? Nothing. As a discrete distribution, it has a Probability Mass Function, .pmf(), method. However, there is a bug in the way we deal with "frozen" probability distributions. We don't handle the .pmf() method of discrete distributions properly. I'll try to fix it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kuantiko at escomposlinux.org Sat May 5 16:55:47 2007 From: kuantiko at escomposlinux.org (Jesús Carrete Montaña) Date: Sat, 5 May 2007 22:55:47 +0200 Subject: [SciPy-user] Question about scipy.stats In-Reply-To: References: <200705011458.14151.kuantiko@escomposlinux.org> Message-ID: <200705052255.48121.kuantiko@escomposlinux.org> On Saturday, 5 May 2007 22:14, Donald Fredkin wrote: > Jesús Carrete Montaña wrote: > > I get the following error when trying to evaluate the > > probability density function for a Poisson distribution: > > > > from scipy import * > > > > d2=stats.poisson(loc=1.,scale=2.) > > x=arange(1,10,1) > > d2.pdf(x) > > What is a probability density function (pdf) for a Poisson > distribution? and what does scale mean? Both notions are fine for a > Gaussian distribution, which is continuous, but the Poisson > distribution is discrete and contains only one parameter. While your observations are formally correct, the documentation for scipy.stats states that the pdf() method should exist - I suppose it should work as the discrete counterpart of a true PDF, returning the probability of its argument (or raising an exception if it's not an integer). Similarly, although the Poisson distribution is uniparametric, the same documentation says that those two arguments should be accepted by all the constructors (even if they are ignored). I included them only to make the parallelism between my calls to poisson() and norm() more evident.
To avoid confusion, let's take an example of a continuous, biparametric distribution such as scipy.stats.beta: XXXXXXXXXXXXXXXXXXXX--code--XXXXXXXXXXXXXXXXXXX import scipy.stats d=scipy.stats.beta(loc=.2,scale=.1) d.pdf(.3) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/kuantiko/latex/2006-2007/lFNyP/memoria/ /usr/lib/python2.4/site-packages/scipy/stats/distributions.py in pdf(self, x) 104 self.dist = dist 105 def pdf(self,x): --> 106 return self.dist.pdf(x,*self.args,**self.kwds) 107 def cdf(self,x): 108 return self.dist.cdf(x,*self.args,**self.kwds) /usr/lib/python2.4/site-packages/scipy/stats/distributions.py in pdf(self, x, *args, **kwds) 466 goodargs = argsreduce(cond, *((x,)+args+(scale,))) 467 scale, goodargs = goodargs[-1], goodargs[:-1] --> 468 place(output,cond,self._pdf(*goodargs) / scale) 469 return output 470 TypeError: _pdf() takes exactly 4 arguments (2 given) XXXXXXXXXXXXXXXXXXXX--code--XXXXXXXXXXXXXXXXXXX The exact same code, but changing 'beta' to 'norm', works as expected. I suppose I am doing something wrong (given the lack of answers to my original message, probably something very elementary) but I still haven't been able to find the solution. I'm using the functions from Rpy in the meantime: great software, but IMHO terribly slow. I would prefer to use only scipy and would be very grateful if someone pointed me to the answer. Thanks, Jesús.
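The two points at issue in this thread can be checked directly: a discrete distribution exposes its probability mass function through the unfrozen interface, and beta needs both of its shape parameters before loc and scale mean anything. A minimal sketch (assuming a scipy version whose scipy.stats follows this API):

```python
import math
from scipy import stats

# Discrete distribution: use the probability mass function, not pdf.
# Unfrozen call: the shape parameter mu is passed on every call.
p = stats.poisson.pmf(0, 1.0)   # P(X = 0) for Poisson(mu=1), i.e. exp(-1)

# Continuous distribution: beta takes two shape parameters (a, b);
# loc and scale only shift and rescale, they do not replace a and b.
b = stats.beta(3, 4)
density = b.pdf(0.2)   # analytically 0.2**2 * 0.8**3 / B(3, 4) = 1.2288
```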
From robert.kern at gmail.com Sat May 5 17:08:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 May 2007 16:08:11 -0500 Subject: [SciPy-user] Question about scipy.stats In-Reply-To: <200705052255.48121.kuantiko@escomposlinux.org> References: <200705011458.14151.kuantiko@escomposlinux.org> <200705052255.48121.kuantiko@escomposlinux.org> Message-ID: <463CF23B.3050908@gmail.com> Jes?s Carrete Monta?a wrote: > El S?bado, 5 de Mayo de 2007 22:14, Donald Fredkin escribi?: >> Jes?s Carrete Monta?a wrote: >>> I get the following error when trying to evaluate the >>> probability density function for a Poisson distribution: >>> >>> from scipy import * >>> >>> d2=stats.poisson(loc=1.,scale=2.) >>> x=arange(1,10,1) >>> d2.pdf(x) >> What is a probability density function (pdf) for a Poisson >> distribution? and what does scale mean? Both notions are fine for a >> Gaussian distribution, which is continuous, but the Poisson >> distribution is discrete and contains only one parameter. > > While your observations are formally correct, the documentation for > scipy.stats states that the pdf() method should exist - I suppose it > should work as the discrete counterpart of a true PDF, returning the > probability of its argument (or raising an exception if it's not an > integer). As I mentioned, it's actually .pmf() for discrete distributions, and that's not there currently because of a bug. You can use the "unfrozen" versions if you can bear passing the arguments defining the parameters of the distribution every time: stats.poisson.pmf(10, 5.0) > Similarly, although the Poisson distribution is > uniparametric, the same documentation says that those two arguments > should be accepted by all the constructors (even if they are ignored). > I included them only to make the parallelism between my calls to > poisson() and norm() more evident. Don't use them if they don't make sense for your application. They are *not* ignored. 
Every probability distribution can be generalized like so: X ~ Poisson(mu) Y = scale * X + loc Y ~ stats.poisson(mu, loc=loc, scale=scale) But if you are dealing with that generalization, don't do it. > To avoid confusion, let's take an example of a continous, biparametric > distribution such as scipy.stats.beta: > > XXXXXXXXXXXXXXXXXXXX--code--XXXXXXXXXXXXXXXXXXX > > import scipy.stats > d=scipy.stats.beta(loc=.2,scale=.1) > d.pdf(.3) > > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/kuantiko/latex/2006-2007/lFNyP/memoria/ > > /usr/lib/python2.4/site-packages/scipy/stats/distributions.py in > pdf(self, x) > 104 self.dist = dist > 105 def pdf(self,x): > --> 106 return self.dist.pdf(x,*self.args,**self.kwds) > 107 def cdf(self,x): > 108 return self.dist.cdf(x,*self.args,**self.kwds) > > /usr/lib/python2.4/site-packages/scipy/stats/distributions.py in > pdf(self, x, *args, **kwds) > 466 goodargs = argsreduce(cond, *((x,)+args+(scale,))) > 467 scale, goodargs = goodargs[-1], goodargs[:-1] > --> 468 place(output,cond,self._pdf(*goodargs) / scale) > 469 return output > 470 > > TypeError: _pdf() takes exactly 4 arguments (2 given) Yes, because beta distributions take two shape parameters that you have not provided. In [20]: b = stats.beta(3, 4) In [21]: b.pdf(0.2) Out[21]: array(1.2288000000000006) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From aisaac at american.edu Sat May 5 17:14:05 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 5 May 2007 17:14:05 -0400 Subject: [SciPy-user] License issue In-Reply-To: <20070505123943.GM23778@mentat.za.net> References: <20070505123943.GM23778@mentat.za.net> Message-ID: > On Sat, May 05, 2007 at 12:04:51PM +0200, Nils Wagner wrote: >> Is the Common Public License (CPL) compatible with scipy ? On Sat, 5 May 2007, Stefan van der Walt apparently wrote: > No, it's not a BSD compatible license. Also see In what sense? Do you just mean the copyright clause? I do not like all the verbiage, but in the end, it seems harmless. At the *very* least, there is no reason not to base SciKits on such software, it seems to me. Cheers, Alan Isaac From robert.kern at gmail.com Sat May 5 17:20:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 May 2007 16:20:27 -0500 Subject: [SciPy-user] License issue In-Reply-To: References: <20070505123943.GM23778@mentat.za.net> Message-ID: <463CF51B.9080501@gmail.com> Alan G Isaac wrote: >> On Sat, May 05, 2007 at 12:04:51PM +0200, Nils Wagner wrote: >>> Is the Common Public License (CPL) compatible with scipy ? > > On Sat, 5 May 2007, Stefan van der Walt apparently wrote: >> No, it's not a BSD compatible license. Also see > > In what sense? In these sense that CPL-licensed code will not be included into the main scipy package because of our policies with respect to licenses. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From kuantiko at escomposlinux.org Sat May 5 17:36:56 2007 From: kuantiko at escomposlinux.org (Jesús Carrete Montaña) Date: Sat, 5 May 2007 23:36:56 +0200 Subject: [SciPy-user] Question about scipy.stats In-Reply-To: <463CF23B.3050908@gmail.com> References: <200705011458.14151.kuantiko@escomposlinux.org> <200705052255.48121.kuantiko@escomposlinux.org> <463CF23B.3050908@gmail.com> Message-ID: <200705052336.56906.kuantiko@escomposlinux.org> On Saturday, 5 May 2007 23:08, Robert Kern wrote: [...] > Every probability distribution can be generalized > like so: > > X ~ Poisson(mu) > Y = scale * X + loc > Y ~ stats.poisson(mu, loc=loc, scale=scale) > > But if you are dealing with that generalization, don't do it. [...] > > TypeError: _pdf() takes exactly 4 arguments (2 given) > > Yes, because beta distributions take two shape parameters that you > have not provided. > > In [20]: b = stats.beta(3, 4) > > In [21]: b.pdf(0.2) > Out[21]: array(1.2288000000000006) I wasn't aware of the use of that generalization, and erroneously thought that the two parameters of the distribution were calculated from loc and scale. Thank you for taking the time to explain this to me, Jesús. From aisaac at american.edu Sat May 5 18:06:57 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 5 May 2007 18:06:57 -0400 Subject: [SciPy-user] License issue In-Reply-To: <463CF51B.9080501@gmail.com> References: <20070505123943.GM23778@mentat.za.net> <463CF51B.9080501@gmail.com> Message-ID: >>> On Sat, May 05, 2007 at 12:04:51PM +0200, Nils Wagner >>> wrote: >>>> Is the Common Public License (CPL) compatible with scipy ? >> On Sat, 5 May 2007, Stefan van der Walt apparently wrote: >>> No, it's not a BSD compatible license. Also see > Alan G Isaac wrote: >> In what sense?
On Sat, 05 May 2007, Robert Kern apparently wrote: > In these sense that CPL-licensed code will not be included into the main scipy > package because of our policies with respect to licenses. Well OK, that's just a policy statement. Of course my real question is: what is wrong with this license? (Real question; not an assertion that nothing is.) It seems only to make explicit things that are left implicit in the modified BSD license. And just to be clear, I believe you are not saying that it would be inappropriate for a SciKit. Cheers, Alan Isaac From robert.kern at gmail.com Sat May 5 18:15:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 May 2007 17:15:36 -0500 Subject: [SciPy-user] License issue In-Reply-To: References: <20070505123943.GM23778@mentat.za.net> <463CF51B.9080501@gmail.com> Message-ID: <463D0208.2040301@gmail.com> Alan G Isaac wrote: >>>> On Sat, May 05, 2007 at 12:04:51PM +0200, Nils Wagner >>>> wrote: >>>>> Is the Common Public License (CPL) compatible with scipy ? > >>> On Sat, 5 May 2007, Stefan van der Walt apparently wrote: >>>> No, it's not a BSD compatible license. Also see > >> Alan G Isaac wrote: >>> In what sense? > > On Sat, 05 May 2007, Robert Kern apparently wrote: >> In these sense that CPL-licensed code will not be included into the main scipy >> package because of our policies with respect to licenses. > > Well OK, that's just a policy statement. Yes. > Of course my real question is: what is wrong with this > license? (Real question; not an assertion that nothing is.) In general, nothing. With respect to our policy, it's not compatible with the GPL and I, at least, don't want components of scipy that preclude incorporation of scipy with GPLed programs. > It seems only to make explicit things that are left implicit > in the modified BSD license. > > And just to be clear, I believe you are not saying that it > would be inappropriate for a SciKit. Correct. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sim at klubko.net Sat May 5 21:18:21 2007 From: sim at klubko.net (Petr =?utf-8?q?=C5=A0imon?=) Date: Sun, 6 May 2007 09:18:21 +0800 Subject: [SciPy-user] polyfit problem Message-ID: <200705060918.21724.sim@klubko.net> Hello, I am learning regression and I was trying to do polyfit with degree higher than the length of data, which failed /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py:305: RankWarning: Polyfit may be poorly conditioned warnings.warn(msg, RankWarning) /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py in polyfit(x, y, deg, rcond, full) 307 # scale returned coefficients 308 if scale != 0 : --> 309 c /= vander([scale], order)[0] 310 311 if full : ValueError: invalid return array shape Is there anyway I can go around this? I am not very strong in linear algebra, so a short explanation or pointer what to read would be of a great help. Thanks a lot Petr From hasslerjc at comcast.net Sat May 5 21:37:49 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 05 May 2007 21:37:49 -0400 Subject: [SciPy-user] polyfit problem In-Reply-To: <200705060918.21724.sim@klubko.net> References: <200705060918.21724.sim@klubko.net> Message-ID: <463D316D.9030007@comcast.net> You need _at least_ "degree plus one" data points. Consider a quadratic: y = a*x**2 + b*x + c You would need at least three points to determine the three coefficients. Fewer would give an indeterminate problem. 
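John's rule of thumb is easy to confirm numerically; a short sketch with numpy.polyfit on made-up data (not from this thread):

```python
import numpy as np

# Degree 2 has three coefficients (a, b, c), so three points
# determine the fit exactly: y = a*x**2 + b*x + c.
x = np.array([0.0, 1.0, 2.0])
y = 2.0 * x**2 + 3.0 * x + 1.0   # a=2, b=3, c=1

coeffs = np.polyfit(x, y, 2)     # highest power first
print(coeffs)                    # approximately [2. 3. 1.]

# With fewer than deg + 1 points the system is underdetermined and
# numpy issues the RankWarning seen in the original traceback.
```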
john Petr ?imon wrote: > Hello, > I am learning regression and I was trying to do polyfit with degree higher > than the length of data, which failed > > /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py:305: RankWarning: > Polyfit may be poorly conditioned warnings.warn(msg, RankWarning) > /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py in polyfit(x, y, deg, > rcond, full) > 307 # scale returned coefficients > 308 if scale != 0 : > --> 309 c /= vander([scale], order)[0] > 310 > 311 if full : > > ValueError: invalid return array shape > > Is there anyway I can go around this? I am not very strong in linear algebra, > so a short explanation or pointer what to read would be of a great help. > > Thanks a lot > Petr > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From sim at klubko.net Sat May 5 21:42:16 2007 From: sim at klubko.net (Petr =?windows-1252?q?=8Aimon?=) Date: Sun, 6 May 2007 09:42:16 +0800 Subject: [SciPy-user] polyfit problem In-Reply-To: <463D316D.9030007@comcast.net> References: <200705060918.21724.sim@klubko.net> <463D316D.9030007@comcast.net> Message-ID: <200705060942.17020.sim@klubko.net> On Sunday 06 May 2007 09:37:49 John Hassler wrote: > You need _at least_ "degree plus one" data points. Consider a quadratic: > y = a*x**2 + b*x + c > You would need at least three points to determine the three > coefficients. Fewer would give an indeterminate problem. 
> john > Oh, right, thanks a lot (\me hiding) Petr > Petr ?imon wrote: > > Hello, > > I am learning regression and I was trying to do polyfit with degree > > higher than the length of data, which failed > > > > /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py:305: > > RankWarning: Polyfit may be poorly conditioned warnings.warn(msg, > > RankWarning) /usr/lib/python2.4/site-packages/numpy/lib/polynomial.py in > > polyfit(x, y, deg, rcond, full) > > 307 # scale returned coefficients > > 308 if scale != 0 : > > --> 309 c /= vander([scale], order)[0] > > 310 > > 311 if full : > > > > ValueError: invalid return array shape > > > > Is there anyway I can go around this? I am not very strong in linear > > algebra, so a short explanation or pointer what to read would be of a > > great help. > > > > Thanks a lot > > Petr > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Petr ?imon http://www.klubko.net PhD student, TIGP-CLCLP Academia Sinica http://clclp.ling.sinica.edu.tw "... what the Buddhist call 'right livelyhood', I didn't have that, I didn't have any way of making a living, and to make a living is to be doing something that you love, something that was creative, something that made sense..." Mark Bittner, parrot caretaker, Telegraph Hill From john at curioussymbols.com Sun May 6 01:03:10 2007 From: john at curioussymbols.com (John Pye) Date: Sun, 06 May 2007 15:03:10 +1000 Subject: [SciPy-user] multiple regression: problem with matrix algebra Message-ID: <463D618E.9080201@curioussymbols.com> Hi all I am having a little trouble with using optimize.leastsq for multiple regression. 
I've used an approach based on this tute, which deals only with single regression: http://linuxgazette.net/115/andreasen.html But now in my adaptation to multiple regression, I can get it to work only if I do some dirty loop-based evaluation for my residuals. But if I try to use elegant matrix evaluation, the fit parameters go awry: # this doesn't work: results = optimize.leastsq(residuals_mat, params0, args=(y, x1,x2),full_output=1) # this works: #results = optimize.leastsq(residuals, params0, args=(y, x1,x2),full_output=1) I'm sure it's a silly mistake, but I certainly can't find it. Does anyone have any suggestions? Cheers JP -------------- next part -------------- A non-text attachment was scrubbed... Name: kinfit.py Type: text/x-python Size: 1803 bytes Desc: not available URL: From ckkart at hoc.net Sun May 6 03:05:26 2007 From: ckkart at hoc.net (Christian K.) Date: Sun, 06 May 2007 16:05:26 +0900 Subject: [SciPy-user] multiple regression: problem with matrix algebra In-Reply-To: <463D618E.9080201@curioussymbols.com> References: <463D618E.9080201@curioussymbols.com> Message-ID: John Pye wrote: > Hi all > > I am having a little trouble with using optimize.leastsq for multiple > regression. I've used an approach based on this tute, which deals only > with single regression: > http://linuxgazette.net/115/andreasen.html > > But now in my adaptation to multiple regression, I can get it to work > only if I do some dirty loop-based evaluation for my residuals. But if I > try to use elegant matrix evaluation, the fit parameters go awry: Set the iteration stepsize (epsfcn keyword arg of leastsq) to something higher than the default, which is machine precision (?), e.g. 1e-12. Then both methods work. 
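Christian's suggestion looks like the following sketch (a hypothetical two-regressor linear model and synthetic data stand in here for the scrubbed kinfit.py attachment):

```python
import numpy as np
from scipy import optimize

# Hypothetical model y = p0*x1 + p1*x2 + p2; the residual vector is
# computed in one vectorized expression, as in the "matrix" variant.
def residuals(params, y, x1, x2):
    p0, p1, p2 = params
    return y - (p0 * x1 + p1 * x2 + p2)

x1 = np.linspace(0.0, 1.0, 20)
x2 = x1 ** 2
y = 1.5 * x1 - 0.5 * x2 + 0.25

# epsfcn sets the step used for the finite-difference Jacobian;
# raising it from the machine-precision default can stabilize the fit.
params, ier = optimize.leastsq(residuals, [1.0, 1.0, 1.0],
                               args=(y, x1, x2), epsfcn=1e-12)
```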
Christian From john at curioussymbols.com Sun May 6 04:38:23 2007 From: john at curioussymbols.com (John Pye) Date: Sun, 06 May 2007 18:38:23 +1000 Subject: [SciPy-user] multiple regression: problem with matrix algebra In-Reply-To: References: <463D618E.9080201@curioussymbols.com> Message-ID: <463D93FF.7020006@curioussymbols.com> Christian K. wrote: > John Pye wrote: > >> Hi all >> >> I am having a little trouble with using optimize.leastsq for multiple >> regression. I've used an approach based on this tute, which deals only >> with single regression: >> http://linuxgazette.net/115/andreasen.html >> >> But now in my adaptation to multiple regression, I can get it to work >> only if I do some dirty loop-based evaluation for my residuals. But if I >> try to use elegant matrix evaluation, the fit parameters go awry: >> > > Set the iteration stepsize (epsfcn keyword arg of leastsq) to something > higher than the default, which is machine precision (?), e.g. 1e-12. > Then both methods work. > > Christian 'Tis true! Thanks very much. So is it fair to say that the default value of this parameter should perhaps be changed? Cheers JP From s.mientki at ru.nl Sun May 6 05:07:17 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 11:07:17 +0200 Subject: [SciPy-user] threshold function ? Message-ID: <463D9AC5.8030008@ru.nl> hello, is there a simple threshold function, that makes all elements of an array zero if the element is smaller than a given threshold ? I couldn't find it in Numpy, Signal, but maybe I overlooked it. Of course it can be done by a simple formula array_data = array_data * ( array_data > threshold ) but it's more convenient to use a function.
thanks, Stef Mientki From nicolas.pettiaux at ael.be Sun May 6 08:38:11 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Sun, 6 May 2007 14:38:11 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes Message-ID: I want to give arguments to the faculty (applied sciences at the Free University of Brussels) where I am involved in the teaching of numerical analysis, today with matlab and octave, to consider switching to python with scipy and numpy. I am looking for examples and similar actions by others, and I am also looking for answers to questions and remarks I may get. One that I have already received is that in scipy / matplotlib (and python) the indexing of matrices and arrays is different than in matlab / octave / scilab : in python with numpy for example, the first element is 0 while in matlab it is 1, as shown in http://www.scipy.org/NumPy_for_Matlab_Users#head-5a9301ba4c6f5a12d5eb06e478b9fb8bbdc25084 as for example matlab : numpy a(2,:) : a[1] or a[1,:] : entire second row of a For me, the octave / matlab notation, with the first element having the number 1, is rather self-explanatory. I count with one as the first number. What would you answer ? Is it possible to change the behavior of numpy and if not, how can I argue for the python way of counting ? Thanks, Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be From ryanlists at gmail.com Sun May 6 09:12:36 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 6 May 2007 08:12:36 -0500 Subject: [SciPy-user] threshold function ? In-Reply-To: <463D9AC5.8030008@ru.nl> References: <463D9AC5.8030008@ru.nl> Message-ID: What about this: a[a<0.4]=0.0 You can define a function if you want to: def zero_below_thresh(arrayin, thresh): arrayin[arrayin < thresh] = 0.0 On 5/6/07, Stef Mientki wrote: > hello, > > is there a simple threshold function, > that makes all elements of an array zero if the element is smaller than > a given threshold ? > > I couldn't find it in Numpy, Signal, > but maybe I overlooked it.
> > Of course it can be done by a simple formula > array_data = array_data * ( array_data > threshold ) > but it's more convenient to use a function. > > thanks, > Stef Mientki > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Sun May 6 09:18:06 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 6 May 2007 08:18:06 -0500 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: Message-ID: Numpy follows the convention of Python, which follows the convention of C. So, to change the Numpy convention, you would have to convince Python to change, which would most likely require convincing C to change. It isn't going to happen. It is just a convention people have to get used to. The goal of Scipy/Numpy isn't to make a software package that acts exactly like Matlab. The goal is to make a better software package for scientific computing. There are things that Matlab does that aren't worthy of our imitation. Ryan On 5/6/07, Nicolas Pettiaux wrote: > I want to give arguments to the faculty (applied sciences at the Free > University of Brussels) where I am involved in the teaching of > numerical analysis, today with matlab and octave, to consider > switching to python with scipy and numpy. > > I am looking for examples and similar actions by others, and I am also > looking for answers to questions and remarks I may get.
> > One that I have already received is that in scipy / matplotlib (and > python) the indexing of matrices and arrays is different than in matlab > / octave / scilab : in python with numpy for example, the first element > is 0 while in matlab it is 1, as shown in > > http://www.scipy.org/NumPy_for_Matlab_Users#head-5a9301ba4c6f5a12d5eb06e478b9fb8bbdc25084 > > as for example > > matlab : numpy > a(2,:) : a[1] or a[1,:] : entire second row of a > > For me, the octave / matlab notation, with the first element having the > number 1, is rather self-explanatory. I count with one as the first > number. > > What would you answer ? Is it possible to change the behavior of numpy > and if not, how can I argue for the python way of counting ? > > Thanks, > > Nicolas > -- > Nicolas Pettiaux - email: nicolas.pettiaux at ael.be > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nicolas.pettiaux at ael.be Sun May 6 09:59:29 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Sun, 6 May 2007 15:59:29 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: Message-ID: 2007/5/6, Ryan Krauss : Thanks Ryan for your rapid comment. > Numpy follows the convention of Python which follows the convention of > C. So, to change the Numpy convention, you would have to convince > Python to change which would most likely require convincing C to > change. It isn't going to happen. It is just a convention people > have to get used to. Ok for me. This helps me in saying "with python, the students learn a real programming language, like C, that also has capabilities very similar to matlab / octave when loaded with numpy / scipy". What about C++ or Java ? Do they use the same convention as C ? >The goal of Scipy/Numpy isn't to make a software
The goal is to make a better > software package for scientific computing. There are things that > Matlab does that aren't worthy of our imitation. I agree. Thanks, Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be From s.mientki at ru.nl Sun May 6 10:08:40 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 16:08:40 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: Message-ID: <463DE168.4080306@ru.nl> Nicolas Pettiaux wrote: > I want to give arguments to the faculty (applied sciences at the Free > University of Brussels) where I am involved in the teaching of > numerical analysis, today with matlab and octave, to consider > switching to python with scipy and numpy. > > I am looking for examples and similar actions by others, and I am also > looking for answers to questions and remarks I may get. > I'm in a similar situation (Radboud university, Nijmegen), and even worse some people want to change from MatLab to LabView, which, as far as I can see, is very bad for educational purposes (and even more expensive). To make them like SciPy more than MatLab, I'm writing a few shell programs, that will make it more attractive. I've made a first version of a general data acquisition / realtime analysis tool (which btw also supports MatLab ;-) http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_data_acquisition.html you can see a movie here http://oase.uci.ru.nl/~mientki/download/medilab_tot.htm And now I'm working on a Signal WorkBench, a first demo (including comparison with MatLab) is described here http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_swb_filters.html There's someone else writing another Signal WorkBench, but I can't remember his name, nor his program right now (BVE ?)
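The indexing correspondence being discussed in this thread can be sketched with a hypothetical 3x3 array:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# MATLAB's a(2,:) -- the entire second row -- is a[1] (or a[1, :])
# in numpy, because indexing starts at 0.
row = a[1]

# Negative indices count from the end: the third element from the end.
first = a[0, -3]

# Slices exclude the end index: rows 0 and 1, but not row 2.
top = a[0:2]
```

So `row` is `[4 5 6]`, `first` is `1`, and `top` has shape `(2, 3)`.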
> One that I have already received is that in scipy / matplotlib (and > python) the indexing of matrices and arrays is different than in matlab > / octave / scilab : in python with numpy for example, the first element > is 0 while in matlab it is 1, as shown in > > http://www.scipy.org/NumPy_for_Matlab_Users#head-5a9301ba4c6f5a12d5eb06e478b9fb8bbdc25084 > > as for example > > matlab : numpy > a(2,:) : a[1] or a[1,:] : entire second row of a > > For me, the octave / matlab notation, with the first element having the > number 1, is rather self-explanatory. I count with one as the first > number. > > What would you answer ? Yes, indeed, starting with zero is inhuman ;-) but, - working with a zero-based index makes circular arrays easier to handle - it's very easy to get the third element from the end: a[-3] so not very strong arguments ;-) And even harder to get used to: the end index is not included, unlike in MatLab. So I think you should use other arguments, like - it's easy to vectorize any function (to gain speed) - all code is available and can be changed (In MatLab, I often need to run test series, to see what a function exactly did, because the description was inadequate, and MathWorks didn't want to tell me what's inside, btw the standard answer from MathWorks is "only available if you manage to get a job at MathWorks" ;-) - SciPy is free, and therefore every student can take it home and study it when it fits him/her - Python can be used to perform any computer task (general programming), while MatLab is mainly aimed at math - graphical user interfaces with feedback are much easier to create than in MatLab (ever designed an interactive plot in MatLab ?)
- Python can be used fully Object Oriented (MatLab not), which makes extending code (in the future) extremely easy - in MatLab every function must be in a separate file (do you also have directories with a few hundred very small files ;-) - Memory footprint MatLab : 110 MB, Scipy 10 MB - Embedding and encapsulation of MatLab is very limited, Python is made for it - although more scientific problems are discussed in MatLab newsgroups, the answers often only consist of the right direction. Python newsgroups (all that I know) give you support (and often more than 1 solution), until your problem is solved (Thanks newsgroups !!) - although the Python documentation is sometimes fragmented, the available sources on the web and the available books are much better, just compare the ratio of contents and number of pages of "Using MatLab, version 5" and a standard Python manual like "Learning Python, O'Reilly" - New versions of MatLab are full of bugs and often don't support newer platforms, I've the feeling Python doesn't suffer from that. - Function calls with many (named) parameters are much easier in Python than in MatLab So aren't there any disadvantages ? Yes, there are a few: - Python (Scipy) is fully unknown (so you must be a Don Quichotte) - the MatLab workspace is missing in Python success with your attempt, cheers, Stef Mientki > Is it possible to change the behavior of numpy > and if not, how can I argue for the python way of counting ? > > Thanks, > > Nicolas > From s.mientki at ru.nl Sun May 6 10:10:59 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 16:10:59 +0200 Subject: [SciPy-user] threshold function ? In-Reply-To: References: <463D9AC5.8030008@ru.nl> Message-ID: <463DE1F3.90706@ru.nl> Ryan Krauss wrote: > What about this: > a[a<0.4]=0.0 > thanks Ryan, that seems simple enough. cheers.
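For reference, the two threshold approaches from this thread side by side, on hypothetical data (the 0.4 cutoff is only an example):

```python
import numpy as np

a = np.array([0.1, 0.5, 0.3, 0.9])
thresh = 0.4

# Multiplying by a boolean mask, as in the original question
# (returns a new array):
b = a * (a > thresh)

# Boolean-index assignment, as suggested by Ryan (modifies in place):
c = a.copy()
c[c < thresh] = 0.0

# Both give [0.0, 0.5, 0.0, 0.9] here; they differ only for elements
# exactly equal to the threshold.
```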
Stef Mientki From nicolas.pettiaux at ael.be Sun May 6 10:56:33 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Sun, 6 May 2007 16:56:33 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <463DE168.4080306@ru.nl> References: <463DE168.4080306@ru.nl> Message-ID: 2007/5/6, Stef Mientki : Thank you very much Stef > I'm in a similar situation (Radboud university, Nijmegen), > and even worse some people want to change from MatLab to LabView, > which, as far as I can see, is very bad for educational purposes > (and even more expensive). Indeed. Good luck. Here, I first introduced the idea to go from matlab to octave, keeping much of the syntax and programs but going in a direction where the students would not have to copy the software illegally to run it on their own computers, but then someone came up with the idea that Mathematica should be taught instead. So I want to answer that if we consider switching to something else than octave / matlab, python with its extensions is the way to go. > To make them like SciPy more than MatLab, > I'm writing a few shell programs, > that will make it more attractive. this is a very good idea. I would very much like to contribute (when I have the time and opportunity, that is not now unfortunately). > I've made a first version of a general data acquisition / realtime > analysis tool (which btw also supports MatLab ;-) > http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_data_acquisition.html > This is very nice! Is it a portable application that runs on Windows, Mac but also GNU/linux ? (as python does) > you can see a movie here > http://oase.uci.ru.nl/~mientki/download/medilab_tot.htm > Thanks.
I'll do so as soon as I have the time > And now I'm working on a Signal WorkBench, > a first demo (including comparison with MatLab) is described here > http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_swb_filters.html > a signal toolbox for scipy / numpy is a good idea > Yes, indeed, starting with zero is inhuman ;-) > but, > - working with a zero-based index makes circular arrays easier to handle > - it's very easy to get the third element from the end: a[-3] > so not very strong arguments ;-) > And even harder to get used to: the end index is not included, unlike in > MatLab. > > So I think you should use other arguments, like yes, I will, but the more and the better I have, the better I'll feel ! > - it's easy to vectorize any function (to gain speed) is it different / better / simpler than in Matlab ? How do you proceed ? > - all code is available and can be changed (In MatLab, I often need to > run test series, to see what a function exactly did, because the > description was inadequate, and MathWorks didn't want to tell me what's > inside, btw the standard answer from MathWorks is "only available if you > manage to get a job at MathWorks" ;-) indeed. (as FLOSS) > - SciPy is free, and therefore every student can take it home and study > it when it fits him/her indeed. (as FLOSS) > - Python can be used to perform any computer task (general programming), > while MatLab is mainly aimed at math indeed. (as FLOSS) > - graphical user interfaces with feedback are much easier to create than > in MatLab (ever designed an interactive plot in MatLab ?) very good point. I did not try and will try asap. Have you got a simple example accessible ?
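On the vectorization question above ('How do you proceed ?'), a minimal numpy sketch with a hypothetical scalar function:

```python
import numpy as np

# numpy's arithmetic operators already work elementwise, so a function
# written in terms of them applies to whole arrays without a loop:
def f(x):
    return x * x + 1.0

x = np.linspace(0.0, 1.0, 5)
y = f(x)  # one call, no explicit loop

# For functions not expressible as array operations, np.vectorize
# gives an elementwise wrapper (a convenience, not a speed gain):
g = np.vectorize(lambda v: v * v + 1.0)
z = g(x)
```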
> - Python can be used fully Object Oriented (MatLab not), which makes > extending code (in the future) extremely easy yes > - in MatLab every function must be in a separate file (do you also have > directories with a few hundred very small files ;-) very good point > - Memory footprint MatLab : 110 MB, Scipy 10 MB good point but I suppose they don't care. > - Embedding and encapsulation of MatLab is very limited, Python is made > for it very good point > - although more scientific problems are discussed in MatLab newsgroups, > the answers often only consist of the right direction. Python > newsgroups (all that I know) give you support (and often more than 1 > solution), until your problem is solved (Thanks newsgroups !!) very good point > - although the Python documentation is sometimes fragmented, the > available sources on the web and the available books are much better, > just compare the ratio of contents and number of pages of "Using MatLab, > version 5" and a standard Python manual like "Learning Python, O'Reilly" very good point > - New versions of MatLab are full of bugs and often don't support > newer platforms, I've the feeling Python doesn't suffer from that. very good point > - Function calls with many (named) parameters are much easier in Python > than in MatLab > > So aren't there any disadvantages ? > Yes, there are a few: > - Python (Scipy) is fully unknown (so you must be a Don Quichotte) one point is that a teacher of computing science is considering python to teach the basics of programming instead of the C++ that is currently taught in the first year ... and that is a real pain for the students, who hardly remember anything when I see them in the second year where I teach.
> - the MatLab workspace is missing in Python yes, but I suppose that can be replaced with well-configured editors > success with your attempt, thanks Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be From john at curioussymbols.com Sun May 6 11:35:34 2007 From: john at curioussymbols.com (John Pye) Date: Mon, 07 May 2007 01:35:34 +1000 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: <463DE168.4080306@ru.nl> Message-ID: <463DF5C6.6050204@curioussymbols.com> So, what will 'they' say in return? * Where's the manual? * What is the one single file that I have to install? * Why do I have to pay to get the numpy manual? I use python/matplotlib and to a lesser extent scipy very frequently but I have to say that I don't think the user experience is quite smooth enough for foisting on those poor undergrads just yet. I think that the fragmentation of the documentation is just too much, as well as the fact that a really important part of the documentation is non-free. I would like to see a one-file installer for Windows (AFAIK this doesn't exist) and a big searchable .CHM file with everything you need. You have to think about all those people who struggle through their first programming course... I'm curious as to whether SciLab was an option you've considered? Cheers JP From s.mientki at ru.nl Sun May 6 13:03:44 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 19:03:44 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <463DF5C6.6050204@curioussymbols.com> References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com> Message-ID: <463E0A70.9060803@ru.nl> John Pye wrote: > So, what will 'they' say in return? > > * Where's the manual? > yes that's a real disadvantage, here's a start http://www.limsi.fr/Individu/pointal/python/pqrc/ > * What is the one single file that I have to install?
> well, you have: - Enthought edition (rather buggy) - Enthought enstaller (for windows users a bit clumsy) - portable Scipy (written by myself ;-) http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/python/portable_scipy.html > * Why do I have to pay to get the numpy manual? > > that's a pity, and not for the money, I wouldn't mind paying for it (or better said, let my boss pay for it), but it's a real pain to order from non-standard distributors at our university !! > I use python/matplotlib and to a lesser extent scipy very frequently but > I have to say that I don't think the user experience is quite smooth > enough for foisting on those poor undergrads just yet. Ever tried a good IDE (e.g. PyScripter) with auto-completion/suggestion in combination with Python? Much better than the MatLab editor. > I think that the > fragmentation of the documentation is just too much, as well as the fact > that a really important part of the documentation is non-free. I would > like to see a one-file installer for Windows (AFAIK this doesn't exist) > and a big searchable .CHM file with everything you need. Yes, that's indeed what we need. I'm building my own right now, by redirecting the help from the IDE to a multipage html editor, but of course that will only contain what I'm interested in ;-)
> Scilab indeed has a few advantages (if you only look at the standard MatLab functionality), - better documentation - includes Simulink + PowerSim Disadvantages of SciLab: - it's only suited for Math, not for general programming - embedding and encapsulation is not supported - an even smaller user group at least the English written one cheers, Stef Mientki > Cheers > JP > > > Nicolas Pettiaux wrote: > >> 2007/5/6, Stef Mientki : >> >> Thank you very much Stef >> >> >> >>> I'm in a similar situation (Radboud university, Nijmegen), >>> and even worse some people want to change from MatLab to LabView, >>> which, for as far as I can see, is very bad for educational purposes >>> (and even more expensive). >>> >>> >> Indeed. Good luck. >> >> Here, I first introduced the idea to go from matlab to octave, keeping >> much of the syntax and programs but going in a direction where the >> students would not have to copy illegally the software to run it on >> their own computers, but then someone came with the idea that >> Mathematica should be tought instead. So I want to answer that if we >> consider to switch to something else than octave / matlab, python with >> its extension is the way to go. >> >> >> >>> To make them like SciPy more than MatLab, >>> I'm writing a few shell programs, >>> that will make it more attractive. >>> >>> >> this is a vey good idea. I would very much like ot contribute (when I >> have the time and opportinity, that is not now unfortunately). >> >> >> >>> I've made a first version of a general data acquisition / realtime >>> analysis tool (which btw also supports MatLab ;-) >>> http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_data_acquisition.html >>> >>> >>> >> this is very nice? Is it a portable application that runs on Windows , >> Mac but also GNU/linux ? (as python does) >> >> >> >>> you can see a movie here >>> http://oase.uci.ru.nl/~mientki/download/medilab_tot.htm >>> >>> >>> >> Thanks. 
I'll do as soon as I have the time to do so >> >> >> >>> And now I'm working on a Signal WorkBench, >>> a first demo (including comparison with MatLab) is described here >>> http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_swb_filters.html >>> >>> >>> >> a signal toolbox for scipy / numpy is a good idea >> >> >> >> >>> Yes, indeed, starting with zero is inhuman ;-) >>> but, >>> - working with zero-based index, makes cicular arrays more easy to handle >>> - it's very easy to get the third element from the end a[-3] >>> so not very strong arguments ;-) >>> And even harder to get used, is the end-index is not included, like in >>> MatLab. >>> >>> So I think you should use other arguments, like >>> >>> >> yes, I will, but the more and best I have, the better I'll feel ! >> >> >> >>> - it's easy to vectorize any function (to gain speed) >>> >>> >> is it different / better / simpler than in Matlab ? How do you proceed ? >> >> >> >>> - all code is avaliable and can be changed (In MatLab, I often need to >>> run test series, to see what a function exactly did, because the >>> description was inadaequat, and MathWorks didn't want to tell me what's >>> inside, btw the standard answer from MathWorks "only available if you >>> manage to get a job at MathWorks" ;-) >>> >>> >> indeed. (as FLOSS) >> >> >> >>> - SciPy is free, and therefor every student can take it home and study >>> it when it fits him/her >>> >>> >> indeed. (as FLOSS) >> >> >> >>> - Python can be used to perform any computer task (general programming), >>> while MatLab is mainly pointed at math >>> >>> >> indeed. (as FLOSS) >> >> >> >>> - graphical user interface with feedback are much easier to create than >>> in MatLab (ever designed an interactive plot in MatLab ?) >>> >>> >> very good point . I did not try and will try asap. Have you got a >> simple example accessible ? 
>> >> >> >>> - Python can be used fully Object Oriented (MatLab not), which makes >>> extending code (in the future) extremly easy >>> >>> >> yes >> >> >>> - in MatLab every function must in a seperate file (do you have also >>> directories with a few hundred very small files ;-) >>> >>> >> very good point >> >> >> >>> - Memory footprint MatLab : 110 MB, Scipy 10 MB >>> >>> >> good point but I suppose they don't care. >> >> >> >>> - Embedding and encapsultion of MatLab is very limites, Python is made >>> for it >>> >>> >> very good point >> >> >> >>> - although more scientific problems are discussed in MatLab newsgroups, >>> the answers often only consists of the right direction. In Python >>> newsgroups (all that I know), give you support (and often more than 1 >>> solution), until your problem is solved (Thanks newsgroups !!) >>> >>> >> very good point >> >> >> >>> - although the Python documentation is sometimes fragmented, the >>> available sources on the web and the available books are much better, >>> just compare the ratio of contents and number of pages of "Using MatLab, >>> version 5" and a standard Pythion manula like "Learning Python, O'Reilly" >>> >>> >> very good point >> >> >> >>> - New versions of MatLab are full of bugs and often doesn't support >>> newer platforms, I've the feeling Python doesn't suffer from that. >>> >>> >> very good point >> >> >> >>> - Function calls with many (named) parameters is much easier in Python >>> than in MatLab >>> >>> So aren't there any disadvantages ?, >>> Yes there are a few: >>> >>> >> >> >>> - Python (Scipy) is fully unknown (so you must be a Don QuiChotte) >>> >>> >> one point is that a teacher of computing science is considering python >> to teach the basis of programming instead pf the C++ that is >> currently tought in the first year ... and that is a real pain for the >> students, who do hardly remember anything when I see then in the >> second year where I teach. 
>> >> >> >>> - the MatLab workspace is missing in Python >>> >>> >> yes, but I suppose that can be replaced with well-configured editors >> >> >> >>> success with your attempt, >>> >>> >> thanks >> >> Nicolas >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From prabhu at aero.iitb.ac.in Sun May 6 13:22:26 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sun, 6 May 2007 22:52:26 +0530 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <463E0A70.9060803@ru.nl> References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com> <463E0A70.9060803@ru.nl> Message-ID: <17982.3794.270613.684967@gargle.gargle.HOWL> >>>>> "Stef" == Stef Mientki writes: >> * What is the one single file that I have to install? >> Stef> well you've - Enthought edition (rather buggy) - Enthought Stef> enstaller (for windows users a bit clumsy) I think it would be more constructive to report bugs at an appropriate place and help fix bugs rather than complain or reinvent the wheel. prabhu From s.mientki at ru.nl Sun May 6 13:59:50 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 19:59:50 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <17982.3794.270613.684967@gargle.gargle.HOWL> References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com> <463E0A70.9060803@ru.nl> <17982.3794.270613.684967@gargle.gargle.HOWL> Message-ID: <463E1796.5000709@ru.nl> Prabhu Ramachandran wrote: >>>>>> "Stef" == Stef Mientki writes: >>>>>> > > >> * What is the one single file that I have to install? > >> > Stef> well you've - Enthought edition (rather buggy) - Enthought > Stef> enstaller (for windows users a bit clumsy) > > I think it would be more constructive to report bugs at an appropriate > place and help fix bugs rather than complain or reinvent the wheel.
> I wrote an extensive report with my findings and suggestions to the team that built the Enstaller edition, but I'm not sure it arrived (I didn't get any reaction), and I already mentioned that in this newsgroup. I really don't want to reinvent the wheel, but I think non-Windows users don't really understand what native Windows users expect: "one-button install, without any non-human questions". cheers, Stef Mientki > prabhu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From s.mientki at ru.nl Sun May 6 16:44:12 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 06 May 2007 22:44:12 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: <463DE168.4080306@ru.nl> Message-ID: <463E3E1C.9020308@ru.nl> Moz Champion (Dan) wrote: > Stef Mientki wrote: > >> ram wrote: >> >>> The memory leak in TB has brought my pc to its knees. The memory >>> starts at 26M and slowly grows to around 330M, before my PC completely >>> slows down. I tried compacting folders, deleting .msf files, etc., as >>> suggested by Nir in earlier messages, but nothing seems to help. I am >>> reading mostly plain-text emails from just one hotmail account. Is it >>> possible that the Webmail extension may be causing this problem? >>> >> I don't think so, >> I had disabled all extensions, deleted panacea, XUL, *.msf, >> still the memory usage rises above 400 MB after a few hours, >> both in TB 1.5 and TB 2.0. >> I really think this is a serious problem, >> and it's not recognized by the developers, >> it seems even to be denied ! >> >> In the meantime I tried several other mail / newsreader programs, >> set up with the same accounts and subscriptions; >> the figures from those other programs: >> - during handling the largest accounts memory usage increased to >> about 80 MB >> - decreasing again to 40 MB when doing nothing.
>> >> Unfortunately all these alternatives have other disadvantages ;-) >> I'd love to stick to TB, but 400 MB is just too much :-( >> >> Isn't it possible to make a TB version with memory logging, >> so the (very few ?) people who have noticed this problem >> can pinpoint the cause. >> >> cheers, >> Stef Mientki >> >> >>> --TIA >>> --ram >>> >>> _______________________________________________ >>> support-thunderbird mailing list >>> support-thunderbird at lists.mozilla.org >>> https://lists.mozilla.org/listinfo/support-thunderbird >>> To unsubscribe, send an email to >>> support-thunderbird-request at lists.mozilla.org?subject=unsubscribe >>> >>> >>> > > > People who can't recreate the problem can't fix it. It's really > difficult to fix what you can't see. > > YOU are the person with the problem... YOU can see it... we can't. > > Those who do experience this problem MUST share with us all aspects > of the situation, including things such as turning off (or on) extensions, > and other aspects of the program in testing. > > For example, if you run Thunderbird with NO extensions, do you see the > problem? What are the numbers then? > Can you tell us HOW to configure our systems so we get the same > numbers (or at least the same effects) as you see? > > Running away or back (to another program or to an earlier version) > isn't going to get the problem fixed - if everyone who suffers from it > does this, then the 'problem' disappears! > > There might be almost endless questions, about your setup, your > system, your files, extensions, whatever. Simply because people who > CANNOT recreate your problem are attempting to discover what causes such > a problem. > > You say that you have tried the no-extensions routine, and other > remedies. Have you tried it without all your newsgroups? Create a new > profile without the newsgroups - what's your memory use then? How about > RSS feeds? You have any of those? What about temporarily stopping > them, does that change anything of a material nature?
> Try a new profile with just your email, or newsgroups, or RSS feeds, > or combinations of... determine if the problem exists in ALL or just > when certain requirements are met (RSS feeds are on or such). Perhaps > it's simply one newsgroup... or one server, creating the problem > hi Dan, YES I'm one of the few that has these problems, and I'm glad to help to find the cause of the problem, but that won't work if I just keep complaining in this newsgroup, get the same suggestions each time, after which it goes well for a while, returning to the same problems after some time. I find TB a wonderful program, which I have been using for many years now, with great success, until v1.5. Let me tell you another story: 2-3 years ago I turned off TalkBack (I think it's only in Firefox). Bad decision you might say (and I agree ;-), but every time the program crashed, the talkback also crashed. I think this is a real problem of free programs: the real problems never reach the developers, !! only the obvious ones, detected by simple users ;-) Back to the problem, I've tried almost everything, but I can't find the source, so I think we need a different approach. (btw I write larger programs than TB, with far fewer customers, and we detect and find these kinds of problems / errors within days). Some suggestions: - get a few people that have these kinds of problems and are willing to test-drive - put a logging mechanism in TB, that tracks (user) actions and memory usage I'm willing to test; it's easy to set up a separate TB, that can be built up step by step, and give you my experiences and the log files back after each step. I hope to hear from one of the developers, TB is too beautiful to leave this problem in !
cheers, Stef Mientki From lorenzo.isella at gmail.com Sun May 6 17:25:06 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sun, 06 May 2007 23:25:06 +0200 Subject: [SciPy-user] Optimization & Parallelization of integrate.odeint Message-ID: <463E47B2.4020902@gmail.com> Dear All, I need to run integrate.odeint to deal with a large system of coupled non-linear equations. I will try not to be tedious with the details of the problem, but I need to give you some background to find out if my code (I'll be showing only a snippet) can be better coded in the first place and eventually parallelized. The set of equations I need to solve is (written in LaTeX style): \frac{dn_k}{dt}=\frac{1}{2}\sum_{i+j=k}K(\nu_i,\nu_j) n_i n_j - n_k\sum_{i=1}^\infty K(\nu_i,\nu_k) n_i, k=1,2...N where K(...,...) is a collision kernel (details are not needed; in the example I am just using a constant kernel). This is the snippet of the code I am using now:

#! /usr/bin/env python
from scipy import *
import pylab

x=linspace(10,200,80) # set of initial particle volumes
lensum=len(x) # number of particle bins I use for the initial state
y0=zeros(lensum) # array for the initial state
y0[0]=15. # initial state I want to use

# this is the system of 1st order ODE's I want to solve:
# \frac{dy[k]}{dt}=0.5*\sum_{i+j=k}kernel[i,j]*y[i]*y[j] (creator) +
# -y[k]*\sum_{i=1}^\infty kernel[i,k]*y[i] (destructor)
# NB: careful since in the formula both i and j start from 1!
# In the following, I will be using a trivial constant kernel to test the code
kern=zeros((lensum,lensum))
kern[:,:]=1e-3 # constant kernel

# the following function is the core of the code since it expresses the RHS of the Smoluchowski
# equation. One possible improvement to save CPU time would be to avoid double counting
# the collisions in the creation term, i.e.
instead of working out the loop as
# ((i+1+j+1)==(k+1)) ...bla...bla and then dividing the contributions by 2, I could try
# imposing the condition i>j, i.e.:
# ((i+1+j+1)==(k+1) and i>j) [not sure about the syntax, but this is the idea]

def coupling(y,t,kernel,lensum):
    # note: the body now uses the kernel argument; the original used the
    # global kern and silently ignored the argument passed via args=
    creation=zeros(lensum)
    destruction=zeros(lensum)
    out=zeros(lensum)
    for k in range(0,lensum):
        for i in range(0,lensum):
            destruction[k]=destruction[k]-kernel[i,k]*y[i]
            for j in range(0,lensum):
                if ((i+1+j+1)==(k+1)): # I add 1 to correct the array indexing
                    creation[k]=creation[k]+kernel[i,j]*y[i]*y[j]
        destruction[k]=y[k]*destruction[k]
        creation[k]=0.5*creation[k]
    out[:]=creation[:]+destruction[:]
    return out

t=linspace(0.,4.,20) # I am choosing my time grid for time evolution
y = integrate.odeint(coupling, y0, \
    t,args=(kern,lensum),printmessg=1,rtol=1e-10,atol=1e-10)
print 'y0 is', y0
name_bis='y_complete%05d'%lensum
pylab.save(name_bis,y)

I have three main questions: (1) The first one is actually trivial and does not concern me that much: how come the initial condition y0 gets overwritten by the solver?
Unless there is a strong motivation (like a five-fold to ten-fold speed-up), I would like to stick to Python due to the ease of developing and debugging code. (3.b) Is there a way of parallelizing the code? Really, apart from the set-up of the initial state and the creation of the kernel matrix by calling only once the appropriate function, everything is done in the reported few lines. I will soon have a Linux cluster available and running this code in parallel on just 7-10 nodes is likely to make a huge difference, if that is possible. Apologies for the long post, but I am not that familiar with high performance computing and I am a totally newbie when it comes to parallel computing. Kind Regards Lorenzo From ckkart at hoc.net Sun May 6 21:05:29 2007 From: ckkart at hoc.net (Christian K) Date: Mon, 07 May 2007 10:05:29 +0900 Subject: [SciPy-user] multiple regression: problem with matrix algebra In-Reply-To: <463D93FF.7020006@curioussymbols.com> References: <463D618E.9080201@curioussymbols.com> <463D93FF.7020006@curioussymbols.com> Message-ID: John Pye wrote: > Christian K. wrote: >> John Pye wrote: >> >>> Hi all >>> >>> I am having a little trouble with using optimize.leastsq for multiple >>> regression. I've used an approach based on this tute, which deals only >>> with single regression: >>> http://linuxgazette.net/115/andreasen.html >>> >>> But now in my adaptation to multiple regression, I can get it to work >>> only if I do some dirty loop-based evaluation for my residuals. But if I >>> try to use elegant matrix evaluation, the fit parameters go awry: >>> >> Set the iteration stepsize (epsfcn keyword arg of leastsq) to something >> higher than the default, which is machine precision (?), e.g. 1e-12. >> Then both methods work. >> >> Christian > 'Tis true! Thanks very much. > > So is it fair to say that the default value of this parameter should > perhaps be changed? I'm not an expert but I would say no. 
I think it is good to begin with the smallest step size to avoid stepping beyond the optimum. Christian From ryanlists at gmail.com Sun May 6 22:34:01 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 6 May 2007 21:34:01 -0500 Subject: [SciPy-user] treshold function ? In-Reply-To: <463DE1F3.90706@ru.nl> References: <463D9AC5.8030008@ru.nl> <463DE1F3.90706@ru.nl> Message-ID: I actually learned that reading Travis' article in CiSE. On 5/6/07, Stef Mientki wrote: > > > Ryan Krauss wrote: > > What about this: > > a[a<0.4]=0.0 > > > thanks Ryan, > that seems simple enough. > > cheers. > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From prabhu at aero.iitb.ac.in Sun May 6 23:06:37 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Mon, 7 May 2007 08:36:37 +0530 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <463E1796.5000709@ru.nl> References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com> <463E0A70.9060803@ru.nl> <17982.3794.270613.684967@gargle.gargle.HOWL> <463E1796.5000709@ru.nl> Message-ID: <17982.38845.397737.677564@gargle.gargle.HOWL> >>>>> "Stef" == Stef Mientki writes: > >> * What is the one single file that I have to install? >> >> Stef> well you've - Enthought edition (rather buggy) - Enthought Stef> enstaller (for windows users a bit clumsy) >> >> I think it would be more constructive to report bugs at an >> appropriate place and help fix bugs rather than complain or >> reinvent the wheel. >> Stef> I wrote an extensive report with my findings and suggestions Stef> to the team that built the Enstaller edition, but I'm not Stef> sure it arrived (didn't get any reaction), and I already Stef> mentioned that in this newsgroup. I really don't want to I can't see a ticket filed on enthought trac or a message on enthought-dev.
AFAIK, this list isn't the place for enthought edition bug reports or enstaller related errors. Nor is it the place for talkback related issues! I don't work at enthought, I just appreciate work done by others for a community and hate it when someone whines without having the sense to post on the right list. Sorry, but the tone of your original email really ticked me off. prabhu From giorgio.luciano at chimica.unige.it Mon May 7 08:43:37 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Mon, 07 May 2007 14:43:37 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <17982.38845.397737.677564@gargle.gargle.HOWL> References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com> <463E0A70.9060803@ru.nl> <17982.3794.270613.684967@gargle.gargle.HOWL> <463E1796.5000709@ru.nl> <17982.38845.397737.677564@gargle.gargle.HOWL> Message-ID: <463F1EF9.3000902@chimica.unige.it> Thanks to Steph for the answer here reported. I belong to the Don Chisciotte group of people that is trying to teach people to use numpy/scipy and I'm switching from matlab. (my main field is chemometrics). As already reported there are lot of question to answer to people that generally use windows and want a one file installation or need an enviroment for working more "matlabish" (there are some in alpha dev and they promise good). I hope to keep in touch with other learner and if people want we can create a small Don Chisciotte group for teachin numpy/scipy (well very monty python like... since terry gilliam struggle a lot of years trying to finish the film ;) and about Phan, Steph IMHO was really polite, just wrote that sometime Enthough doesn't work well for him..it's just a comment nothing more.. He found a way for solving his problem and shared the solution.. 
We all know that Enthought's guys are great ;) From S.Mientki at ru.nl Mon May 7 09:06:01 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Mon, 07 May 2007 15:06:01 +0200 Subject: [SciPy-user] treshold function ? In-Reply-To: <463D9AC5.8030008@ru.nl> References: <463D9AC5.8030008@ru.nl> Message-ID: <463F2439.7090604@ru.nl> Stef Mientki wrote: > hello, > > is there a simple threshold function, > that makes all elements of an array zero if the element is smaller than > a given threshold ? > > I couldn't find it in Numpy, Signal, > but maybe I overlooked it. > > Of course it can be done by a simple formula > array_data = array_data * ( array_data > threshold ) > but it's more convenient to use a function. > > I just found this in "stats" ;-) threshold(a, threshmin=None, threshmax=None, newval=0) Like numpy.clip() except that values < threshmin or > threshmax are replaced by newval instead of by threshmin/threshmax (respectively). Returns: a, with values < threshmin or > threshmax replaced with newval Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From S.Mientki at ru.nl Mon May 7 09:31:25 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Mon, 07 May 2007 15:31:25 +0200 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? Message-ID: <463F2A2D.8020009@ru.nl> hello, Unfortunately statistics is not my strongest point, so maybe someone can give me some hints or links for this problem. I have recordings of the maximum activity of persons, and I want to quantify these recordings. Until now we just calculated the mean of the activity pattern.
If I look at some of the recordings, (especially the ones of persons with high activities) and split the recordings into week and weekend activity scores, I think I can clearly see a combination of 2 distributions: - person is in relative rest, leading to a negative exponential distribution - person is active for a longer period of time, resulting in a normal distribution For persons with a high activity, I can estimate the mean and SD of the normal distribution, simply by ignoring the low values, where the exponential distribution is high, while the tail of the exponential distribution doesn't affect the normal distribution much. But for persons with lower activity, the 2 distributions merge too much. Is there a way to separate those distributions? In fact I'm only interested in the parameters of the normal distribution. thanks, Stef Mientki From david.huard at gmail.com Mon May 7 10:29:02 2007 From: david.huard at gmail.com (David Huard) Date: Mon, 7 May 2007 10:29:02 -0400 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? In-Reply-To: <463F2A2D.8020009@ru.nl> References: <463F2A2D.8020009@ru.nl> Message-ID: <91cf711d0705070729u34f3ee82q610a9f4d0bf034b5@mail.gmail.com> Hi Stef, I'm not sure if that's really what you want to do, but you could find the parameters maximizing the likelihood of a mixture of both distributions: A * normal_likelihood(mu, sigma) + B * negative_exponential_likelihood(params) A+B = 1 Anyway, google "mixture model" for more info. There is a wikipedia page about this where you might find more appropriate solutions to your problem. David 2007/5/7, Stef Mientki : > > hello, > > Unfortunately statistics is not my strongest point, > so maybe someone can give me some hints or links, > with this problem.
> > I've recordings of the maximum activity of persons, > and I want to quantify these recordings. > Until now we just calculated mean of the activity pattern. > > If I look at some of the recordings, > (especially the ones of persons with high activities) > and split the recordings in week and week-end activity scores, > I think I can clearly see a combination of 2 distributions: > > - person is in relative rest, leading to a negative exponential > distribution > - person is active for a longer period of time, resulting in normal > distribution > > For persons with a high activity, > I can estimate the estimate the mean and SD of the normal distribution, > simply by ignoring the low values, where the exponential distribution is > high, > while the tail of the exponential distribution doesn't affect the normal > distribution much. > > But for persons with lower activity, the 2 distributions merge too much. > Is there a way to separate those distributions, > In fact I'm only interested in the parameters of the normal distribution. > > thanks, > Stef Mientki > > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of > Commerce - trade register 41055629 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
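[Editor's note: David's mixture-model suggestion can be made concrete by writing down the log-likelihood of the data under A*Normal(mu, sigma) + (1-A)*Exponential(lam) and handing its negative to an optimizer. A numpy-only sketch on synthetic data; the sample sizes and parameter values below are invented purely for illustration, and a real fit would maximize this with something like scipy.optimize.fmin or leastsq:]

```python
import numpy as np

def mixture_loglik(x, A, mu, sigma, lam):
    """Log-likelihood of x under A*Normal(mu, sigma) + (1 - A)*Exponential(lam)."""
    normal = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    expon = lam * np.exp(-lam * x) * (x >= 0)  # the "relative rest" component
    return np.sum(np.log(A * normal + (1.0 - A) * expon))

# synthetic activity scores: 70% exponential "rest", 30% normal "activity"
rng = np.random.RandomState(0)
x = np.concatenate([rng.exponential(scale=2.0, size=700),   # i.e. lam = 0.5
                    rng.normal(loc=10.0, scale=1.5, size=300)])

# the likelihood is higher at the true parameters than at a wrong guess;
# this gradient is exactly what an optimizer would climb
ll_true = mixture_loglik(x, A=0.3, mu=10.0, sigma=1.5, lam=0.5)
ll_wrong = mixture_loglik(x, A=0.3, mu=20.0, sigma=1.5, lam=0.5)
```

Once the maximizing A, mu, sigma, lam are found, mu and sigma are the normal-distribution parameters Stef is after; the names here are illustrative, not anything from scipy.stats.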
URL: From val at vtek.com Mon May 7 11:17:40 2007 From: val at vtek.com (val) Date: Mon, 7 May 2007 11:17:40 -0400 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes References: <463DE168.4080306@ru.nl> <463DF5C6.6050204@curioussymbols.com><463E0A70.9060803@ru.nl> <17982.3794.270613.684967@gargle.gargle.HOWL> <463E1796.5000709@ru.nl><17982.38845.397737.677564@gargle.gargle.HOWL> <463F1EF9.3000902@chimica.unige.it> Message-ID: <035d01c790ba$da5c5730$6400a8c0@D380> ----- Original Message ----- From: "Giorgio Luciano" To: "SciPy Users List" Sent: Monday, May 07, 2007 8:43 AM Subject: Re: [SciPy-user] Scipy / matplotlib to replace matlab and indexes > Thanks to Steph for the answer here reported. [...] > Steph IMHO was really polite, just wrote that sometimes Enthought doesn't > work well for him.. it's just a comment, nothing more.. He found a way for > solving his problem and shared the solution.. We all know that > Enthought's guys are great ;) > Well said, Giorgio. val From S.Mientki at ru.nl Mon May 7 11:24:05 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Mon, 07 May 2007 17:24:05 +0200 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? In-Reply-To: <91cf711d0705070729u34f3ee82q610a9f4d0bf034b5@mail.gmail.com> References: <463F2A2D.8020009@ru.nl> <91cf711d0705070729u34f3ee82q610a9f4d0bf034b5@mail.gmail.com> Message-ID: <463F4495.3090709@ru.nl> thanks David, David Huard wrote: > Hi Stef, > > I'm not sure if that's really what you want to do, but you could find > the parameters maximizing the likelihood of a mixture of both > distributions: > > A * normal_likelihood(mu, sigma) + B * > negative_exponential_likelihood(params) > A+B = 1 > I think that's what I'm looking for. > Anyway, google "mixture model" for more info. There is a wikipedia > page about this where you might find more appropriate solutions to > your problem.
> Oh, enough to spend the whole weekend ;-) Thanks, cheers, Stef Mientki From peridot.faceted at gmail.com Mon May 7 16:27:53 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 7 May 2007 16:27:53 -0400 Subject: [SciPy-user] Optimization & Parallelization of integrate.odeint In-Reply-To: <463E47B2.4020902@gmail.com> References: <463E47B2.4020902@gmail.com> Message-ID: First a comment, applicable to many questions we see on this list: the main feature of numpy is that it allows you to write most array operations without loops. This has a performance advantage, which is what people are usually asking about, but that is *not* its main advantage. The main advantage of numpy's vector operations is that you can express your code more clearly and more simply. For a simple example: r = zeros(lensum) for k in xrange(lensum): r[k] = x[k]*y[k] is a bigger conceptual chunk than r = x*y That's the real advantage of numpy. Once you've got used to it, things that looked like complicated algorithms before look much simpler, because they're made up of familiar operations. So my advice is, get used to the basic operations of numpy: elementwise operations, slicing, sum(), dot(), transpose(), boolean indexing, pick-and-choose indexing and where(), convolve(), sort(), searchsorted(), and a few others. (Not really relevant here, but for many applications it helps to think of many of the scipy tools as simple operations: svd(), the spline interpolation tools, quad(), and so on.) > I have three main questions > (1) The first one is actually trivial and does not concern me that much: > how come the initial condition y0 gets overwritten by the solver?
> It is not a problem in itself since I find it correctly as the first row > of the returned array y, yet I would like to understand what is going on This is a weird convention inherited from the underlying FORTRAN libraries; it's supposed to save copying in the common case that you're just stepping through the output range. > (2)any suggestions about the general set-up of the problem? I am using > unfortunately many loops to mimic the various summations involved. Does > anyone know how to code it better? Do map() and sum() help here? map() is generally appropriate for lists rather than numpy arrays. Several of the loops could be eliminated by using numpy's elementwise operations; the innermost loop is totally unnecessary since it only ever does anything for a single index. sum() is a useful function, and you should also know about dot(). > for k in range(0,lensum): > for i in range(0,lensum): > destruction[k]=destruction[k]-kern[i,k]*y[i] > for j in range(0,lensum): > if ((i+1+j+1)==(k+1) ): #I add 1 to correct the array > #indexing > creation[k]=creation[k]+kern[i,j]*y[i]*y[j] This only does anything if j==k-i-1, so it is creation[k]+=kern[i,k-i-1]*y[i]*y[k-i-1] apart from some boundary checks. > > destruction[k]=y[k]*destruction[k] > > creation[k]=0.5*creation[k] So a not terribly efficient way would be: destruction = -dot(transpose(kern),y)*y kyn = kern*y[:,newaxis]*y[newaxis,:] for k in xrange(lensum): creation[k] = sum(kyn[arange(k),k-arange(k)-1]) > (3)So far the code has passed my tests, so it is time to run it to solve > a really large (~3-7000 equations) system with a more complicated > initial state. > In principle the code can still run on a desktop PC [due to its modest > RAM requirements], but I suspect that in the present form could take > weeks to complete (it has already been running for days). So here come > my most urgent questions: > (3.a) am I losing significant performance by using a Python wrapper > instead of Fortran directly? 
Unless there is a strong motivation (like a five-fold to ten-fold speed-up), I would like to stick to Python due to the ease of developing and debugging code. If you write it efficiently, with minimal loops, you will not lose a great deal of performance by using Python. You might also look into one of the other tools - numexpr, weave, etc. - for accelerating code before you jump to FORTRAN. But try Python first. The right question is not "how much faster would it run?" but "can I make it run fast enough?" > (3.b) Is there a way of parallelizing the code? Really, apart from the > set-up of the initial state and the creation of the kernel matrix by > calling only once the appropriate function, everything is done in the > reported few lines. I will soon have a Linux cluster available and > running this code in parallel on just 7-10 nodes is likely to make a > huge difference, if that is possible. Solving ODEs, at least the way odeint and friends do it, is a mostly sequential process - you calculate the derivative at a few samples, find an approximate solution that covers the range of values you just sampled; if it's a good approximation, you accept its value at the endpoint and repeat the process, otherwise you cut it down and start working on a smaller step. But until you've taken a step, you don't know what values to use to evaluate your function at the next step. So there's not a lot of scope for parallelization, even if odeint had been written to do it. That said, if your RHS function (what you posted) is slow enough, you might look into parallelizing it, using such tools as MPI. This will probably be laborious and painful; I'd look hard at other ways of speeding it up first. There has probably been some research into how to parallelize solution of ODEs, which I am not at all up-to-date on.
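[Editor's note: Anne's vectorized pieces can be assembled into a complete drop-in replacement for Lorenzo's coupling() function. A sketch, keeping the 0.5 double-counting factor from the original (which Anne's snippet leaves out), and checked against the original triple loop on a small constant kernel:]

```python
import numpy as np

def coupling_loops(y, t, kern, lensum):
    # Lorenzo's original triple-loop RHS, kept for comparison
    creation = np.zeros(lensum)
    destruction = np.zeros(lensum)
    for k in range(lensum):
        for i in range(lensum):
            destruction[k] -= kern[i, k] * y[i]
            for j in range(lensum):
                if i + 1 + j + 1 == k + 1:  # i.e. j == k - i - 1
                    creation[k] += kern[i, j] * y[i] * y[j]
        destruction[k] *= y[k]
        creation[k] *= 0.5
    return creation + destruction

def coupling_vec(y, t, kern, lensum):
    # destruction[k] = -y[k] * sum_i kern[i, k] * y[i]
    destruction = -np.dot(kern.T, y) * y
    # creation[k] = 0.5 * sum over i + j = k - 1 of kern[i, j] * y[i] * y[j]
    kyy = kern * y[:, np.newaxis] * y[np.newaxis, :]
    creation = np.zeros(lensum)
    for k in range(1, lensum):
        i = np.arange(k)
        creation[k] = 0.5 * kyy[i, k - i - 1].sum()  # anti-diagonal sum
    return creation + destruction

lensum = 20
kern = np.full((lensum, lensum), 1e-3)  # constant kernel, as in the test case
y = np.linspace(1.0, 2.0, lensum)       # arbitrary positive state for checking
```

The innermost two loops collapse into elementwise array operations; only the sum over the anti-diagonals of kyy remains as a Python loop, and even that could be traded for a convolution when the kernel is constant.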
Good luck, Anne From Bernhard.Voigt at desy.de Mon May 7 16:35:38 2007 From: Bernhard.Voigt at desy.de (Bernhard Voigt) Date: Mon, 7 May 2007 22:35:38 +0200 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? (Stef Mientki) Message-ID: <21a270aa0705071335qab4aae1j14865d77f9c20a10@mail.gmail.com> Hi Stef, I've attached an example where I histogram samples drawn from an exponential and normal distribution. The data is fit with a function (sum of exponential and normal distribution) using scipy.optimize.leastsq. The fit doesn't succeed always :-(. One should think about a constraint to the normalization of the sum of both functions and use a normed histogram... Anyway, I hope the example helps. By the way, you need matplotlib to see the result. Best wishes! Bernhard > > Message: 8 > Date: Mon, 07 May 2007 17:24:05 +0200 > From: Stef Mientki > Subject: Re: [SciPy-user] Combination of negative-exponential + normal > distribution, how to tackle ? > To: SciPy Users List > Message-ID: <463F4495.3090709 at ru.nl> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > thanks David, > > David Huard wrote: > > Hi Stef, > > > > I'm not sure if that's really what you want to do, but you could find > > the parameters maximizing the likelihood of a mixture of both > > distributions: > > > > A * normal_likelihood(mu, sigma) + B * > > negative_exponential_likelihood(params) > > A+B = 1 > > > I think that's waht I'm looking for. > > Anyway, google "mixture model" for more info. There is a wikipedia > > page about this where you might find more appropriate solutions to > > your problem. 
> > > Oh, enought to spend the whole weekend ;-) > Thanks, > cheers, > Stef Mientki > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of > Commerce - trade register 41055629 > > > > > ------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-user Digest, Vol 45, Issue 16 > ****************************************** > -- Bernhard Voigt Phone: ++49 33762 - 7 - 7291 DESY, Zeuthen Mail: bernhard.voigt at desy.de Platanenallee 6 Web: www-zeuthen.desy.de/~bvoigt 15738 Zeuthen AIM/Skype: bernievoigt -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: example.py Type: text/x-python Size: 2869 bytes Desc: not available URL: From S.Mientki at ru.nl Mon May 7 07:01:20 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Mon, 07 May 2007 13:01:20 +0200 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? Message-ID: <463F0700.2000605@ru.nl> hello, Unfortunately statistics is not my strongest point, so maybe someone can give me some hints or links, with this problem. I've recordings of the maximum activity of persons, and I want to quantify these recordings. Until now we just calculated mean of the activity pattern. 
If I look at some of the recordings (especially the ones of persons with high activities) and split the recordings in week and week-end activity scores, I think I can clearly see a combination of 2 distributions: - person is in relative rest, leading to a negative exponential distribution - person is active for a longer period of time, resulting in a normal distribution For persons with a high activity, I can estimate the mean and SD of the normal distribution, simply by ignoring the low values, where the exponential distribution is high, while the tail of the exponential distribution doesn't affect the normal distribution much. But for persons with lower activity, the 2 distributions merge too much. Is there a way to separate those distributions? In fact, I'm only interested in the parameters of the normal distribution. thanks, Stef Mientki Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From S.Mientki at ru.nl Tue May 8 04:40:42 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Tue, 08 May 2007 10:40:42 +0200 Subject: [SciPy-user] Combination of negative-exponential + normal distribution, how to tackle ? (Stef Mientki) In-Reply-To: <21a270aa0705071335qab4aae1j14865d77f9c20a10@mail.gmail.com> References: <21a270aa0705071335qab4aae1j14865d77f9c20a10@mail.gmail.com> Message-ID: <4640378A.4030908@ru.nl> hi Bernhard, Bernhard Voigt wrote: > > Hi Stef, > > I've attached an example where I histogram samples drawn from an > exponential and normal distribution. The data is fit with a function > (sum of exponential and normal distribution) using > scipy.optimize.leastsq . The fit doesn't always succeed :-(. One > should think about a constraint to the normalization of the sum of > both functions and use a normed histogram... > > Anyway, I hope the example helps. By the way, you need matplotlib to > see the result. 
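[The example.py attachment is scrubbed from this archive. For readers following along, a minimal sketch of the approach Bernhard describes, histogramming the samples with density normalization and least-squares fitting a weighted sum of the two densities, might look like the following; the parameter names and starting values here are hypothetical, not Bernhard's.]

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.RandomState(42)
# synthetic "activity" data: rest periods (exponential) + active periods (normal)
data = np.concatenate([rng.exponential(scale=1.0, size=20000),
                       rng.normal(loc=5.0, scale=1.0, size=10000)])

# density=True gives a normed histogram, so the mixture weights can sum to one
counts, edges = np.histogram(data, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def mixture(p, x):
    a, lam, mu, sigma = p          # a = weight of the exponential part
    expo = lam * np.exp(-lam * x)
    gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return a * expo + (1.0 - a) * gauss

# least-squares fit of the mixture density to the histogram
p_fit, ier = leastsq(lambda p: mixture(p, centers) - counts,
                     x0=[0.5, 0.5, 4.0, 2.0])
a, lam, mu, sigma = p_fit
```

As Bernhard notes, the fit is sensitive to the starting guess; writing the weights as a and 1 - a is one way to impose the normalization constraint he mentions.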
> This is really FANTASTIC, as both a beginner in python and in stats, this would have cost me days. I didn't have the time yet to run it on my own data, but I looked at the demo output of your example, and that seems to be exactly what I need. Thanks very much !! (btw here you see another advantage of SciPy over MatLab, "in general MatLab mailing-list gives you just some hints and directions", "in general SciPy mailing-list helps you, until a solution is available" !!) cheers, Stef Mientki Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From nwagner at iam.uni-stuttgart.de Tue May 8 09:38:37 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 May 2007 15:38:37 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <20070504131452.GF23778@mentat.za.net> References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> Message-ID: <46407D5D.3070803@iam.uni-stuttgart.de> Stefan van der Walt wrote: > Hi Issa > > On Fri, May 04, 2007 at 01:47:45PM +0200, Issa Karambal wrote: > >> if there anyone who knows how to simulate 'spline wavelet decomposition >> and reconstruction' for a given signal 'f'. >> > > Take a look at > > http://wavelets.scipy.org > > which includes different wavelet families, including Haar, Daubechies, > Symlets, Coiflets and more. > > 'demo/dwt_signal_decomposition.py' shows how a signal is decomposed. > > Regards > Stéfan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi all, I tried to install pywt via svn. 
python setup.py build running build running build_py creating build creating build/lib.linux-x86_64-2.4 creating build/lib.linux-x86_64-2.4/pywt copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt running build_ext building 'pywt._pywt' extension Traceback (most recent call last): File "setup.py", line 77, in ? do_setup() File "setup.py", line 72, in do_setup cmdclass = cmdclass, File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, in run self.build_extensions() File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, in build_extension sources = self.swig_sources(sources, ext) TypeError: swig_sources() takes exactly 2 arguments (3 given) Any idea how to fix this problem ? 
Nils From filip.wasilewski at gmail.com Tue May 8 10:09:13 2007 From: filip.wasilewski at gmail.com (Filip Wasilewski) Date: Tue, 8 May 2007 16:09:13 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <46407D5D.3070803@iam.uni-stuttgart.de> References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> Message-ID: Hi Nils, can you provide more details on your build environment? Please note that you need the Pyrex branch from the lxml project in order to build pywt from source -- import from http://codespeak.net/svn/lxml/pyrex/ svn repository. Filip On 5/8/07, Nils Wagner wrote: > Stefan van der Walt wrote: > > Hi Issa > > > > On Fri, May 04, 2007 at 01:47:45PM +0200, Issa Karambal wrote: > > > >> if there anyone who knows how to simulate 'spline wavelet decomposition > >> and reconstruction' for a given signal 'f'. > >> > > > > Take a look at > > > > http://wavelets.scipy.org > > > > which includes different wavelet families, including Haar, Daubechies, > > Symlets, Coiflets and more. > > > > 'demo/dwt_signal_decomposition.py' shows how a signal is decomposed. > > > > Regards > > St?fan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Hi all, > > I tried to install pywt via svn. 
> > python setup.py build > running build > running build_py > creating build > creating build/lib.linux-x86_64-2.4 > creating build/lib.linux-x86_64-2.4/pywt > copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt > running build_ext > building 'pywt._pywt' extension > Traceback (most recent call last): > File "setup.py", line 77, in ? > do_setup() > File "setup.py", line 72, in do_setup > cmdclass = cmdclass, > File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup > dist.run_commands() > File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands > self.run_command(cmd) > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run > self.run_command(cmd_name) > File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, > in run > self.build_extensions() > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, > in build_extensions > self.build_extension(ext) > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, > in build_extension > sources = self.swig_sources(sources, ext) > TypeError: swig_sources() takes exactly 2 arguments (3 given) > > > Any idea how to fix this problem ? 
> > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Tue May 8 10:51:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 May 2007 16:51:53 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> Message-ID: On Tue, 8 May 2007 16:09:13 +0200 "Filip Wasilewski" wrote: > Hi Nils, > > can you provide more details on your build environment? > > Please note that you need the Pyrex branch from the lxml >project in > order to build pywt from source -- import from > http://codespeak.net/svn/lxml/pyrex/ svn repository. > >Filip > Hi Filip, Thank you for your prompt reply. Just now I have installed pyrex on OpenSuSE10.2 (Linux 64 bit system) The next problem is python setup.py build running build running build_py creating build creating build/lib.linux-x86_64-2.5 creating build/lib.linux-x86_64-2.5/pywt copying pywt/multilevel.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/multidim.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/__init__.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/numerix.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/tresholding.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/release_details.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.5/pywt copying pywt/wnames.py -> build/lib.linux-x86_64-2.5/pywt running build_ext building 'pywt._pywt' extension creating build/temp.linux-x86_64-2.5 creating build/temp.linux-x86_64-2.5/src gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c src/_pywt.c -o build/temp.linux-x86_64-2.5/src/_pywt.o -Wno-uninitialized -Wno-unused -O2 gcc -pthread 
-fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c src/common.c -o build/temp.linux-x86_64-2.5/src/common.o -Wno-uninitialized -Wno-unused -O2 gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c src/convolution.c -o build/temp.linux-x86_64-2.5/src/convolution.o -Wno-uninitialized -Wno-unused -O2 src/convolution.c:78: error: conflicting types for 'downsampling_convolution' src/convolution.h:27: error: previous declaration of 'downsampling_convolution' was here src/convolution.c:299: error: conflicting types for 'allocating_downsampling_convolution' src/convolution.h:32: error: previous declaration of 'allocating_downsampling_convolution' was here error: command 'gcc' failed with exit status 1 Nils From filip.wasilewski at gmail.com Tue May 8 11:44:08 2007 From: filip.wasilewski at gmail.com (Filip Wasilewski) Date: Tue, 8 May 2007 17:44:08 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> Message-ID: Hi, recently I was doing some changes to the trunk and that might introduce some problems on 64bit platforms (I'm still doing most dev on 32bits). I have committed a possible fix, but don't know if it solves the problem on your platform. Can you checkout the 0.1.6 release version and tell if the issue is also present there? -- http://wavelets.scipy.org/svn/multiresolution/pywt/tags/release_0_1_6/ Thanks, Filip On 5/8/07, Nils Wagner wrote: > On Tue, 8 May 2007 16:09:13 +0200 > "Filip Wasilewski" wrote: > > Hi Nils, > > > > can you provide more details on your build environment? 
> > > > Please note that you need the Pyrex branch from the lxml > >project in > > order to build pywt from source -- import from > > http://codespeak.net/svn/lxml/pyrex/ svn repository. > > > >Filip > > > Hi Filip, > > Thank you for your prompt reply. Just now I have > installed pyrex on OpenSuSE10.2 (Linux 64 bit system) > > The next problem is > > python setup.py build > running build > running build_py > creating build > creating build/lib.linux-x86_64-2.5 > creating build/lib.linux-x86_64-2.5/pywt > copying pywt/multilevel.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/multidim.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/__init__.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/numerix.py -> build/lib.linux-x86_64-2.5/pywt > copying pywt/tresholding.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/release_details.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/wavelet_packets.py -> > build/lib.linux-x86_64-2.5/pywt > copying pywt/wnames.py -> build/lib.linux-x86_64-2.5/pywt > running build_ext > building 'pywt._pywt' extension > creating build/temp.linux-x86_64-2.5 > creating build/temp.linux-x86_64-2.5/src > gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC > -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c > src/_pywt.c -o build/temp.linux-x86_64-2.5/src/_pywt.o > -Wno-uninitialized -Wno-unused -O2 > gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC > -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c > src/common.c -o build/temp.linux-x86_64-2.5/src/common.o > -Wno-uninitialized -Wno-unused -O2 > gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC > -DPY_EXTENSION -Isrc -I/usr/include/python2.5 -c > src/convolution.c -o > build/temp.linux-x86_64-2.5/src/convolution.o > -Wno-uninitialized -Wno-unused -O2 > src/convolution.c:78: error: conflicting types for > 
'downsampling_convolution' > src/convolution.h:27: error: previous declaration of > 'downsampling_convolution' was here > src/convolution.c:299: error: conflicting types for > 'allocating_downsampling_convolution' > src/convolution.h:32: error: previous declaration of > 'allocating_downsampling_convolution' was here > error: command 'gcc' failed with exit status 1 > > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Filip Wasilewski From nwagner at iam.uni-stuttgart.de Tue May 8 12:12:47 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 May 2007 18:12:47 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> Message-ID: On Tue, 8 May 2007 17:44:08 +0200 "Filip Wasilewski" wrote: > Hi, > > recently I was doing some changes to the trunk and that >might > introduce some problems on 64bit platforms (I'm still >doing most dev > on 32bits). > > I have committed a possible fix, but don't know if it >solves the > problem on your platform. > > Can you checkout the 0.1.6 release version and tell if >the issue is > also present there? -- > Works like a charm ! Thank you very much ! Cheers, Nils http://wavelets.scipy.org/svn/multiresolution/pywt/tags/release_0_1_6/ > > Thanks, >Filip From lfriedri at imtek.de Tue May 8 12:26:59 2007 From: lfriedri at imtek.de (Lars Friedrich) Date: Tue, 08 May 2007 18:26:59 +0200 Subject: [SciPy-user] Optimization & Parallelization of, integrate.odeint In-Reply-To: References: Message-ID: <4640A4D3.4040307@imtek.de> Hello Anne, hello all, thank you very much for your post. I am also interested in solving ODEs and your post encourages me to use python for this. 
Anne Archibald wrote: > If you write it efficiently, with minimal loops, you will not lose a > great deal of performance by using python. Ok. I have the problem that my ODEs are usually very simple like x'(t) = input(t) - k(x) * x(t) where k(x) is a nonlinear function of x and x is a scalar(!). So I don't have the chance to avoid loops, because there are none. > > Solving ODEs, at least the way odeint and friends do it, is a mostly > sequential process - you calculate the derivative at a few samples, > find an approximate solution that covers the range of values you just > sampled; if it's a good approximation, you accept its value at the > endpoint and repeat the process, otherwise you cut it down and start > working on a smaller step. But until you've taken a step, you don't > know what values to use to evaluate your function at the next step. So > there's not a lot of scope for parallelization, even if odeint had > been written to do it. I understand. This means that the solver will do the unavoidable "looping" over the system function for me. But if the system function is a basic python function, this will cause some overhead that makes the simulation slow, or did I get something wrong here? > You might also look into > one of the other tools - numexpr, weave, etc. - for accelerating code > before you jump to FORTRAN. Ok. What do you recommend? (Fortran is no option for me ;-) Numexpr is in the sandbox, so I started to look at weave. 
But it seems to me that I need some work to start using weave. (setting up the compiler?, understanding how weave works...) This is no problem, if I know that I am on the right way. So can you recommend the combination weave/odeint or is there a better way? Do you think this will speed up simulations with *scalar* or *low-dimensional* states? 
Thanks Lars From peridot.faceted at gmail.com Tue May 8 12:46:19 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 8 May 2007 12:46:19 -0400 Subject: [SciPy-user] Optimization & Parallelization of, integrate.odeint In-Reply-To: <4640A4D3.4040307@imtek.de> References: <4640A4D3.4040307@imtek.de> Message-ID: On 08/05/07, Lars Friedrich wrote: > Anne Archibald wrote: > > If you write it efficiently, with minimal loops, you will not lose a > > great deal of performance by using python. > Ok. I have the problem that my ODEs are usually very simple like > > x'(t) = input(t) - k(x) * x(t) > > where k(x) is a nonlinear function of x and x is a scalar(!). So I don't > have the chance to avoid loops, because there are none. Well, the answer depends on exactly why the simple approach is too slow: are you solving the equation many times? are you following the equation at many times? does the solution have many sharp features the solver is working hard to track? is the problem stiff (forcing the solver to work as if there were sharp features even if there aren't - scipy's solvers should work fine here but check their full output to see if it's happening)? is the nonlinear function k(x) expensive to evaluate? There are ways to speed it up in most of the above cases, but they are very different. > I understand. This means that the solver will do the unavoidable > "looping" over the system function for me. But if the system function is > a basic python function, this will cause some overhead that makes the > simulation slow, or did I get something wrong here? Yes. It's not looping per se that's the problem, it's that execution of python code is fairly slow. And note, "fairly slow" just means "much slower than C code accomplishing the same task". It may be fast enough for your application. > Ok. What do you recommend? (Fortran is no option for me ;-) Numexpr is > in the sandbox, so I started to look at weave. 
But it seems to me that I > need some work to start using weave. (setting up the compiler?, > understanding how weave works...) This is no problem, if I know, that I > am on the right way. So can you recommend the combination weave/odeint > or is there a better way? Do you think this will speed up simulations > with *scalar* or *low-dimensional* states? Again, it depends what the problem is. Using odeint will certainly incur *some* overhead for every call of the RHS function, simply because the call has to go through python. But weave will allow you to speed up the RHS itself. There's also a pure python optimizer (pyrex? pyro? not sure what it's called) that's supposed to work fairly well. In short: the answer depends totally on why the obvious approach is too slow. Anne From c.gillespie at ncl.ac.uk Tue May 8 12:47:19 2007 From: c.gillespie at ncl.ac.uk (Colin Gillespie) Date: Tue, 08 May 2007 17:47:19 +0100 Subject: [SciPy-user] hyp1f1 -- Confluent hypergeometric function (1F1) Message-ID: <4640A997.3040404@ncl.ac.uk> Dear All, I'm using the confluent hypergeometric function and I'm running into some problems. >>> from scipy.special import hyp1f1 >>> hyp1f1(1,1,10) 22026.465794806718 #Same as maple and Abramowitz and Stegun >>> hyp1f1(3,4,-6) 0.027777777777777769 #Maple gives 0.0260564.. There seems to be some sort of rounding error. Can I request more digits (or is this a bug)? Thanks Colin -- Dr Colin Gillespie http://www.mas.ncl.ac.uk/~ncsg3/ From rhc28 at cornell.edu Tue May 8 12:49:16 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Tue, 8 May 2007 12:49:16 -0400 Subject: [SciPy-user] Optimization & Parallelization of, integrate.odeint In-Reply-To: <4640A4D3.4040307@imtek.de> References: <4640A4D3.4040307@imtek.de> Message-ID: Hi, Frankly, for low-dimensional systems, I find the scipy-wrapped vode and odeint solvers to be quite fast enough for most uses. 
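[To make the pattern in this thread concrete: Lars's scalar equation x'(t) = input(t) - k(x)*x(t) can be pushed through odeint directly, with the extra parameter passed as a one-element tuple as shown earlier in the thread. The drive amp*sin(t) and the damping k(x) = 1 + x**2 below are made-up stand-ins, not Lars's actual model.]

```python
import numpy as np
from scipy.integrate import odeint

def k(x):
    # hypothetical nonlinear coefficient; the real k(x) is problem-specific
    return 1.0 + x * x

def rhs(x, t, amp):
    # x'(t) = input(t) - k(x) * x(t), with input(t) = amp * sin(t)
    return amp * np.sin(t) - k(x) * x

ts = np.linspace(0.0, 10.0, 201)
xs = odeint(rhs, 0.0, ts, args=(1.0,))   # note the one-element tuple (1.0,)
```

Every call of rhs here goes back through the Python layer; that per-step cost is exactly the overhead Anne and Rob discuss, and what moving the RHS into compiled code removes.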
It's only if you're going to start doing things like parameter sensitivity calculations or large parameter sweeps that you might have to wait a bit longer for your answers. Not to be flippant, but the only truly fast way to use python for solving ODEs is to only use it for managing your system, and actually *solve* the thing entirely in the same piece of C or Fortran code. By which I am trying to say that for ODE problems for which you really care about performance, the C- or Fortran-based integrators that have been interfaced to by PyDSTool and SloppyCell are your best bet, AFAIK. With these you can specify and manipulate your ODE definitions, parameters etc. in python, and work with the output in python, but the RHS functions etc. will actually get *automatically* converted into C code for the integrators. So there's no call-back to Python functions, which has a lot of inefficient overhead. Also, I wouldn't have thought that using weave for your RHS functions will really make a great improvement, as from my understanding information still has to flow back and forth to odeint via the python layer for every function call. But I'd be interested to see a comparative test of that, or to be educated otherwise! -Rob From novak at ucolick.org Tue May 8 13:45:10 2007 From: novak at ucolick.org (Greg Novak) Date: Tue, 8 May 2007 10:45:10 -0700 Subject: [SciPy-user] Intel compiler error that should be a warning? Message-ID: I'm trying to compile Scipy using the Intel C Compiler and I'm getting an error that I think should be a warning. The machine is a cluster of Intel 64 bit processors and the error happens when compiling Lib/special/cephes/const.c. 
Here are the error messages: novak at pleiades cephes]$ icc const.c const.c(92): error: floating-point operation result is out of range double INFINITY = 1.0/0.0; /* 99e999; */ ^ const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ compilation aborted for const.c (code 2) I looked and looked, and I can't seem to find a compiler flag that will make this into a warning rather than an error. There's an option -ww that looks like it might work, but it requires a number, like in the following example they give in the manual: prompt>icc -Wall test.c remark #177: variable 'x' was declared but never referenced To disable warning #177, use the -wd option: prompt>icc -Wall -wd177 test.c Of course, there's no identifying number in my messages, just the line number in the source code. Any idea how to make this work? Thanks, Greg From in.the.darkside at gmail.com Tue May 8 14:45:43 2007 From: in.the.darkside at gmail.com (darkside) Date: Tue, 8 May 2007 18:45:43 +0000 (UTC) Subject: [SciPy-user] fminbound syntax Message-ID: Hello everyone: I'm new to scipy, so I'm sorry if any of my questions are silly. I'm trying to find the maxima, absolute and local, of a function, in order to fit an exponential curve and get the exponential argument. My function is the solution of a coupled system of equations: ------------ def derivs3(x,t,gamma,omega,dl): d1 = omega*x[2] - gamma *x[0] d2 = dl*x[2] - (gamma/2.)* x[1] d3 = -omega *x[0] - dl*x[1] - (gamma/2.)* x[2] + (omega/2.) return d1,d2,d3 def solucion(a,t,gamma, omega, dl): sol=odeint(derivs3,a,t,(gamma,omega,dl)) return sol -------------------------------------- In the case I'm interested in, the solution has the form of a sin*exp, so I want to find the envelope function, an exponential. 
To do this, I can find the maxima and fit them, so I use: ------------------------------------ def g4(t1): sol2=odes.solucion((0.5,0,0),t1,0.5,5,0) return sol2[:,0] x_max = optimize.fminbound(g4,0,2) --------------------------------------- To use fminbound, I need a function that returns only a single array, so I use the g4 function. I use (0,2) as bounds. The problem I have is that the return is: ------------------- x_max 1.9999959949686341 ---------------------- Those aren't the maxima in the interval. If I change the interval: -------------- x_max = optimize.fminbound(g4,0.1,4) x_max 3.9999945129622403 --------------- And so on. I don't know what I'm doing wrong, so if anyone can help, I'll be really happy. From cookedm at physics.mcmaster.ca Tue May 8 16:44:15 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 8 May 2007 16:44:15 -0400 Subject: [SciPy-user] Intel compiler error that should be a warning? In-Reply-To: References: Message-ID: <20070508204415.GA19076@arbutus.physics.mcmaster.ca> On Tue, May 08, 2007 at 10:45:10AM -0700, Greg Novak wrote: > I'm trying to compile Scipy using the Intel C Compiler and I'm getting > an error that I think should be a warning. The machine is a cluster > of Intel 64 bit processors and the error happens when compiling > Lib/special/cephes/const.c. Here are the error messages: > > novak at pleiades cephes]$ icc const.c > const.c(92): error: floating-point operation result is out of range > double INFINITY = 1.0/0.0; /* 99e999; */ > ^ > const.c(97): error: floating-point operation result is out of range > double NAN = 1.0/0.0 - 1.0/0.0; > ^ > const.c(97): error: floating-point operation result is out of range > double NAN = 1.0/0.0 - 1.0/0.0; > ^ > compilation aborted for const.c (code 2) Are you using the latest svn version of scipy? This bit of code should only be compiled if UNK was defined in mconf.h, and that's never defined anymore. 
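[A note on the fminbound question above: optimize.fminbound *minimizes* a scalar function, so pointed at an oscillating solution it will happily walk toward the interval boundary instead of a peak, and handing it an array-valued objective like g4 is not well defined in any case. To locate a maximum, minimize the negated function over a bracket containing a single peak. A sketch with a hypothetical damped oscillation exp(-t/4)*sin(5t) standing in for the ODE solution:]

```python
import numpy as np
from scipy.optimize import fminbound

def f(t):
    # toy damped oscillation standing in for the ODE solution
    return np.exp(-0.25 * t) * np.sin(5.0 * t)

# fminbound minimizes, so negate f to find a maximum; bracket one peak only.
t_peak = fminbound(lambda t: -f(t), 0.0, 0.6)
# analytically the peak satisfies tan(5 t) = 20, i.e. t is about 0.3042
```

Fitting the envelope then reduces to collecting such peaks over successive brackets and fitting an exponential to the (t_peak, f(t_peak)) pairs.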
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From elfnor at gmail.com Tue May 8 19:24:35 2007 From: elfnor at gmail.com (Eleanor) Date: Tue, 8 May 2007 23:24:35 +0000 (UTC) Subject: [SciPy-user] Least squares with covariance - equivalent to Matlab [x, stdx, mse] = lscov(A, b, V) Message-ID: Hi I'm fairly new to python and Scipy. I need to solve overdetermined least squares A * x = b with b having a known covariance matrix V. I've previously used Matlab's lscov function http://www.mathworks.com/access/helpdesk/help/techdoc/ref/lscov.html and as a learning exercise, I wrote a low level version of this using the cholesky, qr, solve, svd functions from scipy.linalg. (I had a peek at lscov.m for help). My question is: Can I do this more directly with an existing scipy function? cheers Eleanor From nwagner at iam.uni-stuttgart.de Wed May 9 02:53:35 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 May 2007 08:53:35 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <46407D5D.3070803@iam.uni-stuttgart.de> References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> Message-ID: <46416FEF.1090400@iam.uni-stuttgart.de> Nils Wagner wrote: > Stefan van der Walt wrote: > >> Hi Issa >> >> On Fri, May 04, 2007 at 01:47:45PM +0200, Issa Karambal wrote: >> >> >>> if there anyone who knows how to simulate 'spline wavelet decomposition >>> and reconstruction' for a given signal 'f'. >>> >>> >> Take a look at >> >> http://wavelets.scipy.org >> >> which includes different wavelet families, including Haar, Daubechies, >> Symlets, Coiflets and more. >> >> 'demo/dwt_signal_decomposition.py' shows how a signal is decomposed. 
>> >> Regards >> Stéfan >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > Hi all, > > I tried to install pywt via svn. > > python setup.py build > running build > running build_py > creating build > creating build/lib.linux-x86_64-2.4 > creating build/lib.linux-x86_64-2.4/pywt > copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt > copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt > running build_ext > building 'pywt._pywt' extension > Traceback (most recent call last): > File "setup.py", line 77, in ? 
> do_setup() > File "setup.py", line 72, in do_setup > cmdclass = cmdclass, > File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup > dist.run_commands() > File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands > self.run_command(cmd) > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run > self.run_command(cmd_name) > File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > cmd_obj.run() > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, > in run > self.build_extensions() > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, > in build_extensions > self.build_extension(ext) > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, > in build_extension > sources = self.swig_sources(sources, ext) > TypeError: swig_sources() takes exactly 2 arguments (3 given) > > > Any idea how to fix this problem ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > The problem described above persists on Suse Linux 10.0. I have installed pyrex and swig. Any idea ? Nils rpm -q pyrex pyrex-0.9.3-7 rpm -q swig swig-1.3.24-4 From filip.wasilewski at gmail.com Wed May 9 03:47:52 2007 From: filip.wasilewski at gmail.com (Filip Wasilewski) Date: Wed, 9 May 2007 09:47:52 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <46416FEF.1090400@iam.uni-stuttgart.de> References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> <46416FEF.1090400@iam.uni-stuttgart.de> Message-ID: Hi, On 5/9/07, Nils Wagner wrote: [..] > > Hi all, > > > > I tried to install pywt via svn. 
> > > > python setup.py build > > running build > > running build_py > > creating build > > creating build/lib.linux-x86_64-2.4 > > creating build/lib.linux-x86_64-2.4/pywt > > copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt > > copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt > > running build_ext > > building 'pywt._pywt' extension > > Traceback (most recent call last): > > File "setup.py", line 77, in ? > > do_setup() > > File "setup.py", line 72, in do_setup > > cmdclass = cmdclass, > > File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup > > dist.run_commands() > > File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands > > self.run_command(cmd) > > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run > > self.run_command(cmd_name) > > File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command > > self.distribution.run_command(command) > > File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > > cmd_obj.run() > > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, > > in run > > self.build_extensions() > > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, > > in build_extensions > > self.build_extension(ext) > > File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, > > in build_extension > > sources = self.swig_sources(sources, ext) > > TypeError: swig_sources() takes exactly 2 arguments (3 given) > > > > > > Any idea 
how to fix this problem ? > > > > Nils > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > The problem described above persists on Suse Linux 10.0. > I have installed pyrex and swig. > Any idea ? > > Nils > > rpm -q pyrex > pyrex-0.9.3-7 > rpm -q swig > swig-1.3.24-4 How about updating Pyrex? Alternatively you can remove it from your system and then the build chain should use the included .c sources (translated from .pyx files) instead of trying to recreate them. Filip From nwagner at iam.uni-stuttgart.de Wed May 9 04:05:35 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 May 2007 10:05:35 +0200 Subject: [SciPy-user] hyp1f1 -- Confluent hypergeometric function (1F1) In-Reply-To: <4640A997.3040404@ncl.ac.uk> References: <4640A997.3040404@ncl.ac.uk> Message-ID: <464180CF.3020804@iam.uni-stuttgart.de> Colin Gillespie wrote: > Dear All, > > I'm using the confluent hypergeometric function and I'm running into > some problems. > > >>>> from scipy.special import hyp1f1 >>>> hyp1f1(1,1,10) >>>> > 22026.465794806718 #Same as maple and Abramowitz and Stegun > >>>> hyp1f1(3,4,-6) >>>> > 0.027777777777777769 #Maple gives 0.0260564.. > > There seems to be some sort of rounding error. Can I request more digits (or is this a bug)? > > Thanks > > Colin > > > It seems to be a bug. I found a Matlab implementation which reproduces the Maple result.
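As a numerical cross-check, Kummer's transformation M(a, b, z) = e^z M(b - a, b, -z) lets the negative-argument case be rewritten as a positive-argument evaluation, which agrees with the Maple value (a sketch, assuming scipy.special.hyp1f1 handles the positive-argument side accurately):

```python
import numpy as np
from scipy.special import hyp1f1

# Kummer's transformation: M(a, b, z) = exp(z) * M(b - a, b, -z).
# Rewriting hyp1f1(3, 4, -6) as exp(-6) * hyp1f1(1, 4, 6) avoids the
# negative-argument code path that produced 0.02777...
val = np.exp(-6.0) * hyp1f1(1, 4, 6)
print(val)  # ~0.0260564, matching Maple
```

For these parameters the series can even be summed in closed form, M(3, 4, -6) = (1 - 25 e^{-6}) / 36, which pins the answer down independently.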
http://ceta.mit.edu/comp_spec_func/mchgm.m Nils From nwagner at iam.uni-stuttgart.de Wed May 9 04:10:07 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 May 2007 10:10:07 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> <46416FEF.1090400@iam.uni-stuttgart.de> Message-ID: <464181DF.3030705@iam.uni-stuttgart.de> Filip Wasilewski wrote: > Hi, > > On 5/9/07, Nils Wagner wrote: > [..] > > >>> Hi all, >>> >>> I tried to install pywt via svn. >>> >>> python setup.py build >>> running build >>> running build_py >>> creating build >>> creating build/lib.linux-x86_64-2.4 >>> creating build/lib.linux-x86_64-2.4/pywt >>> copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt >>> copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt >>> running build_ext >>> building 'pywt._pywt' extension >>> Traceback (most recent call last): >>> File "setup.py", line 77, in ? 
>>> do_setup() >>> File "setup.py", line 72, in do_setup >>> cmdclass = cmdclass, >>> File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup >>> dist.run_commands() >>> File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands >>> self.run_command(cmd) >>> File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command >>> cmd_obj.run() >>> File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run >>> self.run_command(cmd_name) >>> File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command >>> self.distribution.run_command(command) >>> File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command >>> cmd_obj.run() >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, >>> in run >>> self.build_extensions() >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, >>> in build_extensions >>> self.build_extension(ext) >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, >>> in build_extension >>> sources = self.swig_sources(sources, ext) >>> TypeError: swig_sources() takes exactly 2 arguments (3 given) >>> >>> >>> Any idea how to fix this problem ? >>> >>> Nils >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> The problem described above persists on Suse Linux 10.0. >> I have installed pyrex and swig. >> Any idea ? >> >> Nils >> >> rpm -q pyrex >> pyrex-0.9.3-7 >> rpm -q swig >> swig-1.3.24-4 >> > > > How about updating Pyrex? Alternatively you can remove it from your > system and then the build chain should use the included .c sources > (translated from .pyx files) instead of trying to recreate them. > > Filip > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Ok. I have removed pyrex from my system. 
Now it works for me. Which version of pyrex is actually required by pywt ? Python 2.4.1 (#1, Oct 13 2006, 16:51:58) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pywt >>> help (pywt) >>> pywt.__version__ '0.1.7' From s.mientki at ru.nl Wed May 9 06:22:28 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 09 May 2007 12:22:28 +0200 Subject: [SciPy-user] diff function, different from MatLab ... Message-ID: <4641A0E4.6090002@ru.nl> hello, Coming from MatLab, it's very nice to see the better performance / optimization of SciPy, especially views instead of copies and taking the smallest result for a new variable. But sometimes, I wonder ... ... why ? F = False T = True s = asarray ( [ F, F, F, T, T, T, F,F ] ) d = diff ( s ) print d >>> d = ( [ F, F, T, F, F, T, F ] ) why not: >>> d = ( [ 0, 0, 1, 0, 0, -1, 0 ] ) which can be achieved by d = diff (1 * s ) thanks, Stef Mientki From matthieu.brucher at gmail.com Wed May 9 07:04:03 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 9 May 2007 13:04:03 +0200 Subject: [SciPy-user] diff function, different from MatLab ... In-Reply-To: <4641A0E4.6090002@ru.nl> References: <4641A0E4.6090002@ru.nl> Message-ID: Hi, I suppose that if s is a boolean array, diff is a boolean array as well. Matlab works on doubles every time, not numpy, and your result is coherent in a boolean algebra. Matthieu 2007/5/9, Stef Mientki : > > hello, > > Coming from MatLab, it's very nice to see the better performance / > optimization of SciPy, > especially views instead of copies and taking the smallest result for a > new variable. > > But sometimes, I wonder ... > ... why ?
> > F = False > T = True > s = asarray ( [ F, F, F, T, T, T, F,F ] ) > d = diff ( s ) > print d > >>> d = ( [ F, F, T, F, F, T, F ] ) > > why not: > >>> d = ( [ 0, 0, 1, 0, 0, -1, 0 ] ) > > which can be achieved by > d = diff (1 * s ) > > thanks, > Stef Mientki > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Wed May 9 08:05:20 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 09 May 2007 14:05:20 +0200 Subject: [SciPy-user] diff function, different from MatLab ... In-Reply-To: References: <4641A0E4.6090002@ru.nl> Message-ID: <4641B900.4020303@ru.nl> hi Matthieu, Matthieu Brucher wrote: > Hi, > > I suppose that if s is a boolean array, diff is a boolean array as > well. Matlab works on doubles every time, not numpy, and your result > is coherent in a boolean algebra. I guess you're right, and also integer is not "upsized" to float: 5/3 = 1 (but I believe that's a still a discussion for the Python 3 ;-) on the other hand, some functions do upsize the result sqrt(4) = 2.0 sqrt(-4) = 2j I think I've just to familiarize with this ;-) thanks, Stef Mientki From filip.wasilewski at gmail.com Wed May 9 10:29:43 2007 From: filip.wasilewski at gmail.com (Filip Wasilewski) Date: Wed, 9 May 2007 16:29:43 +0200 Subject: [SciPy-user] spline wavelet decomposition In-Reply-To: <464181DF.3030705@iam.uni-stuttgart.de> References: <463B1D61.2050504@aims.ac.za> <20070504131452.GF23778@mentat.za.net> <46407D5D.3070803@iam.uni-stuttgart.de> <46416FEF.1090400@iam.uni-stuttgart.de> <464181DF.3030705@iam.uni-stuttgart.de> Message-ID: Hi Nils, On 5/9/07, Nils Wagner wrote: > Filip Wasilewski wrote: > > Hi, > > > > On 5/9/07, Nils Wagner wrote: > > [..] > > > > > >>> Hi all, > >>> > >>> I tried to install pywt via svn. 
> >>> > >>> python setup.py build > >>> running build > >>> running build_py > >>> creating build > >>> creating build/lib.linux-x86_64-2.4 > >>> creating build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/numerix.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/wavelet_packets.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/wnames.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/tresholding.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/release_details.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/__init__.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/multilevel.py -> build/lib.linux-x86_64-2.4/pywt > >>> copying pywt/multidim.py -> build/lib.linux-x86_64-2.4/pywt > >>> running build_ext > >>> building 'pywt._pywt' extension > >>> Traceback (most recent call last): > >>> File "setup.py", line 77, in ? > >>> do_setup() > >>> File "setup.py", line 72, in do_setup > >>> cmdclass = cmdclass, > >>> File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup > >>> dist.run_commands() > >>> File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands > >>> self.run_command(cmd) > >>> File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > >>> cmd_obj.run() > >>> File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run > >>> self.run_command(cmd_name) > >>> File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command > >>> self.distribution.run_command(command) > >>> File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command > >>> cmd_obj.run() > >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 279, > >>> in run > >>> self.build_extensions() > >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, > >>> in build_extensions > >>> self.build_extension(ext) > >>> File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 442, > >>> in build_extension > >>> sources = self.swig_sources(sources, ext) > 
>>> TypeError: swig_sources() takes exactly 2 arguments (3 given) > >>> > >>> > >>> Any idea how to fix this problem ? > >>> > >>> Nils > >>> > >>> _______________________________________________ > >>> SciPy-user mailing list > >>> SciPy-user at scipy.org > >>> http://projects.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >> The problem described above persists on Suse Linux 10.0. > >> I have installed pyrex and swig. > >> Any idea ? > >> > >> Nils > >> > >> rpm -q pyrex > >> pyrex-0.9.3-7 > >> rpm -q swig > >> swig-1.3.24-4 > >> > > > > > > How about updating Pyrex? Alternatively you can remove it from your > > system and then the build chain should use the included .c sources > > (translated from .pyx files) instead of trying to recreate them. > > > > Filip > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Ok. I have removed pyrex from my system. Now it works for me. > Which version of pyrex is actually required by pywt ? > > Python 2.4.1 (#1, Oct 13 2006, 16:51:58) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import pywt > >>> help (pywt) > > >>> pywt.__version__ > '0.1.7' The version of Pyrex that is required for building pywt from scratch is the one from lxml project - http://codespeak.net/svn/lxml/pyrex/, not the regular one. I probably should emphasis this in the doc or think of including Pyrex as a subpackage/configuring svn externals. Filip From fredmfp at gmail.com Wed May 9 10:40:36 2007 From: fredmfp at gmail.com (fred) Date: Wed, 09 May 2007 16:40:36 +0200 Subject: [SciPy-user] shift FFT2D... 
Message-ID: <4641DD64.1090106@gmail.com> Hi all, I use an FFT 2D on a matrix to compute a convolution like this: a = fft2(input_data) b = fft2(output_data) c = real(ifft2(a*b)) The problem is that c should look like this: c1 | c2 ----------- c3 | c4 but it looks like this: c4 | c3 ----------- c2 | c1 How can I get efficiently the right result ? (something like shift ?) Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From faltet at carabos.com Wed May 9 11:06:15 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 09 May 2007 17:06:15 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <4641DD64.1090106@gmail.com> References: <4641DD64.1090106@gmail.com> Message-ID: <1178723175.3546.11.camel@localhost.localdomain> El dc 09 de 05 del 2007 a les 16:40 +0200, en/na fred va escriure: > Hi all, > > I use an FFT 2D on a matrix to compute a convolution like this: > > a = fft2(input_data) > b = fft2(output_data) > c = real(ifft2(a*b)) > > The problem is that c should look like this: > > c1 | c2 > ----------- > c3 | c4 > > but it looks like this: > > c4 | c3 > ----------- > c2 | c1 > > How can I get efficiently the right result ? (something like shift ?) Something like: N4 = N/4 c1 = c[:N4]; c2 = c[N4:2*N4]; c3 = c[2*N4:3*N4]; c4 = c[3*N4:] tmp = c1.copy(); c1[:] = c4; c4[:] = tmp tmp = c2.copy(); c2[:] = c3; c3[:] = tmp should do the trick. As c1, c2, c3 and 4 are views and not copies, the above snippet should be fairly optimal in terms of both speed and memory usage. HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From faltet at carabos.com Wed May 9 11:10:00 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 09 May 2007 17:10:00 +0200 Subject: [SciPy-user] shift FFT2D... 
In-Reply-To: <1178723175.3546.11.camel@localhost.localdomain> References: <4641DD64.1090106@gmail.com> <1178723175.3546.11.camel@localhost.localdomain> Message-ID: <1178723400.3546.13.camel@localhost.localdomain> El dc 09 de 05 del 2007 a les 17:06 +0200, en/na Francesc Altet va escriure: > El dc 09 de 05 del 2007 a les 16:40 +0200, en/na fred va escriure: > > Hi all, > > > > I use an FFT 2D on a matrix to compute a convolution like this: > > > > a = fft2(input_data) > > b = fft2(output_data) > > c = real(ifft2(a*b)) > > > > The problem is that c should look like this: > > > > c1 | c2 > > ----------- > > c3 | c4 > > > > but it looks like this: > > > > c4 | c3 > > ----------- > > c2 | c1 > > > > How can I get efficiently the right result ? (something like shift ?) > > Something like: > > N4 = N/4 > c1 = c[:N4]; c2 = c[N4:2*N4]; c3 = c[2*N4:3*N4]; c4 = c[3*N4:] > tmp = c1.copy(); c1[:] = c4; c4[:] = tmp > tmp = c2.copy(); c2[:] = c3; c3[:] = tmp Ops, in the above N == len(c), of course. -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From fredmfp at gmail.com Wed May 9 11:19:00 2007 From: fredmfp at gmail.com (fred) Date: Wed, 09 May 2007 17:19:00 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <1178723175.3546.11.camel@localhost.localdomain> References: <4641DD64.1090106@gmail.com> <1178723175.3546.11.camel@localhost.localdomain> Message-ID: <4641E664.7020801@gmail.com> Francesc Altet a ?crit : > El dc 09 de 05 del 2007 a les 16:40 +0200, en/na fred va escriure: > >> Hi all, >> >> I use an FFT 2D on a matrix to compute a convolution like this: >> >> a = fft2(input_data) >> b = fft2(output_data) >> c = real(ifft2(a*b)) >> >> The problem is that c should look like this: >> >> c1 | c2 >> ----------- >> c3 | c4 >> >> but it looks like this: >> >> c4 | c3 >> ----------- >> c2 | c1 >> >> How can I get efficiently the right result ? 
(something like shift ?) >> > > Something like: > I was thinking something like this, but I was wondering if there was not a builtin function for example... Thanks anyway. Another question: how can I put a small matrix in a bigger (say (10x10) centered in (200x200)) ? Cheers, -- http://scipy.org/FredericPetit From faltet at carabos.com Wed May 9 11:36:43 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 09 May 2007 17:36:43 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <4641E664.7020801@gmail.com> References: <4641DD64.1090106@gmail.com> <1178723175.3546.11.camel@localhost.localdomain> <4641E664.7020801@gmail.com> Message-ID: <1178725004.3546.18.camel@localhost.localdomain> El dc 09 de 05 del 2007 a les 17:19 +0200, en/na fred va escriure: > I was thinking something like this, but I was wondering if there was > not > a builtin function for example... > > Thanks anyway. You are welcome. > Another question: how can I put a small matrix in a bigger (say > (10x10) > centered in (200x200)) ? You mean something like: In [84]:a=numpy.zeros((100,100)) In [85]:b=numpy.ones((2,2)) In [86]:a[1:1+b.shape[0], 1:1+b.shape[1]] = b In [87]:a Out[87]: array([[ 0., 0., 0., ..., 0., 0., 0.], [ 0., 1., 1., ..., 0., 0., 0.], [ 0., 1., 1., ..., 0., 0., 0.], ..., [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.]]) where b has been 'put' into a in coordinates (1,1) ? Or maybe you want something more? Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From rmay at ou.edu Wed May 9 11:45:37 2007 From: rmay at ou.edu (Ryan May) Date: Wed, 09 May 2007 10:45:37 -0500 Subject: [SciPy-user] shift FFT2D... 
In-Reply-To: <4641DD64.1090106@gmail.com> References: <4641DD64.1090106@gmail.com> Message-ID: <4641ECA1.9040403@ou.edu> fred wrote: > Hi all, > > I use an FFT 2D on a matrix to compute a convolution like this: > > a = fft2(input_data) > b = fft2(output_data) > c = real(ifft2(a*b)) > > The problem is that c should look like this: > > c1 | c2 > ----------- > c3 | c4 > > but it looks like this: > > c4 | c3 > ----------- > c2 | c1 > > How can I get efficiently the right result ? (something like shift ?) > Have you tried numpy.fft.fftshift? I'm not sure if it works on matrices, but it shifts the 0 frequency component to the middle, which is what I think you're looking for. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From faltet at carabos.com Wed May 9 11:48:00 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 09 May 2007 17:48:00 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <1178725004.3546.18.camel@localhost.localdomain> References: <4641DD64.1090106@gmail.com> <1178723175.3546.11.camel@localhost.localdomain> <4641E664.7020801@gmail.com> <1178725004.3546.18.camel@localhost.localdomain> Message-ID: <1178725680.3546.20.camel@localhost.localdomain> El dc 09 de 05 del 2007 a les 17:36 +0200, en/na Francesc Altet va escriure: > El dc 09 de 05 del 2007 a les 17:19 +0200, en/na fred va escriure: > > I was thinking something like this, but I was wondering if there was > > not > > a builtin function for example... > > > > Thanks anyway. > > You are welcome. > > > Another question: how can I put a small matrix in a bigger (say > > (10x10) > > centered in (200x200)) ? 
> > You mean something like: > > In [84]:a=numpy.zeros((100,100)) > In [85]:b=numpy.ones((2,2)) > In [86]:a[1:1+b.shape[0], 1:1+b.shape[1]] = b > In [87]:a > Out[87]: > array([[ 0., 0., 0., ..., 0., 0., 0.], > [ 0., 1., 1., ..., 0., 0., 0.], > [ 0., 1., 1., ..., 0., 0., 0.], > ..., > [ 0., 0., 0., ..., 0., 0., 0.], > [ 0., 0., 0., ..., 0., 0., 0.], > [ 0., 0., 0., ..., 0., 0., 0.]]) > > where b has been 'put' into a in coordinates (1,1) > > ? Or maybe you want something more? Ops. I was wrong. I think what you want is something like: In [96]:r=b.shape[0]; rp=b.shape[1] In [97]:r2=b.shape[0]/2; rp2=b.shape[1]/2 In [98]:a[3-r2:3+r-r2, 3-rp2:3+rp-rp2] = b where b has been 'put' into a in coordinates (3,3) HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From wbaxter at gmail.com Wed May 9 15:01:55 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 10 May 2007 04:01:55 +0900 Subject: [SciPy-user] diff function, different from MatLab ... In-Reply-To: <4641B900.4020303@ru.nl> References: <4641A0E4.6090002@ru.nl> <4641B900.4020303@ru.nl> Message-ID: On 5/9/07, Stef Mientki wrote: > hi Matthieu, > > Matthieu Brucher wrote: > > Hi, > > > > I suppose that if s is a boolean array, diff is a boolean array as > > well. Matlab works on doubles every time, not numpy, and your result > > is coherent in a boolean algebra. > I guess you're right, > > and also integer is not "upsized" to float: 5/3 = 1 > (but I believe that's a still a discussion for the Python 3 ;-) > > on the other hand, some functions do upsize the result > sqrt(4) = 2.0 > sqrt(-4) = 2j > > I think I've just to familiarize with this ;-) Well the rule in NumPy as I understand it is basically never to do automatic upcasts. The above is not the behavior of NumPy for sqrt(-4). That seems to be SciPy only. 
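Both behaviors from this thread can be checked directly. A sketch against a present-day NumPy, where np.emath.sqrt is the scimath variant (assumed here to be what scipy's top-level sqrt wrapped at the time):

```python
import numpy as np

# Stef's diff example: on a boolean array the result stays boolean
# (adjacent-element XOR); casting to int first gives the signed result.
s = np.asarray([False, False, False, True, True, True, False, False])
print(np.diff(s))      # [False False  True False False  True False]
print(np.diff(1 * s))  # [ 0  0  1  0  0 -1  0]

# The sqrt "upsizing": plain numpy sqrt stays real (nan for -4), while
# the scimath variant upcasts to complex and returns 2j.
with np.errstate(invalid="ignore"):
    print(np.sqrt(-4.0))     # nan
print(np.emath.sqrt(-4))     # 2j
```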
I'm not sure what SciPy's rules are, but I thought they were the same as NumPy till seeing your sqrt example. In [43]: npy.sqrt(-1) Warning: invalid value encountered in sqrt --bb From nolambar at gmail.com Wed May 9 15:23:50 2007 From: nolambar at gmail.com (=?UTF-8?Q?Nolambar_von_L=C3=B3meanor?=) Date: Wed, 9 May 2007 15:23:50 -0400 Subject: [SciPy-user] About Genetic Algorithms Message-ID: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> Hi I can't find the GA module in the latests releases in Scipy, does anybody know why? Which Scipy release still has the GA module? Thanks -- Nolambar von L?meanor http://nolambar.blogspot.com <- mi blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed May 9 15:28:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 09 May 2007 14:28:49 -0500 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> Message-ID: <464220F1.7010906@gmail.com> Nolambar von L?meanor wrote: > Hi > > I can't find the GA module in the latests releases in Scipy, does > anybody know why? > > Which Scipy release still has the GA module? It's in the sandbox until someone comes along and ports it to numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From novak at ucolick.org Wed May 9 16:15:06 2007 From: novak at ucolick.org (Greg Novak) Date: Wed, 9 May 2007 13:15:06 -0700 Subject: [SciPy-user] Intel compiler error that should be a warning? 
In-Reply-To: <20070508204415.GA19076@arbutus.physics.mcmaster.ca> References: <20070508204415.GA19076@arbutus.physics.mcmaster.ca> Message-ID: No, I'm using 0.5.2, but this is one installation among many on many different machines, so I'd like to keep things harmonized across the different machines. Greg On 5/8/07, David M. Cooke wrote: > On Tue, May 08, 2007 at 10:45:10AM -0700, Greg Novak wrote: > > I'm trying to compile Scipy using the Intel C Compiler and I'm getting > > an error that I think should be a warning. The machine is a cluster > > of Intel 64 bit processors and the error happens when compiling > > Lib/special/cephes/const.c. Here are the error messages: > > > > > novak at pleiades cephes]$ icc const.c > > const.c(92): error: floating-point operation result is out of range > > double INFINITY = 1.0/0.0; /* 99e999; */ > > ^ > > const.c(97): error: floating-point operation result is out of range > > double NAN = 1.0/0.0 - 1.0/0.0; > > ^ > > const.c(97): error: floating-point operation result is out of range > > double NAN = 1.0/0.0 - 1.0/0.0; > > ^ > > compilation aborted for const.c (code 2) > > Are you using the latest svn version of scipy? This bit of code should > only be compiled if UNK was defined in mconf.h, and that's never defined > anymore. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cookedm at physics.mcmaster.ca Wed May 9 16:42:32 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 9 May 2007 16:42:32 -0400 Subject: [SciPy-user] Intel compiler error that should be a warning?
In-Reply-To: References: <20070508204415.GA19076@arbutus.physics.mcmaster.ca> Message-ID: <20070509204232.GA16854@arbutus.physics.mcmaster.ca> On Wed, May 09, 2007 at 01:15:06PM -0700, Greg Novak wrote: > No, I'm using 0.5.2, but this is one installation among many one many > different machines, so I'd like to keep things harmonized across the > different machines. You'll want to look at at changeset 2838 http://projects.scipy.org/scipy/scipy/changeset/2838 Applying that diff should fix it -- it removes ad-hoc checks for endianness with Python's from pyconfig.h. > On 5/8/07, David M. Cooke wrote: > > On Tue, May 08, 2007 at 10:45:10AM -0700, Greg Novak wrote: > > > I'm trying to compile Scipy using the Intel C Compiler and I'm getting > > > an error that I think should be a warning. The machine is a cluster > > > of Intel 64 bit processors and the error happens when compiling > > > Lib/special/cephes/const.c. Here are the error messages: > > > > > > > > novak at pleiades cephes]$ icc const.c > > > const.c(92): error: floating-point operation result is out of range > > > double INFINITY = 1.0/0.0; /* 99e999; */ > > > ^ > > > const.c(97): error: floating-point operation result is out of range > > > double NAN = 1.0/0.0 - 1.0/0.0; > > > ^ > > > const.c(97): error: floating-point operation result is out of range > > > double NAN = 1.0/0.0 - 1.0/0.0; > > > ^ > > > compilation aborted for const.c (code 2) > > > > Are you using the latest svn version of scipy? This bit of code should > > only be compiled if UNK was defined in mconf.h, and that's never defined > > anymore. > > > > -- > > |>|\/|< > > /--------------------------------------------------------------------------\ > > |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ > > |cookedm at physics.mcmaster.ca > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fredmfp at gmail.com Wed May 9 17:11:01 2007 From: fredmfp at gmail.com (fred) Date: Wed, 09 May 2007 23:11:01 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <4641ECA1.9040403@ou.edu> References: <4641DD64.1090106@gmail.com> <4641ECA1.9040403@ou.edu> Message-ID: <464238E5.3040103@gmail.com> Ryan May a ?crit : > fred wrote: > >> Hi all, >> >> I use an FFT 2D on a matrix to compute a convolution like this: >> >> a = fft2(input_data) >> b = fft2(output_data) >> c = real(ifft2(a*b)) >> >> The problem is that c should look like this: >> >> c1 | c2 >> ----------- >> c3 | c4 >> >> but it looks like this: >> >> c4 | c3 >> ----------- >> c2 | c1 >> >> How can I get efficiently the right result ? (something like shift ?) >> >> > Have you tried numpy.fft.fftshift? > > 1) Because I have no fft.fftshift in numpy ;-) 2) Because I tried fftpack.fftshift from scipy and it gives not the right result: input_data_FFT = fftshift(input_data) weights_FFT = fftshift(weights) output_data = real(ifft2(input_data_FFT*weights_FFT)) 3) Maybe I'm doing something wrong ? > I'm not sure if it works on matrices, but it shifts the 0 frequency > component to the middle, which is what I think you're looking for. > I agree with you: I'm looking for shifting to 0 frequency and this is why I asked (AFAIK, octave has such a function, but I do not use octave anymore since a few years). Thanks anyway. 
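The likely fix for point 2) above is to apply the shift to the spatial-domain result rather than to the inputs: with even dimensions, fftshift swaps the quadrants diagonally, which is exactly the c4|c3 / c2|c1 to c1|c2 / c3|c4 rearrangement. A sketch with numpy.fft (scipy's fftpack.fftshift behaves the same way):

```python
import numpy as np
from numpy.fft import fftshift

# A result with the zero-lag term in the corners, labelled by quadrant:
#   c4 | c3
#   c2 | c1
c = np.array([[4, 4, 3, 3],
              [4, 4, 3, 3],
              [2, 2, 1, 1],
              [2, 2, 1, 1]], dtype=float)

# fftshift swaps the halves along each axis, so applying it to the
# *convolution result* -- e.g. fftshift(real(ifft2(a * b))) -- yields
#   c1 | c2
#   c3 | c4
print(fftshift(c))
```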
Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Wed May 9 17:23:36 2007 From: fredmfp at gmail.com (fred) Date: Wed, 09 May 2007 23:23:36 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <1178725680.3546.20.camel@localhost.localdomain> References: <4641DD64.1090106@gmail.com> <1178723175.3546.11.camel@localhost.localdomain> <4641E664.7020801@gmail.com> <1178725004.3546.18.camel@localhost.localdomain> <1178725680.3546.20.camel@localhost.localdomain> Message-ID: <46423BD8.4080205@gmail.com> Francesc Altet a ?crit : > Ops. I was wrong. Yes ;-) > I think what you want is something like: > No ;-))) I want this: from scipy import * m, n = 10, 10 a = zeros((m,n)) p, q = 4, 4 b = ones((p,q)) a[m/2-p/2:m/2+p/2,n/2-q/2:n/2+q/2] = b print a But once again, I was wondering if a builtin function does not already exists for that. Thanks anyway. Cheers, -- http://scipy.org/FredericPetit From stefan at sun.ac.za Wed May 9 17:25:04 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 9 May 2007 23:25:04 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <4641DD64.1090106@gmail.com> References: <4641DD64.1090106@gmail.com> Message-ID: <20070509212504.GC16021@mentat.za.net> On Wed, May 09, 2007 at 04:40:36PM +0200, fred wrote: > I use an FFT 2D on a matrix to compute a convolution like this: > > a = fft2(input_data) > b = fft2(output_data) > c = real(ifft2(a*b)) Also take a look at scipy.signal.fftconvolve. Cheers St?fan From fredmfp at gmail.com Wed May 9 17:40:28 2007 From: fredmfp at gmail.com (fred) Date: Wed, 09 May 2007 23:40:28 +0200 Subject: [SciPy-user] shift FFT2D... 
In-Reply-To: <20070509212504.GC16021@mentat.za.net> References: <4641DD64.1090106@gmail.com> <20070509212504.GC16021@mentat.za.net> Message-ID: <46423FCC.7010405@gmail.com> Stefan van der Walt a écrit : > On Wed, May 09, 2007 at 04:40:36PM +0200, fred wrote: > >> I use an FFT 2D on a matrix to compute a convolution like this: >> >> a = fft2(input_data) >> b = fft2(output_data) >> c = real(ifft2(a*b)) >> > > Also take a look at scipy.signal.fftconvolve. > Hmm, seems that does not work for my purpose. By the way, I now use ndimage.convolve() which fits my needs. Thanks anyway. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Wed May 9 18:29:47 2007 From: fredmfp at gmail.com (fred) Date: Thu, 10 May 2007 00:29:47 +0200 Subject: [SciPy-user] shift FFT2D... In-Reply-To: <46423FCC.7010405@gmail.com> References: <4641DD64.1090106@gmail.com> <20070509212504.GC16021@mentat.za.net> <46423FCC.7010405@gmail.com> Message-ID: <46424B5B.2070303@gmail.com> fred a écrit : > Stefan van der Walt a écrit : >> On Wed, May 09, 2007 at 04:40:36PM +0200, fred wrote: >> >>> I use an FFT 2D on a matrix to compute a convolution like this: >>> >>> a = fft2(input_data) >>> b = fft2(output_data) >>> c = real(ifft2(a*b)) >>> >> >> Also take a look at scipy.signal.fftconvolve. >> > Hmm, seems that does not work for my purpose. Hmm, seems I made a mistake. Works fine now. > By the way, I now use ndimage.convolve() which fits my needs. Still right too ;-) Thanks.
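A note that may save the next reader fred's detour: the raw fft2 product computes a circular (wrap-around) convolution, while scipy.signal.fftconvolve zero-pads and returns a linear one, so the two can legitimately disagree near the edges. A numpy-only check of the circular identity against a direct quadruple loop, on made-up small arrays:

```python
import numpy as np

x = np.arange(16.0).reshape(4, 4)      # made-up "input_data"
k = np.zeros((4, 4))                   # made-up kernel ("weights")
k[:2, :2] = [[1.0, 2.0], [3.0, 4.0]]

# FFT route from the thread: c = real(ifft2(fft2(x) * fft2(k)))
c_fft = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))

# Direct circular convolution for comparison: indices wrap modulo the shape.
m, n = x.shape
c_direct = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        for u in range(m):
            for v in range(n):
                c_direct[i, j] += x[u, v] * k[(i - u) % m, (j - v) % n]

print(np.allclose(c_fft, c_direct))
# True
```

To make the FFT route match a linear convolution, zero-pad both arrays to at least the sum of their extents before transforming.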
Cheers, -- http://scipy.org/FredericPetit From franz.maikaefer at gmail.com Wed May 9 23:45:19 2007 From: franz.maikaefer at gmail.com (=?ISO-8859-1?Q?Franz_Maik=E4fer?=) Date: Thu, 10 May 2007 00:45:19 -0300 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: References: Message-ID: <199e17dd0705092045l4a973dcamb2e61928f93bb50f@mail.gmail.com> On 5/6/07, Nicolas Pettiaux wrote: > I want to give arguments to the faculty (applied sciences at the Free > University of Brussels) where I am involved in the teaching of > numerical analysis, today with matlab and octave, to consider > switching to python with scipy and numpy. > > I am looking for examples and similar actions by others, and I am also > looking for answers to questions and remarks I may get. > > One that I have already received is that in scipy / matplotlib (and > python) the indices of matrices and arrays are different than in matlab > / octave /scilab : in python with numpy for example, the first element > is 0 while in matlab it is 1, as shown in > > http://www.scipy.org/NumPy_for_Matlab_Users#head-5a9301ba4c6f5a12d5eb06e478b9fb8bbdc25084 > > as for example > > matlab : numpy > a(2,:) : a[1] or a[1,:] : entire second row of a > > For me, the octave / matlab notation, with the first element having the > number 1, is rather self-explanatory. I count with one as the first > number. > > What would you answer ? Is it possible to change the behavior of numpy > and if not, how can I argue for the python way of counting ? > > Thanks, > > Nicolas > -- > Nicolas Pettiaux - email: nicolas.pettiaux at ael.be > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Nicolas, a good explanation of the reasons leading to the convention of zero-based indexing is the one from prof. dr. Edsger W. Dijkstra.
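As a concrete check of the indexing table quoted above (a(2,:) in Matlab/Octave versus a[1] in numpy):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # 3x4 array; rows are numbered 0, 1, 2
second_row = a[1]                 # Matlab/Octave would write a(2,:)
print(second_row)
# [4 5 6 7]
print(np.array_equal(a[1], a[1, :]))
# True: the two numpy spellings are equivalent
```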
See the corresponding EWD at: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html []s, Franz. From nicolas.pettiaux at ael.be Thu May 10 01:07:50 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Thu, 10 May 2007 07:07:50 +0200 Subject: [SciPy-user] Scipy / matplotlib to replace matlab and indexes In-Reply-To: <199e17dd0705092045l4a973dcamb2e61928f93bb50f@mail.gmail.com> References: <199e17dd0705092045l4a973dcamb2e61928f93bb50f@mail.gmail.com> Message-ID: 2007/5/10, Franz Maikäfer : thank you Franz > a good explanation of reasons leading to the convention of using zero > based indexing is that from prof.dr. Edsger W. Dijkstra. See the > corresponding EWD at: > > http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html very interesting reading, Nicolas Pettiaux -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be From emanuelez at gmail.com Thu May 10 10:00:11 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 10 May 2007 16:00:11 +0200 Subject: [SciPy-user] installing scipy without root permissions Message-ID: Hello, i am trying to install Scipy in my home directory of the Solaris based server of my university. Python is already installed (version 2.3.5) but distutils is not. I have managed to install Numpy locally but Scipy requires distutils and I'm kinda stuck. As far as you know, is it possible to install distutils locally? Emanuele -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlratzel at enthought.com Thu May 10 11:07:26 2007 From: rlratzel at enthought.com (Rick Ratzel) Date: Thu, 10 May 2007 10:07:26 -0500 (CDT) Subject: [SciPy-user] installing scipy without root permissions In-Reply-To: (emanuelez@gmail.com) References: Message-ID: <20070510150726.1C6851DF502@mail.enthought.com> > Date: Thu, 10 May 2007 16:00:11 +0200 > From: "Emanuele Zattin" > > Hello, > i am trying to install Scipy in my home directory of the Solaris based > server of my university.
> Python is already installed (version 2.3.5) but distutils is not. > I have managed to install Numpy locally but Scipy requires distutils i i'm > kinda stuck. > As far as you know, is it possible to install distutils locally? > > Emanuele > Distutils is normally part of the Python standard library for that version of Python, but I believe some linux distros and apparently others like Solaris put large chunks of the std. library in a separate package (annoying). Here's the link I think you need: http://www.python.org/community/sigs/current/distutils-sig/download -- Rick Ratzel - Enthought, Inc. 515 Congress Avenue, Suite 2100 - Austin, Texas 78701 512-536-1057 x229 - Fax: 512-536-1059 http://www.enthought.com From perry at stsci.edu Thu May 10 15:27:58 2007 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 10 May 2007 15:27:58 -0400 Subject: [SciPy-user] numpy version of Interactive Data Analysis tutorial available Message-ID: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu> I have updated the "Using Python for Interactive Data Analysis" tutorial to use numpy instead of numarray (finally!). There are further improvements I would like to make in its organization and formatting (in the process including suggestions others have made to that end), but I'd rather get this version out, which I believe addresses all the content changes needed to make it useful for numpy, without delaying it any further. The tutorial, as well as other supporting material and information, can be obtained from: http://www.scipy.org/wikis/topical_software/Tutorial I'm sure errors remain; please let me know of any you find. 
Perry From s.mientki at ru.nl Sat May 12 05:29:02 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 12 May 2007 11:29:02 +0200 Subject: [SciPy-user] numpy version of Interactive Data Analysis tutorial available In-Reply-To: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu> References: <1EB96C7F-F072-4F57-85A0-FE157CAF323B@stsci.edu> Message-ID: <464588DE.7040105@ru.nl> hello Perry and Robert, Your tutorial is really an impressive work of art. I still consider myself a newbie, because I've more questions than answers ;-) I read the first chapter and it's written very clearly and also takes the user in small steps to larger applications. Now I think I miss one important thing. At the end of chapter 1 there are a few exercises; exercise 5: "create some function, display it, compute the FFT" I doubt if any new reader is able to perform the last step: "compute the FFT". - how does the reader know what libraries to import - how does the reader know what the exact function name is - how does the reader know what the function parameters and the return values are, and about certain constraints In other words, I didn't find anything on "help", how to search for certain functions. I've to admit that this is one of the biggest problems for newbies: Python is so large that it's often very difficult to find the right answer. It's also strongly dependent on the editor / IDE you use what help functions are available, and I think IPython has (for a plain text interface) some powerful commands. So in short, I would suggest (for your second edition ;-) to add a paragraph on getting help to the first chapter. thanks for the tutorial, cheers, Stef Mientki Perry Greenfield wrote: > I have updated the "Using Python for Interactive Data Analysis" > tutorial to use numpy instead of numarray (finally!).
There are > further improvements I would like to make in its organization and > formatting (in the process including suggestions others have made to > that end), but I'd rather get this version out, which I believe > addresses all the content changes needed to make it useful for numpy, > without delaying it any further. > > The tutorial, as well as other supporting material and information, > can be obtained from: > > http://www.scipy.org/wikis/topical_software/Tutorial > > I'm sure errors remain; please let me know of any you find. > > Perry > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From s.mientki at ru.nl Sat May 12 06:14:38 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 12 May 2007 12:14:38 +0200 Subject: [SciPy-user] linear regression Message-ID: <4645938E.5070604@ru.nl> hello I've a set of x-y pairs in the range of 0..600 that are correlated (calibration of gain and offset of a sensor). So I perform a linear regression and get (I think it's called the regression coefficient) r = 0.97, so from what I can remember that's not bad. But I see that the relation is not completely linear, so I want to split the x-y pairs in 2 sets, one up to the value of 150 and one with the values above, and would expect that this split-up will give better correlations. So I calculate these 2 regression lines, and I would expect the "r" of each of these 2 separate regression lines to be better than 0.97, but I get r = 0.91 (for the low values) r = 0.92 (for the high values) Am I doing something wrong ? Do I miss some valuable "insight" (of course ;-) ? If the above values are true, does this imply that I can better stick to just 1 regression ?
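For reference, both quantities in the question above can be computed with numpy alone (scipy.stats also offers linregress); a sketch on made-up calibration data, since the real pairs are not posted:

```python
import numpy as np

# Made-up calibration pairs in the 0..600 range, mildly nonlinear on purpose.
x = np.linspace(0.0, 600.0, 61)
y = 2.0 * x + 0.001 * x**2 + 5.0

# Least-squares line (gain and offset) and correlation coefficient r.
gain, offset = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]

# Splitting at x = 150 and refitting each half, as described above.
lo, hi = x <= 150, x > 150
r_lo = np.corrcoef(x[lo], y[lo])[0, 1]
r_hi = np.corrcoef(x[hi], y[hi])[0, 1]
print(round(r, 3), round(r_lo, 3), round(r_hi, 3))
```

One candidate explanation for the drop in r after splitting: r depends on the spread of x as well as on the scatter around the line, so restricting to a sub-range can lower r even when the sub-range fit is actually tighter; comparing the residual standard errors of the fits avoids that effect.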
any help (or links) would be much appreciated, thanks, Stef Mientki From zunzun at zunzun.com Sat May 12 07:34:46 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sat, 12 May 2007 07:34:46 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4645938E.5070604@ru.nl> References: <4645938E.5070604@ru.nl> Message-ID: <20070512113446.GA30707@zunzun.com> On Sat, May 12, 2007 at 12:14:38PM +0200, Stef Mientki wrote: > > But I see that the relation is not completely linear, Try the 2D function finder at http://zunzun.com, or the site fitting code at http://sf.net/projects/pythonequations. Both scipy and weave are needed to use the source code. > so I want to split the x-y pairs in 2 sets, one up to the value of 150 > and one with the values above, > and would expect that this split up will give better correlations. Having calibrated a lot of sensors myself, this would seem reasonable. My work was mostly X-ray and gamma ray stuff - fluorescence, backscatter and absorption of monoenergetic and full spectrum. I also worked with beta ray and optical sensors. > Am I doing something wrong ? Send me an example dataset (after you try the site) and I'll be glad to take a look at it. James From gael.varoquaux at normalesup.org Sat May 12 07:40:11 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 12 May 2007 13:40:11 +0200 Subject: [SciPy-user] Getting started wiki page Message-ID: <20070512114011.GE7911@clipper.ens.fr> Hi all, I would very much like to link the Getting Started wiki page ( http://scipy.org/Getting_Started ) to the front page. But I am not sure it is of good enough quality so far. Could people please have a look and make comments, or edit the page.
Cheers, Ga?l From nwagner at iam.uni-stuttgart.de Sat May 12 07:59:21 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 12 May 2007 13:59:21 +0200 Subject: [SciPy-user] linear regression In-Reply-To: <20070512113446.GA30707@zunzun.com> References: <4645938E.5070604@ru.nl> <20070512113446.GA30707@zunzun.com> Message-ID: On Sat, 12 May 2007 07:34:46 -0400 zunzun at zunzun.com wrote: > On Sat, May 12, 2007 at 12:14:38PM +0200, Stef Mientki >wrote: >> >> But I see that the relation is not completely linear, > > Try the 2D function finder at http/zunzun.com, or the > site fitting code at >http://sf.net/projects/pythonequations. > Bot scipy and weave are needed to use the source code. > > >> so I want to split the x-y pairs in 2 sets, one upto the >>value of 150 >> and and with the values above, >> and would expect that this split up will give better >>correlations. > > Having calibrated a lot of sensors myself, this would >seem reasonable. > My work was mostly X-ray and gamma ray stuff - >flouresence, > backscatter and absorbtion of monoenergetic and full >spectrum. > I also worked with beta ray and optical sensors. > > >> I'm doing something wrong ? > > Send me an example dataset (after you try the site) and >I'll > be glad to take a look at it. > > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user I just installed pythonequations 1.3 on a 64-bit machine (OpenSuSE 10.2). I started with the examples. 
python NonLinearFit2D.py /home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.cpp:1: error: CPU you selected does not support x86-64 instruction set /home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.cpp:1: error: -malign-double makes no sense in the 64bit mode /home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.cpp:1: error: CPU you selected does not support x86-64 instruction set /home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.cpp:1: error: -malign-double makes no sense in the 64bit mode Traceback (most recent call last): File "NonLinearFit2D.py", line 22, in equation.FitToCacheData() # perform the fit File "/home/nwagner/src/PythonEquations/Examples/../../PythonEquations/EquationBaseClasses.py", line 615, in FitToCacheData tempCoeffs = scipy.optimize.fmin(self.CalculateFittingTarget, self.coefficientTuple, maxiter = len(self.coefficientTuple) * self.fminIterationLimit, maxfun = len(self.coefficientTuple) * self.fminFunctionLimit, disp = 0, xtol=self.fmin_xtol, ftol=self.fmin_ftol) File "/usr/local/lib64/python2.5/site-packages/scipy/optimize/optimize.py", line 180, in fmin fsim[0] = func(x0) File "/usr/local/lib64/python2.5/site-packages/scipy/optimize/optimize.py", line 95, in function_wrapper return function(x, *args) File "/home/nwagner/src/PythonEquations/Examples/../../PythonEquations/EquationBaseClasses.py", line 551, in CalculateFittingTarget weave.inline(calc_code, ['equationNumber', 'coeffs', 'indepData', 'depData', 'target', 'upperConstraints', 'lowerConstraints', 'CalculatedTarget'], support_code = self.calc_target_with_cached_data_cpp_code, extra_compile_args = ['-O3','-funroll-loops','-march=i686','-malign-double'], compiler = 'gcc') File "/usr/local/lib64/python2.5/site-packages/scipy/weave/inline_tools.py", line 339, in inline **kw) File "/usr/local/lib64/python2.5/site-packages/scipy/weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File 
"/usr/local/lib64/python2.5/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/usr/local/lib64/python2.5/site-packages/scipy/weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/usr/local/lib64/python2.5/site-packages/numpy/distutils/core.py", line 172, in setup return old_setup(**new_attr) File "/usr/lib64/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) scipy.weave.build_tools.CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -I/usr/local/lib64/python2.5/site-packages/scipy/weave -I/usr/local/lib64/python2.5/site-packages/scipy/weave/scxx -I/usr/local/lib64/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c /home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.cpp -o /tmp/nwagner/python25_intermediate/compiler_18e66bfe87dea39099840dcce7b98cd8/home/nwagner/.python25_compiled/sc_3bc1bbd1512867cc1c4a5e726bdbd0b1938.o -O3 -funroll-loops -march=i686 -malign-double" failed with exit status 1 Nils From matthew.brett at gmail.com Sat May 12 08:01:10 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 12 May 2007 13:01:10 +0100 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070512114011.GE7911@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: <1e2af89e0705120501p1dccd8c5yfc2b229578484420@mail.gmail.com> > I would very much link the Getting Started wiki page ( > http://scipy.org/Getting_Started ) to the front page. But I am not sure > it is of good enough quality so far. Could people please have a look and > make comments, or edit the page. Thank you for doing this. It's pitched very well. 
Matthew From aisaac at american.edu Sat May 12 10:01:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 12 May 2007 10:01:56 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4645938E.5070604@ru.nl> References: <4645938E.5070604@ru.nl> Message-ID: On Sat, 12 May 2007, Stef Mientki apparently wrote: > does this imply that I can better stick to > just 1 regression ? http://en.wikipedia.org/wiki/Chow_test hth, Alan Isaac From zunzun at zunzun.com Sat May 12 11:51:12 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sat, 12 May 2007 11:51:12 -0400 Subject: [SciPy-user] linear regression In-Reply-To: References: <4645938E.5070604@ru.nl> <20070512113446.GA30707@zunzun.com> Message-ID: <20070512155112.GA2631@zunzun.com> On Sat, May 12, 2007 at 01:59:21PM +0200, Nils Wagner wrote: > > I just installed pythonequations 1.3 on a 64-bit machine (OpenSuSE 10.2). > I started with the examples. > > python NonLinearFit2D.py > which yielded two types of errors relating to 64-bit compilation: > error: CPU you selected does not support x86-64 instruction set > error: -malign-double makes no sense in the 64bit mode Weave allows passing compiler arguments, and in my code I have hard-coded (!!!) the following: extra_compile_args = ['-O3','-funroll-loops','-march=i686','-malign-double'] which gives rise to the errors you see. For a short-term fix, search the EquationBaseClasses.py for all instances of the text extra_compile_args and set these to ['-O2']. I see that I have applied these inconsistently as well, which is likely slowing down my web site a bit when fitting. In the longer term, 1) I should not hard-code these (scraping egg off face) as they are, rather use something milder as a default (-O2) which will be moved to a configuration file so it can be applied in all places in the same way. 2) I should add this information to the README file. 
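The short-term edit described above can be scripted. A hedged sketch of that fix, demonstrated on a small stand-in file: on a real checkout the target would be PythonEquations/EquationBaseClasses.py (the path in Nils's traceback), and you would want a backup before editing it in place.

```python
# Replace the hard-coded i686-specific flag list with a plain -O2,
# as James suggests. demo.py here stands in for EquationBaseClasses.py.
old = "extra_compile_args = ['-O3','-funroll-loops','-march=i686','-malign-double']"
new = "extra_compile_args = ['-O2']"

with open('demo.py', 'w') as f:                 # stand-in for the real file
    f.write("x = weave.inline(code, %s)\n" % old)

with open('demo.py') as f:
    source = f.read()

with open('demo.py', 'w') as f:                 # rewrite with the mild flags
    f.write(source.replace(old, new))

print(open('demo.py').read())
# x = weave.inline(code, extra_compile_args = ['-O2'])
```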
I have time this weekend to correct my design mistakes, and will post an updated Python Equations 1.4 a.s.a.p. James From s.mientki at ru.nl Sat May 12 13:35:17 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 12 May 2007 19:35:17 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070512114011.GE7911@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: <4645FAD5.9020306@ru.nl> hello Gael, Gael Varoquaux wrote: > Hi all, > > I would very much like to link the Getting Started wiki page ( > http://scipy.org/Getting_Started ) to the front page. But I am not sure > it is of good enough quality so far. I think it's very good, thank you ! don't mind it will never be perfect ;-) So yes put it on the front page, I had to grab back to this email, to get back, after browsing some other pages ;-) I miss 2 things: the first thing is of course "help", how do I find anything ? sorry I still don't know the answer after playing around for a few months with SciPy. It seems that IPython has some quite powerful functions, like scipy.*fft* I never used IPython; is this an IDE feature or some kind of library that can be used in any IDE? Second remark, concerns about installing Python, As a normal windows user, I get a heart attack if I see that page ;-) I think you should emphasize the Enthought edition, which is exactly what windows users want. And maybe the "Enstaller" Enthought edition is even better, but I don't know if it's already officially launched.
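One concrete answer to the "how do I find anything" question above: IPython's wildcard lookup (e.g. typing scipy.*fft*?) is essentially name matching over a module's attributes, which plain Python can do in any IDE; a numpy-only sketch:

```python
import fnmatch
import numpy as np

# List everything in numpy.fft whose name matches the shell-style
# pattern *fft* -- the same idea as IPython's wildcard search.
matches = sorted(n for n in dir(np.fft) if fnmatch.fnmatch(n, '*fft*'))
print(matches)
```

From there, help(np.fft.fftshift) (or fftshift? in IPython) shows the docstring of any match.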
thanks, Stef Mientki From s.mientki at ru.nl Sat May 12 13:39:09 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 12 May 2007 19:39:09 +0200 Subject: [SciPy-user] linear regression In-Reply-To: <20070512113446.GA30707@zunzun.com> References: <4645938E.5070604@ru.nl> <20070512113446.GA30707@zunzun.com> Message-ID: <4645FBBD.9050907@ru.nl> hi James, zunzun at zunzun.com wrote: > On Sat, May 12, 2007 at 12:14:38PM +0200, Stef Mientki wrote: > >> But I see that the relation is not completely linear, >> > > Try the 2D function finder at http/zunzun.com, or the > site fitting code at http://sf.net/projects/pythonequations. > Bot scipy and weave are needed to use the source code. > > thanks, but I'm not looking for a better fit, I just don't understand the results ;-) Besides I've a few reasons to keep it linear, and at most twice linear. Finally it must fit in a small 8-bit micro. Another reason is that the signal contains a lot of noise, and I always learned to keep the degrees of freedom as low as possible. cheers, Stef Mientki From gael.varoquaux at normalesup.org Sat May 12 13:39:14 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 12 May 2007 19:39:14 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <4645FAD5.9020306@ru.nl> References: <20070512114011.GE7911@clipper.ens.fr> <4645FAD5.9020306@ru.nl> Message-ID: <20070512173913.GB11346@clipper.ens.fr> On Sat, May 12, 2007 at 07:35:17PM +0200, Stef Mientki wrote: > I miss 2 things: > first thing is of course is "help", how do I find anything ? > sorry I still don't know the answer after playing around for a few > months with SciPy. Well this is still been worked on and far from beeing perfect. > Seem that IPython, has some quiet powerfull functions, like > scipy.*fft* > I never used IPython, if this a IDE feature or somekind of library that > can be used in any IDE, It the closest thing you'll get to a IDE, so far. It is a enhanced intercative shell. 
The "getting started" page shows a sample session using it. > Second remark, concerns about installing Python, > As a normal windows users, I get a hart attack if I see that page ;-) > I think you should emphasize the Enthought edition, Fare enough. This is a good remark. I'll emphasize this more on the "getting started" page > And maybe the "Enstaller" Enthought edition is even better, > but I don't know if it's already officially launched. It seems it still has a few quirks. Thanks for your comments, Ga?l From fperez.net at gmail.com Sat May 12 14:12:21 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 12 May 2007 12:12:21 -0600 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070512114011.GE7911@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: On 5/12/07, Gael Varoquaux wrote: > Hi all, > > I would very much link the Getting Started wiki page ( > http://scipy.org/Getting_Started ) to the front page. But I am not sure > it is of good enough quality so far. Could people please have a look and > make comments, or edit the page. Great work! Thanks a lot for putting time into this, which is extremely useful to newcomers. One minor nit: I think it would be better to more prominently mention the -pylab switch right at the begginning of your FFT example. The reason is that without it, plotting is really nearly unusable, so rather than 1. show really doesn't-works-well approach 2. show solution later I think it would be best to start with the -pylab approach from the start. You can mention that -pylab is only needed if you want plotting and requires matplotlib. 
Just my 1e-2, f From zunzun at zunzun.com Sun May 13 06:24:43 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 13 May 2007 06:24:43 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <20070512155112.GA2631@zunzun.com> References: <4645938E.5070604@ru.nl> <20070512113446.GA30707@zunzun.com> <20070512155112.GA2631@zunzun.com> Message-ID: <20070513102443.GA24195@zunzun.com> On Sat, May 12, 2007 at 11:51:12AM -0400, zunzun at zunzun.com wrote: > > > error: CPU you selected does not support x86-64 instruction set > > error: -malign-double makes no sense in the 64bit mode 64-bit compilation should be corrected and a new release deployed, and my thanks. James From elcorto at gmx.net Sun May 13 06:37:47 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 13 May 2007 12:37:47 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> Message-ID: <4646EA7B.3010109@gmx.net> Fernando Perez wrote: > On 5/12/07, Gael Varoquaux wrote: >> Hi all, >> >> I would very much link the Getting Started wiki page ( >> http://scipy.org/Getting_Started ) to the front page. But I am not sure >> it is of good enough quality so far. Could people please have a look and >> make comments, or edit the page. First of all: very cool work, thanks! and ... here are my 2 cents: You used "from scipy import *" which is totally OK for interactive work and small scripts as long as one knows what was imported from where. Maybe it's worth mentioning that for larger projects it's adviced to "import scipy.fft" or "from scipy.fftpack import fft" so as to educate people into this direction from the beginning. In the FFT example, maybe you could also mention (for completeness) scipy's fft stuff. People who just get started would maybe wonder what this mysterious numpy.dual is and why they should import fft from there :) Another small thing: After the "concatenate?" 
introduction, there is In [10]: plot(abs(N.concatenate((b[500:],b[:500])))) where you didn't mention that N is numpy. With the -pylab switch this could even be omitted. You focused much on interactive work. Maybe mention that for doing the above in a program one has to import concatenate from somewhere (numpy) manually. OK, this were my little stumble-upons. Otherwise as I said, many thanks for putting this together! -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From gael.varoquaux at normalesup.org Sun May 13 06:52:07 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 13 May 2007 12:52:07 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <4646EA7B.3010109@gmx.net> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> Message-ID: <20070513105207.GC7289@clipper.ens.fr> On Sun, May 13, 2007 at 12:37:47PM +0200, Steve Schmerler wrote: > You used "from scipy import *" which is totally OK for interactive work > and small scripts as long as one knows what was imported from where. > Maybe it's worth mentioning that for larger projects it's adviced to > "import scipy.fft" or "from scipy.fftpack import fft" so as to educate > people into this direction from the beginning. My experience is that it scares people, so I actually changed from this approach back to "from foo import *" > In the FFT example, maybe you could also mention (for completeness) > scipy's fft stuff. People who just get started would maybe wonder what > this mysterious numpy.dual is and why they should import fft from there > :) Good point. > Another small thing: After the "concatenate?" introduction, there is > In [10]: plot(abs(N.concatenate((b[500:],b[:500])))) > where you didn't mention that N is numpy. Good catch ! This is a left-out from a previous version where numpy was imported as N. I have played with the front page and the link to "getting started" should now be hard to miss ! 
I think those two pages are very important, and I invite you to modify them to make them as nice as possible. In particular, if someone could make a screenshot of the example session described in the getting started page under windows (I cannot, I do not have windows on any of my boxes). Something to replace the current image with the session described in this page. Cheers, Gaël From fredmfp at gmail.com Sun May 13 07:07:50 2007 From: fredmfp at gmail.com (fred) Date: Sun, 13 May 2007 13:07:50 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513105207.GC7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> Message-ID: <4646F186.4080409@gmail.com> Gael Varoquaux a écrit : > In particular, if someone could make a screenshot of the example session > described in the getting started page under windows (I cannot, I do not > have windows on any of my boxes). Something to replace the current image > with the session described in this page. > My 2 cents. Why do you want it under Windows ? By the way, figures 1 & 2 are snapshots of matplotlib, whereas figure 3 is a screenshot. (I hope I'm clear ;-) I think the first two figures should be displayed as figure 3 is, in order to show what can be done with the UI, and not only the FFT result. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Sun May 13 07:10:25 2007 From: fredmfp at gmail.com (fred) Date: Sun, 13 May 2007 13:10:25 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513105207.GC7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> Message-ID: <4646F221.7080708@gmail.com> Gael Varoquaux a écrit : > In particular, if someone could make a screenshot of the example session > described in the getting started page under windows (I cannot, I do not > have windows on any of my boxes).
Something to replace the current image > with the session described in this page. > Again my 2 cents, I like to have a toc in the wiki pages... :-) Cheers, -- http://scipy.org/FredericPetit From gael.varoquaux at normalesup.org Sun May 13 07:12:07 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 13 May 2007 13:12:07 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <4646F186.4080409@gmail.com> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> <4646F186.4080409@gmail.com> Message-ID: <20070513111205.GD7289@clipper.ens.fr> On Sun, May 13, 2007 at 01:07:50PM +0200, fred wrote: > Gael Varoquaux a ?crit : > > In particular, if someone could make a screenshot of the example session > > described in the getting started page under windows (I cannot, I do not > > have windows on any of my boxes). Something to replace the current image > > with the session described in this page. > My 2 cents. > Why do you want it under Windows ? Because it is what most people use. I don't want people having a look at this page and thinking that this is another linux software. I want people to be able to relate to the screenshot and think, "Yeah, I could be doing this right here on my desktop, and it would look like that". > By the way, figures 1 & 2 are snapshots of matplotlib, whereas figure 3 > is a screenshot. > (I hope I'm clear ;-) > I think the first two figures should be displayed as the figure 3, in > order to show what can be done > with the UI, and not only the FFT result. Fare enough, go ahead :->. It might be nice though to have these screenshots under windows too. 
Gaël From elcorto at gmx.net Sun May 13 07:57:09 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 13 May 2007 13:57:09 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513105207.GC7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> Message-ID: <4646FD15.1040502@gmx.net> Gael Varoquaux wrote: > On Sun, May 13, 2007 at 12:37:47PM +0200, Steve Schmerler wrote: >> You used "from scipy import *" which is totally OK for interactive work >> and small scripts as long as one knows what was imported from where. >> Maybe it's worth mentioning that for larger projects it's advised to >> "import scipy.fft" or "from scipy.fftpack import fft" so as to educate >> people in this direction from the beginning. > > My experience is that it scares people, so I actually changed from this > approach back to "from foo import *" > Agreed for interactive use and for convincing a half-willing converter from Matlab :) I wouldn't change the wiki examples but the issue should be mentioned somewhere so that people are aware of it if they start coding something serious. > I have played with the front page and the link to "getting started" > should now be hard to miss ! I think those two pages are very important, > and I invite you to modify them to make them as nice as possible. Right. I've never edited a wiki page (although I have an account :)) but I'll (try to) add my import bla bla stuff ... -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams From gael.varoquaux at normalesup.org Sun May 13 07:58:08 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 13 May 2007 13:58:08 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <4646FD15.1040502@gmx.net> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> <4646FD15.1040502@gmx.net> Message-ID: <20070513115808.GF7289@clipper.ens.fr> On Sun, May 13, 2007 at 01:57:09PM +0200, Steve Schmerler wrote: > Agreed for interactive use and for convincing a half-willing converter > from Matlab :) I wouldn't change the wiki examples but the issue should > be mentioned somewhere so that people are aware of it if they start > coding something serious. Exactly, you can add this in the last section, about writing scripts. Gaël From s.mientki at ru.nl Sun May 13 13:47:01 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 13 May 2007 19:47:01 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <4646EA7B.3010109@gmx.net> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> Message-ID: <46474F15.8030807@ru.nl> > where you didn't mention that N is numpy. With the -pylab switch this could even be > omitted. please explain what the "-pylab switch" is, is it an interactive command or a startup parameter ? I'm definitely a newbie !! cheers, Stef From gael.varoquaux at normalesup.org Sun May 13 13:52:17 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 13 May 2007 19:52:17 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <46474F15.8030807@ru.nl> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> Message-ID: <20070513175217.GI7289@clipper.ens.fr> On Sun, May 13, 2007 at 07:47:01PM +0200, Stef Mientki wrote: > > where you didn't mention that N is numpy. With the -pylab switch this could even be > > omitted.
> please explain what the "-pylab switch" is, > is it an interactive command or a startup parameter ? It is a startup parameter. Would you mind reformulating this part of the page so that it seems as clear as possible to you? That way others won't stumble on this, and be too shy to ask here. > I'm definitely a newbie !! Great, then you can be my guinea pig :->. Don't hesitate to point out other parts that are not clear. Gaël From s.mientki at ru.nl Sun May 13 17:56:58 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 13 May 2007 23:56:58 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513175217.GI7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> Message-ID: <464789AA.6010709@ru.nl> Gael Varoquaux wrote: > On Sun, May 13, 2007 at 07:47:01PM +0200, Stef Mientki wrote: > > >>> where you didn't mention that N is numpy. With the -pylab switch this could even be >>> omitted. >>> >> please explain what the "-pylab switch" is, >> is it an interactive command or a startup parameter ? >> > > It is a startup parameter. Sorry, I might be even more stupid than a newbie, but what does it do ? I guess it disables numpy, and replaces it by Scipy, so is everything I need in Scipy ? (I thought Scipy was an encapsulation / extension of Numpy ???) > Would you mind reformulating this part of the > page so that it seems as clear as possible to you? That way others won't > stumble on this, and be too shy to ask here. > I'm glad to change it, but again I'm too stupid to do it ... ... yes I'm a real windows user, ... so before even thinking of Python, I first looked if there was a good IDE, then I found a few, selected SPE and PyScripter, and after a couple of days, decided for PyScripter. I never even tried to start Python from a commandline ;-) And I really don't know how to set a commandline parameter for Python.
And now you might understand that the reason Linux will never break through is spoiled M$ users like me ;-) cheers, Stef Mientki > >> I'm definitely a newbie !! >> > > Great, then you can be my guinea pig :->. > > Don't hesitate to point out other parts that are not clear. > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From gael.varoquaux at normalesup.org Sun May 13 18:04:19 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 May 2007 00:04:19 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <464789AA.6010709@ru.nl> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> Message-ID: <20070513220419.GM7289@clipper.ens.fr> On Sun, May 13, 2007 at 11:56:58PM +0200, Stef Mientki wrote: > > Would you mind reformulating this part of the > > page so that it seems as clear as possible to you? That way others won't > > stumble on this, and be too shy to ask here. > I'm glad to change it, but again I'm too stupid to do it ... > ... yes I'm a real windows user, > ... so before even thinking of Python, > I first looked if there was a good IDE, > then I found a few, selected SPE and PyScripter, > and after a couple of days, decided for PyScripter. > I never even tried to start Python from a commandline ;-) > And I really don't know how to set a commandline parameter for Python. Well, I can tell you I don't know a good way to do it under windows. I do know a way, though. Copy the shortcut you use to start ipython. Copy it to, say, a shortcut called "ipython pylab". Then edit its properties, and change the command line to add at the very end " -pylab". I don't have windows around but this is approximately the way I would do it.
> And now you might understand that the reason Linux will never break > through is spoiled M$ users like me ;-) Well, you know, actually windows is really making your life harder by trying to hide the command line as much as possible. Some things are just easier to do on a good command line. Unfortunately the windows command line is pretty bad and doesn't help you. This does not mean that under Linux you _need_ the command line to do the things you do without the command line under windows. You can just do more things because you happen to have a good command line on top of other things. Do try to achieve what I have described with the shortcut, and if you succeed please describe it on the wiki. If you don't, post here for some more help, I am sure someone will be able to help you. If not I will look tomorrow on one of the lab's computers with windows (you are lucky, we still have 2). Cheers, Gaël From fredmfp at gmail.com Sun May 13 18:46:26 2007 From: fredmfp at gmail.com (fred) Date: Mon, 14 May 2007 00:46:26 +0200 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513111205.GD7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <20070513105207.GC7289@clipper.ens.fr> <4646F186.4080409@gmail.com> <20070513111205.GD7289@clipper.ens.fr> Message-ID: <46479542.2090609@gmail.com> Gael Varoquaux a écrit : > Fair enough, go ahead :->. It might be nice though to have these > screenshots under windows too. > I have no Windows at all.
I leave it to someone else ;-) Cheers, -- http://scipy.org/FredericPetit From wbaxter at gmail.com Sun May 13 18:47:09 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 14 May 2007 07:47:09 +0900 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513220419.GM7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: On 5/14/07, Gael Varoquaux wrote: > > On Sun, May 13, 2007 at 11:56:58PM +0200, Stef Mientki wrote: > > > Would you mind reformulating this part of the > > > page so that it seems as clear as possible to you? That way others won't > > > stumble on this, and be too shy to ask here. > > > I'm glad to change it, but again I'm too stupid to do it ... > > ... yes I'm a real windows user, > > ... so before even thinking of Python, > > I first looked if there was a good IDE, > > then I found a few, selected SPE and PyScripter, > > and after a couple of days, decided for PyScripter. > > I never even tried to start Python from a commandline ;-) > > And I really don't know how to set a commandline parameter for Python. Well, I can tell you I don't know a good way to do it under windows. I do > know a way, though. Copy the shortcut you use to start ipython. Copy it > to, say, a shortcut called "ipython pylab". Then edit its properties, and > change the command line to add at the very end " -pylab". I don't have > windows around but this is approximately the way I would do it. I have such a shortcut in my start menu. I guess I made it myself. I think IPython's windows installer really should create this shortcut for the user. --bb
From ryanlists at gmail.com Sun May 13 18:49:16 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 13 May 2007 17:49:16 -0500 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070513220419.GM7289@clipper.ens.fr> References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: Here is the shortcut I use under windows: C:\Python25\python.exe C:\Python25\scripts\ipython -pylab -p scipy which is actually the value of Target if I right-click on the shortcut and go to properties. The -p scipy part automatically loads scipy and the -pylab part makes IPython handle threading of plots very nicely. You don't have to use scipy and pylab together. Scipy uses numpy and adds additional tools to it - it does not replace it. Ryan On 5/13/07, Gael Varoquaux wrote: > On Sun, May 13, 2007 at 11:56:58PM +0200, Stef Mientki wrote: > > > Would you mind reformulating this part of the > > > page so that it seems as clear as possible to you? That way others won't > > > stumble on this, and be too shy to ask here. > > > I'm glad to change it, but again I'm too stupid to do it ... > > ... yes I'm a real windows user, > > ... so before even thinking of Python, > > I first looked if there was a good IDE, > > then I found a few, selected SPE and PyScripter, > > and after a couple of days, decided for PyScripter. > > I never even tried to start Python from a commandline ;-) > > And I really don't know how to set a commandline parameter for Python. > > Well, I can tell you I don't know a good way to do it under windows. I do > know a way, though. Copy the shortcut you use to start ipython. Copy it > to, say, a shortcut called "ipython pylab". Then edit its properties, and > change the command line to add at the very end " -pylab". I don't have > windows around but this is approximately the way I would do it.
> > > And now you might understand that the reason Linux will never break > > through is spoiled M$ users like me ;-) > > Well, you know, actually windows is really making your life harder by > trying to hide the command line as much as possible. Some things are > just easier to do on a good command line. Unfortunately the windows > command line is pretty bad and doesn't help you. > > This does not mean that under Linux you _need_ the command line to do the > things you do without the command line under windows. You can just do > more things because you happen to have a good command line on top of > other things. > > Do try to achieve what I have described with the shortcut, and if you > succeed please describe it on the wiki. If you don't, post here for some > more help, I am sure someone will be able to help you. If not I will look > tomorrow on one of the lab's computers with windows (you are lucky, we > still have 2). > > Cheers, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Sun May 13 19:13:10 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 13 May 2007 17:13:10 -0600 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: On 5/13/07, Bill Baxter wrote: > On 5/14/07, Gael Varoquaux wrote: > > On Sun, May 13, 2007 at 11:56:58PM +0200, Stef Mientki wrote: > > > > Would you mind reformulating this part of the > > > > page so that it seems as clear as possible to you? That way others won't > > > > stumble on this, and be too shy to ask here. > > > > > I'm glad to change it, but again I'm too stupid to do it ... > > > ... yes I'm a real windows user, > > > ...
so before even thinking of Python, > > > I first looked if there was a good IDE, > > > then I found a few, selected SPE and PyScripter, > > > and after a couple of days, decided for PyScripter. > > > I never even tried to start Python from a commandline ;-) > > > And I really don't know how to set a commandline parameter for Python. > > > > Well, I can tell you I don't know a good way to do it under windows. I do > > know a way, though. Copy the shortcut you use to start ipython. Copy it > > to, say, a shortcut called "ipython pylab". Then edit its properties, and > > change the command line to add at the very end " -pylab". I don't have > > windows around but this is approximately the way I would do it. > > I have such a shortcut in my start menu. I guess I made it myself. I think > IPython's windows installer really should create this shortcut for the user. It was added to the most recent release, thanks to Ryan. I might actually add another one: one with 'plain pylab' and one loading the full scipy profile (scipy adds startup time). Cheers, f From wbaxter at gmail.com Sun May 13 19:25:15 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 14 May 2007 08:25:15 +0900 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: On 5/14/07, Fernando Perez wrote: > > On 5/13/07, Bill Baxter wrote: > > On 5/14/07, Gael Varoquaux wrote: > I have such a shortcut in my start menu. I guess I made it myself. I > think > > IPython's windows installer really should create this shortcut for the > user. > > It was added to the most recent release, thanks to Ryan. Excellent. I might actually add another one: one with 'plain pylab' and one > loading the full scipy profile (scipy adds startup time).
That sounds good too as long as it is given a non-confusing name. Something like "Pylab" and "Pylab with SciPy" seems less confusing to me than "Plain Pylab" and "Pylab". --bb From fperez.net at gmail.com Sun May 13 19:46:05 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 13 May 2007 17:46:05 -0600 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: On 5/13/07, Bill Baxter wrote: > That sounds good too as long as it is given a non-confusing name. Something > like "Pylab" and "Pylab with SciPy" seems less confusing to me than "Plain > Pylab" and "Pylab". No worries, I didn't mean those as the actual shortcut names. We'll use something more descriptive. cheers, f From david at ar.media.kyoto-u.ac.jp Sun May 13 20:24:52 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 14 May 2007 09:24:52 +0900 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning Message-ID: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Dear scipy developers and users, As some of you may know already, my proposal for pymachine, a python toolbox for machine learning, has been accepted for the Summer of Code 2007. The detailed proposal is online [1], and wikified [2]. The proposal timeline consists of two main steps: - first improving existing tools related to machine learning in scipy, so that they become part of "official scipy" (e.g. all tools in the toolbox going into the main scipy namespace). This includes scipy.cluster, scipy.sandbox.pyem and scipy.sandbox.svm.
- Then building from this set of toolboxes a higher-level package, in the spirit of similar software, such as orange or weka [3], including some visualization tools for data exploration. This part of the code would be put in scikits (because it will require extra dependencies). All development will happen in the scipy and scikits subversion repositories. Now, before starting to work on it, I would like to get some feedback about what other people think is necessary with respect to those goals: - What are the requirements for a toolbox to go from the sandbox into the scipy namespace ? - For people willing to use machine learning related software in python/scipy, what are the main requirements/concerns ? (e.g. data exploration GUI, efficiency, readability of the algorithms, etc...) cheers, David [1] http://www.ar.media.kyoto-u.ac.jp/members/david/fullproposal.html [2] http://projects.scipy.org/scipy/scipy/wiki/MachineLearning [3] orange http://magix.fri.uni-lj.si/orange/, weka: http://www.cs.waikato.ac.nz/ml/weka/ From ryanlists at gmail.com Sun May 13 21:07:22 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 13 May 2007 20:07:22 -0500 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070512114011.GE7911@clipper.ens.fr> <4646EA7B.3010109@gmx.net> <46474F15.8030807@ru.nl> <20070513175217.GI7289@clipper.ens.fr> <464789AA.6010709@ru.nl> <20070513220419.GM7289@clipper.ens.fr> Message-ID: Scipy definitely adds startup time, so having both is a great idea. I like Pylab and "IPython Scientific" as the two names. That is what I use. Not that Fernando wants help naming them or that this is the IPython list :) - I'll shut up now. On 5/13/07, Fernando Perez wrote: > On 5/13/07, Bill Baxter wrote: > > > That sounds good too as long as it is given a non-confusing name. Something > > like "Pylab" and "Pylab with SciPy" seems less confusing to me than "Plain > > Pylab" and "Pylab".
> > No worries, I didn't mean those as the actual shortcut names. We'll > use something more descriptive. > > cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Mon May 14 02:10:54 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 08:10:54 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: > > - For people willing to use machine learning related software in > python/scipy, what are the main requirements/concerns ? (e.g. data > exploration GUI, efficiency, readability of the algorithms, etc...) > It depends on what you call visualization, from my point of view. Visualization is, for me, made by dimensionality reduction - I want to port a Matlab toolbox that encompasses every algorithm in the field, I wish I had more time ! -, KPCA is used for that purpose, not EM or SVMs, for instance. I didn't know of Orange, it seems very interesting; I had a similar piece of software in mind, with a pipeline processor, but with more interaction (several inputs, outputs, flow control, ...) BTW, if you need a neighbour tree implementation in C++ with a Python interface - useful for clustering ? -, let me know ;) Matthieu
From erendisaldarion at gmail.com Mon May 14 02:53:32 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Mon, 14 May 2007 14:53:32 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: <4648076C.7090705@gmail.com> David Cournapeau wrote: > Dear scipy developers and users, > - For people willing to use machine learning related software in > python/scipy, what are the main requirements/concerns ? (e.g. data > exploration GUI, efficiency, readability of the algorithms, etc...) > > cheers, > > David To me, efficiency and readability of the algorithms. And orange impressed me. But neither orange nor numpy handles sparse matrices smoothly; for example, I don't know how to SVD a large-scale sparse matrix with numpy. From opossumnano at gmail.com Mon May 14 03:06:04 2007 From: opossumnano at gmail.com (Tiziano Zito) Date: Mon, 14 May 2007 09:06:04 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: > I had a similar piece of software in mind, with a pipeline processor, but with > more interaction (several inputs, outputs, flow control, ...) > FWIW, MDP ( http://mdp-toolkit.sourceforge.net ) may fit your needs :-) ciao, tiziano From matthieu.brucher at gmail.com Mon May 14 03:13:34 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 09:13:34 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: Thanks for the link ;) My view was still a GUI framework, like Orange, but with more flexibility. For instance, every scipy module or scikit could have a wrapper in this framework and be used efficiently.
In my lab, our objective is to build such a tool, but with a limited application field, only medical imaging. Perhaps David could tell us if his thoughts were the same as mine ? With this GUI application, there would be a command-line tool to launch pipelines too. Matthieu 2007/5/14, Tiziano Zito : > > > I had a similar piece of software in mind, with a pipeline processor, > but with > > more interaction (several inputs, outputs, flow control, ...) > > > > FWIW, MDP ( http://mdp-toolkit.sourceforge.net ) may fit your needs :-) > > ciao, > tiziano > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From sim at klubko.net Mon May 14 03:13:40 2007 From: sim at klubko.net (Petr Šimon) Date: Mon, 14 May 2007 15:13:40 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4648076C.7090705@gmail.com> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <4648076C.7090705@gmail.com> Message-ID: <200705141513.40916.sim@klubko.net> On Monday 14 May 2007 14:53:32 Aldarion wrote: > David Cournapeau wrote: > > Dear scipy developers and users, > > - For people willing to use machine learning related software in > > python/scipy, what are the main requirements/concerns ? (e.g. data > > exploration GUI, efficiency, readability of the algorithms, etc...) > > > > cheers, > > > > David > > To me, efficiency and readability of the algorithms. > And orange impressed me. > But neither orange nor numpy handles sparse matrices smoothly; > for example, I don't know how to SVD a large-scale sparse matrix with numpy. > In general most of the ML packages like weka and orange are great for small projects, but since they typically load all the data into memory, you are on your own with larger datasets. This is what I find to be a major concern for me.
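To make the memory concern concrete, the usual way around it is out-of-core processing: accumulate a statistic one chunk at a time, so only the current chunk and the running totals ever live in memory. A minimal, package-agnostic sketch (`streaming_mean` and the in-memory chunks are hypothetical names for illustration):

```python
def streaming_mean(chunks):
    """Mean of a dataset delivered as an iterable of chunks; only the
    running sum and count are kept in memory, never the full dataset."""
    total = 0.0
    count = 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count

# In practice the chunks would be read lazily from disk; a small
# in-memory stand-in keeps the example self-contained:
chunks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(streaming_mean(chunks))  # 3.5
```

Many statistics (counts, sums, moments) can be accumulated this way; algorithms that need the whole dataset per iteration, like batch EM, are exactly the hard case discussed below.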
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Petr Šimon http://www.klubko.net PhD student, TIGP-CLCLP Academia Sinica http://clclp.ling.sinica.edu.tw "... what the Buddhist call 'right livelyhood', I didn't have that, I didn't have any way of making a living, and to make a living is to be doing something that you love, something that was creative, something that made sense..." Mark Bittner, parrot caretaker, Telegraph Hill From gael.varoquaux at normalesup.org Mon May 14 03:40:01 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 May 2007 09:40:01 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: <20070514073958.GD9842@clipper.ens.fr> On Mon, May 14, 2007 at 09:13:34AM +0200, Matthieu Brucher wrote: > My view was still a GUI framework, like Orange, but with more flexibility. > For instance, every scipy module or scikit could have a wrapper in this > framework and be used efficiently. I am not too sure what you call machine learning, but the GUI of Orange reminds me of LabView. I was thinking of how to do a labview-like UI with Python... I think Traits[1] would be a great solution. Each node would be represented as an object.
If it is a filter-like object with one output and two inputs, all floats, it could be something like:

+++++++++++++++++++++++++++++++++++
class Filter(HasTraits):
    in1 = Float()
    in2 = Float()
    out = Float()

    def _in1_changed(self, old, new):
        self.compute(new, self.in2)

    def _in2_changed(self, old, new):
        self.compute(self.in1, new)

    def compute(self, in1, in2):
        # Do some computation with in1 and in2 and store the result
        # in out
        pass
+++++++++++++++++++++++++++++++++++

Say you want to make an addition filter:

+++++++++++++++++++++++++++++++++++
class AdditionFilter(Filter):
    def compute(self, in1, in2):
        self.out = in1 + in2
+++++++++++++++++++++++++++++++++++

Now to have a nice diagram representation of your filters you need to build a UML-like diagram. Maybe http://allendowney.com/swampy/lumpy.html can be a starting point. If you want to have dialogs associated with your objects you can use traitsUI. Say you want a way of inputting a float:

+++++++++++++++++++++++++++++++++++
class FloatInput(HasTraits):
    out = Float()
+++++++++++++++++++++++++++++++++++

The dialog can be brought up by calling the "edit_traits" method of a FloatInput object. A display can be coded with:

+++++++++++++++++++++++++++++++++++
class FloatDisplay(HasTraits):
    value = Float()
    view = View(Item('value', style='readonly'))
+++++++++++++++++++++++++++++++++++

Its dialog will also be displayed using the "edit_traits" method. ... The nice thing with the approach is that it is very general and easy to extend. What do you think ?
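The same observer-style node idea can be sketched without any Traits dependency, using plain Python properties so that assigning an input recomputes the output. This is a toy illustration only; the `Node` and `AdditionNode` names are hypothetical and not part of Traits:

```python
class Node:
    """Minimal stand-in for the Traits-based Filter: assigning an
    input property triggers a recomputation of the output."""
    def __init__(self):
        self._in1 = 0.0
        self._in2 = 0.0
        self.out = 0.0

    @property
    def in1(self):
        return self._in1

    @in1.setter
    def in1(self, value):
        self._in1 = value
        self.compute(self._in1, self._in2)

    @property
    def in2(self):
        return self._in2

    @in2.setter
    def in2(self, value):
        self._in2 = value
        self.compute(self._in1, self._in2)

    def compute(self, in1, in2):
        pass  # subclasses store their result in self.out


class AdditionNode(Node):
    def compute(self, in1, in2):
        self.out = in1 + in2


node = AdditionNode()
node.in1 = 2.0
node.in2 = 3.0
print(node.out)  # 5.0
```

Traits adds the static `_name_changed` notification, validation, and the UI layer on top of this basic pattern.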
Gaël [1] Traits and TraitsUI, see: http://code.enthought.com/traits/ http://www.gael-varoquaux.info/computers/traits_tutorial From giorgio.luciano at chimica.unige.it Mon May 14 03:49:12 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Mon, 14 May 2007 09:49:12 +0200 Subject: [SciPy-user] linear regression In-Reply-To: <4645938E.5070604@ru.nl> References: <4645938E.5070604@ru.nl> Message-ID: <46481478.8050102@chimica.unige.it> Don't look at R while judging a regression ! Look at the residuals. I know that journals want R and R2, but the more data you enter, the better R will always look, and it doesn't give you an idea of the real goodness of your regression, so don't invest too much effort in it; use another method. my two cents Giorgio From david at ar.media.kyoto-u.ac.jp Mon May 14 03:44:41 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 14 May 2007 16:44:41 +0900 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <200705141513.40916.sim@klubko.net> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <4648076C.7090705@gmail.com> <200705141513.40916.sim@klubko.net> Message-ID: <46481369.6020309@ar.media.kyoto-u.ac.jp> Petr Šimon wrote: > On Monday 14 May 2007 14:53:32 Aldarion wrote: >> David Cournapeau wrote: >>> Dear scipy developers and users, >>> - For people willing to use machine learning related software in >>> python/scipy, what are the main requirements/concerns ? (e.g. data >>> exploration GUI, efficiency, readability of the algorithms, etc...) >>> >>> cheers, >>> >>> David >> To me, efficiency and readability of the algorithms. >> And orange impressed me. >> But neither orange nor numpy handles sparse matrices smoothly; >> for example, I don't know how to SVD a large-scale sparse matrix with numpy.
>> > In general most of the ML packages like weka and orange are great for small > projects, but since they typically load all the data into memory, you are on > your own with larger datasets. This is what I find to be a major concern for > me. I understand the limitation. I think it is important on the front-end side to have a global mechanism to enable on-disk data, for streaming data directly from files instead of loading everything in memory. But then, there is a problem on the back-end side: most algorithms expect all their input data at once. For example, one of the algorithms which will be supported is Expectation Maximization for mixtures of Gaussians. Every iteration of the EM algorithm expects its data to be available; there are some extensions possible to enable iterative EM algorithms (one implementation is available in sandbox.pyem, but really slow for now for no good reason outside laziness). Basically, I have not thought a lot about this problem, but I think that it needs explicit support from the algorithm itself to be useful, generally. The algorithm has to be able to run several times on different parts of the dataset, while remembering already computed parts (I don't know if there is a global name for this kind of behaviour). What do you have in mind when talking about big problems ? What kind of size are we talking about ? David From matthieu.brucher at gmail.com Mon May 14 03:54:11 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 09:54:11 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <20070514073958.GD9842@clipper.ens.fr> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> Message-ID:
It was for more than only machine learning, it was for most processing :) For instance, audio filtering, neuroimaging (SPM processing, normalization, ...), ... I was thinking of how to do a LabView-like UI with Python... I think > Traits[1] would be a great solution. > > Each node would be represented as an object. If it is a filter-like > object with one output and two inputs, all floats, it could be something > like: Interesting, indeed. I have to check Traits now ;) - I was for the moment wxPython-based - On the other hand, I don't think that having fixed inputs and outputs is a good thing. For instance a "for" filter - yes, in my view, control flow is implemented like a filter, and filters can contain other filters - needs one or more inputs (for el1, el2, el3 in zip(seq1, seq2, seq3): for instance) and several outputs. I don't know at this point how I would design this, but I'm thinking of it. Say you want to make an addition filter: > > +++++++++++++++++++++++++++++++++++ > class AdditionFilter(Filter): > def compute(self, in1, in2): > self.out = in1 + in2 > +++++++++++++++++++++++++++++++++++ > > Now to have a nice diagram representation of your filters you need to > build a UML-like diagram. Maybe http://allendowney.com/swampy/lumpy.html > can be a starting point. Yes, the UI is like something I would like to have, although the Orange GUI is very interesting too because of the icons. If you want to have dialogs associated with your objects you can use > traitsUI. Say you want a way of inputting a float : > > +++++++++++++++++++++++++++++++++++ > class FloatInput(HasTraits): > out = Float() > +++++++++++++++++++++++++++++++++++ > > The dialog can be brought up by calling the "edit_traits" method of a > FloatInput object. 
> > A display can be coded with: > > +++++++++++++++++++++++++++++++++++ > class FloatDisplay(HasTraits): > value = Float() # 'in' is a Python keyword, so use another name > view = View(Item('value', style='readonly')) > +++++++++++++++++++++++++++++++++++ > > Its dialog will also be displayed using the "edit_traits" method. > > ... > > The nice thing with the approach is that it is very general and easy to > extend. > > What do you think ? It is almost like I thought of it, for the moment. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From erendisaldarion at gmail.com Mon May 14 03:55:27 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Mon, 14 May 2007 15:55:27 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <200705141513.40916.sim@klubko.net> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <4648076C.7090705@gmail.com> <200705141513.40916.sim@klubko.net> Message-ID: <464815EF.9070303@gmail.com> Petr Šimon wrote: > In general most of the ML packages like weka and orange are great for small > projects, but since they typically load all the data into memory, you are on > your own with larger datasets. This is what I find to be a major concern for > me. Thanks for the info. Orange (and weka) includes several methods for classification, but falls short on feature selection; FS is important too, but more domain specific. Maybe the GUI part hinders scaling further... From matthieu.brucher at gmail.com Mon May 14 03:59:19 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 09:59:19 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481369.6020309@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <4648076C.7090705@gmail.com> <200705141513.40916.sim@klubko.net> <46481369.6020309@ar.media.kyoto-u.ac.jp> Message-ID: > > I understand the limitation. 
I think it is important on the front-end > side to have a global mechanism to enable on-disk data, for streaming > data directly from files instead of loading everything in memory. But > then, there is a problem on the back-end side: most algorithms expect > all their input data at once. We have the same problem here: one image is 64MB, we use several images at once, not to mention deformation fields (400MB). What we thought of, as a solution, was to write the result of each filter to the hard drive (simple to display to see if something went wrong, simple to reload, ...) and to load what was needed by the next filter. Well, that does not work for too-big-to-fit-in-memory datasets, but it is a good-for-a-start policy. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sim at klubko.net Mon May 14 04:01:45 2007 From: sim at klubko.net (Petr Šimon) Date: Mon, 14 May 2007 16:01:45 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481369.6020309@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <200705141513.40916.sim@klubko.net> <46481369.6020309@ar.media.kyoto-u.ac.jp> Message-ID: <200705141601.45532.sim@klubko.net> On Monday 14 May 2007 15:44:41 David Cournapeau wrote: > Petr Šimon wrote: > > On Monday 14 May 2007 14:53:32 Aldarion wrote: > >> David Cournapeau wrote: > >>> Dear scipy developers and users, > >>> - For people willing to use machine learning related software in > >>> python/scipy, what are the main requirements/concerns? (eg Data > >>> exploration GUI, efficiency, readability of the algorithms, etc...) > >>> > >>> cheers, > >>> > >>> David > >> > >> To me, efficiency and readability of the algorithm. > >> Orange impressed me, > >> but neither orange nor numpy handle sparse matrices smoothly; > >> for example, I don't know how to SVD a large-scale sparse matrix with numpy. 
> > > > In general most of the ML packages like weka and orange are great for > > small projects, but since they typically load all the data into memory, > > you are on your own with larger dataset. This is what I find to be a > > major concern for me. > > I understand the limitation. I think it is important on the frontend > side to have a global mechanism to enable ondisk data, for streaming > data directly from files instead of loading everything in memory. But > then, there is a problem on the back-end side: most algorithms expects > all their input data at once. > > For example, one of the algorithm which will be supported is Expectation > Maximization for mixture of Gaussian. Every iteration of the EM > algorithm expects its data to be available; there are some extension > possible to enable iterative EM algorithms (one implementation is > available in sandbox.pyem, but really slow for now for no good reason > outside lazyness). > > Basically, I have not thought a lot about this problem, but I think that > it needs explicit support from the algorithm itself to be useful, > generally. The algorithm has to be able to run several times on > different parts of the dataset, while remembering already computed parts > (I don't know if there is a global name for this kind of behaviour). > > What do you have in mind when talking about big problems ? What kind of > size are we talking about ? > Yes I know it's not as easy as I would wish :), you did spell it out quite well. E.g. I had cca 14mil 5-D vectors. 
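As a sanity check on the sizes being discussed: 14 million 5-D vectors is roughly half a gigabyte in double precision. A quick back-of-the-envelope sketch (the float64 assumption is mine; the 16-byte figure mentioned in the thread corresponds to complex doubles):

```python
import numpy as np

n_vectors, n_dims = 14_000_000, 5

# total footprint of the raw array for two candidate dtypes
mb_float64 = n_vectors * n_dims * np.dtype(np.float64).itemsize / 1024**2
mb_complex128 = n_vectors * n_dims * np.dtype(np.complex128).itemsize / 1024**2

print(f"float64:    {mb_float64:.0f} MB")     # ~534 MB
print(f"complex128: {mb_complex128:.0f} MB")  # ~1068 MB
```

Either way the raw data fits in the RAM of a typical workstation, which suggests the failures seen when loading such datasets into GUI tools are implementation overhead rather than a fundamental limit.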
Petr > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Mon May 14 04:07:59 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 May 2007 10:07:59 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> Message-ID: <20070514080759.GE9842@clipper.ens.fr> On Mon, May 14, 2007 at 09:54:11AM +0200, Matthieu Brucher wrote: > On the other hand, I don't think that having fixed inputs and outpus is a > good thing, for instance a "for" filter - yes, in my view, control flow > are implemented like a filter, and they can contain other filters - needs > one or more inputs (for el1, el2, el3 in zip(seq1, seq2, seq3): for > instance) and several outpus, I don't know at this point how I would > design this, but I'm thinking of it. You can implement a for filter with a fixed number of outputs ( (el1, el2? el3) in you example). I do agree that zip, sequence unpacking, etc are a bit more tricky. These might need special approach. However for the general case I think you are better of having fixed numbers of outputs. This will make designing your UI way simpler. I think a node needs to display how many inputs and outputs it has. 
The nice thing with Traits is not only its GUI generation code (which I really find great), it is also its inversion of control: +++++++++++++++++++++++++++++++++++ class Filter(HasTraits): in1 = Float() in2 = Float() out = Float() def _in1_changed(self, old, new): self.compute(new, self.in2) def _in2_changed(self, old, new): self.compute(self.in1, new) def compute(self, in1, in2): # Do some computation with in1 and in2 and store the results in # out +++++++++++++++++++++++++++++++++++ The compute method is called automatically when the inputs are changed. There is no need for any manual update code. Similarly, dialogs can update objects live, and can reflect the current value of the attributes, even if they change. This will really make your code simpler and more readable, I think. My 2 cents, Gaël From matthieu.brucher at gmail.com Mon May 14 04:21:58 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 10:21:58 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <20070514080759.GE9842@clipper.ens.fr> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <20070514080759.GE9842@clipper.ens.fr> Message-ID: > > You can implement a for filter with a fixed number of outputs ( (el1, > el2, el3) in your example). I do agree that zip, sequence unpacking, etc > are a bit more tricky. These might need a special approach. However for the > general case I think you are better off having fixed numbers of outputs. > This will make designing your UI way simpler. I think a node needs to > display how many inputs and outputs it has. I agree with your point. I have to dig into this a little more to find a usable solution. The compute method is called automatically when the inputs are changed. > There is no need for any manual update code. Similarly, dialogs can > update objects live, and can reflect the current value of the attributes, > even if they change. 
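The inversion-of-control pattern above does not require Traits to experiment with; plain Python properties can mimic the `_in1_changed`/`_in2_changed` notifications. A minimal stand-alone sketch (no GUI; the class name is hypothetical):

```python
class AdditionFilter:
    """Recompute `out` whenever an input is assigned, mimicking
    Traits-style change notifications with plain properties."""
    def __init__(self):
        self._in1 = 0.0
        self._in2 = 0.0
        self.out = 0.0

    @property
    def in1(self):
        return self._in1

    @in1.setter
    def in1(self, value):
        self._in1 = value
        self.compute()          # fires on every assignment, like _in1_changed

    @property
    def in2(self):
        return self._in2

    @in2.setter
    def in2(self, value):
        self._in2 = value
        self.compute()

    def compute(self):
        self.out = self._in1 + self._in2

f = AdditionFilter()
f.in1 = 1.5
f.in2 = 2.5
print(f.out)  # 4.0
```

Traits adds the GUI generation and the observer wiring for free; the point here is only that `out` stays consistent with the inputs without any manual update call.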
> > This will really make your code simpler and more readable, I think. > Exactly what is needed in such an application, I would say ! From the table of contents of yout tutorial, I had come to the same conclusion, so I will read it with attention - be cause, there is a part on threads too ;) - Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon May 14 04:16:17 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 14 May 2007 17:16:17 +0900 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <200705141601.45532.sim@klubko.net> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <200705141513.40916.sim@klubko.net> <46481369.6020309@ar.media.kyoto-u.ac.jp> <200705141601.45532.sim@klubko.net> Message-ID: <46481AD1.9050004@ar.media.kyoto-u.ac.jp> Petr ?imon wrote: > On Monday 14 May 2007 15:44:41 David Cournapeau wrote: >> Petr ?imon wrote: >>> On Monday 14 May 2007 14:53:32 Aldarion wrote: >>>> David Cournapeau wrote: >>>>> Dear scipy developers and users, >>>>> - For people willing to use machine learning related software in >>>>> python/scipy, what are the main requirements/concern ? (eg Data >>>>> exploration GUI, efficiency, readability of the algorithms, etc...) >>>>> >>>>> cheers, >>>>> >>>>> David >>>> to me, efficiency and readability of the algorithm. >>>> and orange impressed me. >>>> but neither orange nor numpy handle sparse matrix smoothly, >>>> for example,don't know howto SVD a large-scale sparse matrix with numpy. >>> In general most of the ML packages like weka and orange are great for >>> small projects, but since they typically load all the data into memory, >>> you are on your own with larger dataset. This is what I find to be a >>> major concern for me. >> I understand the limitation. 
I think it is important on the frontend >> side to have a global mechanism to enable ondisk data, for streaming >> data directly from files instead of loading everything in memory. But >> then, there is a problem on the back-end side: most algorithms expects >> all their input data at once. >> >> For example, one of the algorithm which will be supported is Expectation >> Maximization for mixture of Gaussian. Every iteration of the EM >> algorithm expects its data to be available; there are some extension >> possible to enable iterative EM algorithms (one implementation is >> available in sandbox.pyem, but really slow for now for no good reason >> outside lazyness). >> >> Basically, I have not thought a lot about this problem, but I think that >> it needs explicit support from the algorithm itself to be useful, >> generally. The algorithm has to be able to run several times on >> different parts of the dataset, while remembering already computed parts >> (I don't know if there is a global name for this kind of behaviour). >> >> What do you have in mind when talking about big problems ? What kind of >> size are we talking about ? >> > Yes I know it's not as easy as I would wish :), you did spell it out quite > well. E.g. I had cca 14mil 5-D vectors. Well, if by 14mil you mean 14 millions, if every point is a double complex, that is it takes 16 bytes, then it should more or less fit in memory, no ? For memory problems, I see at least 2 different cases in the machine learning context: - the data used for learning - the testing data/ data to classify The second case is really "just" a question of being careful and having a good framework: this is a constraint I will definitely try to respect. The first case is much more difficult from an algorithmic point of view, because most learning algorithms do not respect the locality property very well, at least in direct implementations. 
David From nwagner at iam.uni-stuttgart.de Mon May 14 04:33:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 May 2007 10:33:02 +0200 Subject: [SciPy-user] linalg.eigvalsh Message-ID: <46481EBE.6050704@iam.uni-stuttgart.de> Hi, linalg.eigvalsh returns eigenvalues even when the matrix is not Hermitian. >>> H(4) array([[ 0.+840000.j, 0. +0.j, 0. +0.j, 0. +0.j], [ 0. +0.j, 0. +0.j, 0. +0.j, 0. +0.j], [ 0. +0.j, 0. +0.j, 0.+840000.j, -0.-840000.j], [ 0. +0.j, 0. +0.j, -0.-840000.j, 0.+840000.j]]) >>> linalg.eigvalsh(H(4)) array([-840000., 0., 0., 840000.]) Is this behaviour expected ? I would prefer a warning similar to >>> linalg.cho_factor(H(4)) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/linalg/decomp.py", line 560, in cho_factor if info>0: raise LinAlgError, "matrix not positive definite" numpy.linalg.linalg.LinAlgError: matrix not positive definite e.g. raise LinAlgError,"matrix not Hermitian". Any comment ? Nils From david at ar.media.kyoto-u.ac.jp Mon May 14 04:36:22 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 14 May 2007 17:36:22 +0900 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <20070514073958.GD9842@clipper.ens.fr> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> Message-ID: <46481F86.4060202@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > On Mon, May 14, 2007 at 09:13:34AM +0200, Matthieu Brucher wrote: >> My view was still a GUI framework, like Orange, but with more flexibility. >> For instance, every scipy module or scikit could have a wrapper in this >> framework and be used efficiently. > > I am not to sure what you call machine learning, but the GUI of Orange > reminds me of LabView. When I am cynical, I say that machine learning is a cute name for maximization over constraints. 
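Returning to Nils's `eigvalsh` question above: the Hermitian eigensolvers read only one triangle of the matrix, which is why no error is raised for non-Hermitian input. Until the library checks this itself, a caller-side guard is easy to write (a sketch using numpy; the wrapper name is made up):

```python
import numpy as np
from numpy.linalg import eigvalsh, LinAlgError

def eigvalsh_checked(a, tol=1e-10):
    """Eigenvalues of a Hermitian matrix, refusing non-Hermitian input
    instead of silently using only one triangle."""
    a = np.asarray(a)
    if not np.allclose(a, a.conj().T, rtol=tol, atol=tol):
        raise LinAlgError("matrix not Hermitian")
    return eigvalsh(a)

h = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])  # Hermitian: eigenvalues 1 and 4
print(eigvalsh_checked(h))

bad = np.array([[0.0, 840000j], [0.0, 0.0]])  # not Hermitian
try:
    eigvalsh_checked(bad)
except LinAlgError as e:
    print(e)  # matrix not Hermitian
```

The check costs one pass over the matrix, negligible next to the eigendecomposition itself.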
More seriously, machine learning is more or less synonymous with AI (at least a pretty big subfield of AI), and tries to find methods for finding pattern in data. Concrete examples: - you have a dataset of music extracts, and want to find the extracts similar to a given melody. So you want to find data "similar" to your extract, according to some "similarity" measures. - you have a dataset of brain images with some diseases, and you want to find the disease corresponding to a new patient. > > I was thinking of how to do a labview-like UI with Python... I think > Traits[1] would be a great solution. One of the thing I am not sure about is the choice of a toolkit for the GUI. - matplotlib is great for some widget which are pretty complete (spectrogram, etc...), but not really usable for general GUI. - TraitsUI looks great, but I am a bit worried that it may be difficult to install on some platforms. - Other toolkits: problem similar to TraitsUI. Except Tkinter, which I really don't intend to use, none of the GUI toolkit for python are standards. Before choosing a toolkit, I will have to work a bit with TraitsUI. Do you know if it is easy to add matplotlib widgets inside TraitsUI ? I am always a bit concerned with toolkit choices, because on Unix, you cannot easily mix different toolkits inside one process, and this can makes things really hairy if you want to use widgets from different toolkits. 
David From sim at klubko.net Mon May 14 05:11:38 2007 From: sim at klubko.net (Petr =?utf-8?q?=C5=A0imon?=) Date: Mon, 14 May 2007 17:11:38 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481AD1.9050004@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <200705141601.45532.sim@klubko.net> <46481AD1.9050004@ar.media.kyoto-u.ac.jp> Message-ID: <200705141711.39829.sim@klubko.net> On Monday 14 May 2007 16:16:17 David Cournapeau wrote: > Petr ?imon wrote: > > On Monday 14 May 2007 15:44:41 David Cournapeau wrote: > >> Petr ?imon wrote: > >>> On Monday 14 May 2007 14:53:32 Aldarion wrote: > >>>> David Cournapeau wrote: > >>>>> Dear scipy developers and users, > >>>>> - For people willing to use machine learning related software in > >>>>> python/scipy, what are the main requirements/concern ? (eg Data > >>>>> exploration GUI, efficiency, readability of the algorithms, etc...) > >>>>> > >>>>> cheers, > >>>>> > >>>>> David > >>>> > >>>> to me, efficiency and readability of the algorithm. > >>>> and orange impressed me. > >>>> but neither orange nor numpy handle sparse matrix smoothly, > >>>> for example,don't know howto SVD a large-scale sparse matrix with > >>>> numpy. > >>> > >>> In general most of the ML packages like weka and orange are great for > >>> small projects, but since they typically load all the data into memory, > >>> you are on your own with larger dataset. This is what I find to be a > >>> major concern for me. > >> > >> I understand the limitation. I think it is important on the frontend > >> side to have a global mechanism to enable ondisk data, for streaming > >> data directly from files instead of loading everything in memory. But > >> then, there is a problem on the back-end side: most algorithms expects > >> all their input data at once. > >> > >> For example, one of the algorithm which will be supported is Expectation > >> Maximization for mixture of Gaussian. 
Every iteration of the EM > >> algorithm expects its data to be available; there are some extension > >> possible to enable iterative EM algorithms (one implementation is > >> available in sandbox.pyem, but really slow for now for no good reason > >> outside lazyness). > >> > >> Basically, I have not thought a lot about this problem, but I think that > >> it needs explicit support from the algorithm itself to be useful, > >> generally. The algorithm has to be able to run several times on > >> different parts of the dataset, while remembering already computed parts > >> (I don't know if there is a global name for this kind of behaviour). > >> > >> What do you have in mind when talking about big problems ? What kind of > >> size are we talking about ? > > > > Yes I know it's not as easy as I would wish :), you did spell it out > > quite well. E.g. I had cca 14mil 5-D vectors. > > Well, if by 14mil you mean 14 millions, if every point is a double > complex, that is it takes 16 bytes, then it should more or less fit in > memory, no ? true, I don't mean that it wouldn't fit into memory as it is, it's an implementation problem. I don't remember exactly where it failed (it's been a while since I tried), but trying to load cca 700MB csv into orange did not went quite well (and this is certainly not a very large dataset). > For memory problems, I see at least 2 different cases in > the machine learning context: > > - the data used for learning > - the testing data/ data to classify > > The second case is really "just" a question of being careful and having > a good framework: this is a constraint I will definitely try to respect. yes > The first case is much more difficult from an algorithmic point of view, > because most learning algorithms do not respect the locality property > very well, at least in direct implementations. > Yes. 
I am not trying to say 'do it' just that it is my concern :) > David Petr > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Mon May 14 05:14:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 11:14:05 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481F86.4060202@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> Message-ID: > > Before choosing a toolkit, I will have to work a bit with TraitsUI. > Do you know if it is easy to add matplotlib widgets inside TraitsUI ? I > am always a bit concerned with toolkit choices, because on Unix, you > cannot easily mix different toolkits inside one process, and this can > makes things really hairy if you want to use widgets from different > toolkits. > TraitsUI is written on top of wxPython, so as MPL has a wx backend, it should not be a problem. In fact, as I see it from Gael's tuto, the GUI could be in Python, TraitsUI used when needed, and MPL for data display. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon May 14 05:16:31 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 May 2007 11:16:31 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481F86.4060202@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> Message-ID: <20070514091627.GF9842@clipper.ens.fr> On Mon, May 14, 2007 at 05:36:22PM +0900, David Cournapeau wrote: > > I was thinking of how to do a labview-like UI with Python... 
I think > > Traits[1] would be a great solution. > One of the things I am not sure about is the choice of a toolkit for the > GUI. > - matplotlib is great for some widgets which are pretty complete > (spectrogram, etc...), but not really usable for general GUI. > - TraitsUI looks great, but I am a bit worried that it may be > difficult to install on some platforms. > - Other toolkits: problem similar to TraitsUI. Except for Tkinter, which > I really don't intend to use, none of the GUI toolkits for python are > standard. > Before choosing a toolkit, I will have to work a bit with TraitsUI. > Do you know if it is easy to add matplotlib widgets inside TraitsUI ? I > am always a bit concerned with toolkit choices, because on Unix, you > cannot easily mix different toolkits inside one process, and this can > make things really hairy if you want to use widgets from different > toolkits. TraitsUI is based on wxPython. It is a bit tedious to install so far (no formal release for a long time), but version 2 is planned for the end of the month, and it should be packageable nicely (ie uncoupled from the rest of the Enthought tools suite). MPL has a wxPython backend and does fit into TraitsUI pretty well. I have shown how to do this in my tutorial. The nice thing with using wxPython is that you can use a lot of existing code... Apart from Orange, which is based on QT :-<. Gaël From erendisaldarion at gmail.com Mon May 14 06:05:42 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Mon, 14 May 2007 18:05:42 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46481F86.4060202@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> Message-ID: <46483476.4050403@gmail.com> David Cournapeau wrote: > - TraitsUI looks great, but I am a bit worried that it may be > difficult to install on some platforms. > +1 for this. 
I can't find how to install Traits(UI) on windows or linux... > - Other toolkits: problem similar to TraitsUI. Except for Tkinter, which > I really don't intend to use, none of the GUI toolkits for python are > standard. Maybe wxPython is not as difficult to install as PyQt, though I have installed both of them before. From matthieu.brucher at gmail.com Mon May 14 06:09:50 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 12:09:50 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46483476.4050403@gmail.com> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> <46483476.4050403@gmail.com> Message-ID: > > +1 for this. I can't find how to install Traits(UI) on windows or linux... > The package on the Enthought site does not work for Windows ? I'll try to compile it for Linux this afternoon. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon May 14 06:13:23 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 May 2007 12:13:23 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <46483476.4050403@gmail.com> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> <46483476.4050403@gmail.com> Message-ID: <20070514101323.GH9842@clipper.ens.fr> On Mon, May 14, 2007 at 06:05:42PM +0800, Aldarion wrote: > David Cournapeau wrote: > > - TraitsUI looks great, but I am a bit worried that it may be > > difficult to install on some platforms. > +1 for this. I can't find how to install Traits(UI) on windows or linux... Under windows it is very easy: download the windows installer at http://code.enthought.com/traits/ Under Linux... Hum, compile from source. 
There needs to be some packaging done for Linux. I hope the release of version 2 will bring this. I see Matthieu is saying that he will compile this for Linux. If you are just planning to compile, then let me tell you, it works fine. But are you also planning to make some packages :->. I don't have the skills to do this, but I really wish someone did. Gaël From matthieu.brucher at gmail.com Mon May 14 06:31:41 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 12:31:41 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <20070514101323.GH9842@clipper.ens.fr> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> <46483476.4050403@gmail.com> <20070514101323.GH9842@clipper.ens.fr> Message-ID: > > I see Matthieu is saying that he will compile this for Linux. If you are > just planning to compile, then let me tell you, it works fine. But are you > also planning to make some packages :->. I don't have the skills to do > this, but I really wish someone did. For the moment, I don't know how to make packages... :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.gillespie at ncl.ac.uk Mon May 14 06:41:57 2007 From: c.gillespie at ncl.ac.uk (Colin Gillespie) Date: Mon, 14 May 2007 11:41:57 +0100 Subject: [SciPy-user] hyp1f1 -- Confluent hypergeometric function (1F1) In-Reply-To: <464180CF.3020804@iam.uni-stuttgart.de> References: <4640A997.3040404@ncl.ac.uk> <464180CF.3020804@iam.uni-stuttgart.de> Message-ID: <46483CF5.4020609@ncl.ac.uk> >> hyp1f1(3,4,-6) >> >> >> 0.027777777777777769 #Maple gives 0.0260564.. >> >> There seems to be some sort of rounding error. Can I request more digits (or is this a bug)? >> >> > It seems to be a bug. I found a Matlab implementation which reproduces > the Maple result. 
http://ceta.mit.edu/comp_spec_func/mchgm.m > Do I need to report the bug formally? Cheers Colin -- Dr Colin Gillespie http://www.mas.ncl.ac.uk/~ncsg3/ From erendisaldarion at gmail.com Mon May 14 07:02:40 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Mon, 14 May 2007 19:02:40 +0800 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <20070514101323.GH9842@clipper.ens.fr> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> <46483476.4050403@gmail.com> <20070514101323.GH9842@clipper.ens.fr> Message-ID: <464841D0.1080300@gmail.com> Gael Varoquaux wrote: > Under windows it is very easy: download the windows installer on > http://code.enthought.com/traits/ after intall enthon-python2.4-1.0.0.exe and enthought.traits-1.1.0.win32-py2.4.exe, can't find traitsUI for python2.4? or maybe it's enough to use traitsUI, i can't run the sample of http://www.gael-varoquaux.info/computers/traits_tutorial/ > Under Linux... Hum, compile from source. There needs some packaging to be > done for Linux. I hope the release of version 2 will bring this. > > I see Matthieu is saying that he will compile this for Linux. If you are > just planning to compile than let me tell you, it works fine. But are you > also planning to make some packages :->. I don't have the skills to do > this, but I really wish some one did. 
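For the record on the hyp1f1 thread above: the wrong 0.0277... value is characteristic of catastrophic cancellation in the direct Taylor series at negative argument. Kummer's transformation, 1F1(a;b;z) = e^z 1F1(b-a;b;-z), avoids it, and a dozen lines of pure Python reproduce the Maple/Matlab value (a sketch independent of the scipy.special implementation):

```python
import math

def hyp1f1_series(a, b, z, tol=1e-15, max_terms=500):
    """Direct Taylor series of Kummer's confluent hypergeometric 1F1(a;b;z)."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def hyp1f1_stable(a, b, z):
    """Apply Kummer's transformation 1F1(a;b;z) = e^z 1F1(b-a;b;-z)
    so the series is only ever summed with all-positive terms."""
    if z < 0:
        return math.exp(z) * hyp1f1_series(b - a, b, -z)
    return hyp1f1_series(a, b, z)

print(hyp1f1_stable(3, 4, -6))  # ≈ 0.0260564, matching Maple
```

Here 1F1(3;4;-6) becomes e^-6 · 1F1(1;4;6), whose series has no sign changes, so no cancellation occurs.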
> oh, I am a newbie of linux,almost can't install anything without apt-get or setup.py install or eazy_install blabla > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From matthieu.brucher at gmail.com Mon May 14 07:26:32 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 May 2007 13:26:32 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <464841D0.1080300@gmail.com> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <20070514073958.GD9842@clipper.ens.fr> <46481F86.4060202@ar.media.kyoto-u.ac.jp> <46483476.4050403@gmail.com> <20070514101323.GH9842@clipper.ens.fr> <464841D0.1080300@gmail.com> Message-ID: Well installing for Linux is straightforward. Just follow the intructions in the README file. Well, the problem is that then the enthought module is not in the site-package/ after that, and that the setup files at the root of the archives can't be used... Matthieu 2007/5/14, Aldarion : > > Gael Varoquaux wrote: > > Under windows it is very easy: download the windows installer on > > http://code.enthought.com/traits/ > after intall enthon-python2.4-1.0.0.exe and > enthought.traits-1.1.0.win32-py2.4.exe, > can't find traitsUI for python2.4? or maybe it's enough to use traitsUI, > i can't run the sample > of http://www.gael-varoquaux.info/computers/traits_tutorial/ > > Under Linux... Hum, compile from source. There needs some packaging to > be > > done for Linux. I hope the release of version 2 will bring this. > > > > I see Matthieu is saying that he will compile this for Linux. If you are > > just planning to compile than let me tell you, it works fine. But are > you > > also planning to make some packages :->. I don't have the skills to do > > this, but I really wish some one did. 
> > > oh, I am a newbie of Linux; I almost can't install anything without apt-get or setup.py install or easy_install
> > Gaël
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From emanuele at relativita.com Mon May 14 08:09:33 2007 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 14 May 2007 14:09:33 +0200 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: <4648517D.7060300@relativita.com>

Dear David,

I'm working in the ML area using numpy/scipy. An important requirement could be a decent abstraction of model selection that works with different classifiers/regression algorithms to find good hyperparameters. I use SVM (scipy.sandbox.svm) and Gaussian Process Regression (self-made, quite efficient and stable). I'm using the optimization package (scipy.optimize) to find hyperparameters quickly, and in both cases it works well. I can release something decent in the coming weeks. If you believe it's useful, I can contribute this code to pymachine.

Another idea: a common module to host/generate datasets for testing ML methods under different assumptions would be extremely useful.

Cheers,

Emanuele

David Cournapeau wrote:
> Dear scipy developers and users,
>
> As some of you may know already, my proposal for pymachine, a python
> toolbox for machine learning in python, has been accepted for the Summer
> of Code 2007. The detailed proposal is online [1], and wikified [2].
The > proposal timeline consists of two main steps: > - first improving existing tools related to machine learning in > scipy, such as they become part of "official scipy" (eg all tools in > toolbox going into main scipy namespace). This includes scipy.cluster, > scipy.sandbox.pyem and scipy.sandbox.svm. > - Then building from this set of toolboxes a more high level package, > in the spirit of similar softwares, such as orange or weka [3], > including some visualization tools for data exploration. This part of > the code would be put in scikits (because it will require extra > dependencies). > All development will happen in the scipy and scikits subversion > repositories. > > Now, before starting working on it, I would like to get some feedback > about what other people think is necessary with respect to those goals: > - What are the requirements for a toolbox to go from the sandbox into > the scipy namespace ? > - For people willing to use machine learning related software in > python/scipy, what are the main requirements/concern ? (eg Data > exploration GUI, efficiency, readability of the algorithms, etc...) > > cheers, > > David > > [1] http://www.ar.media.kyoto-u.ac.jp/members/david/fullproposal.html > > [2] http://projects.scipy.org/scipy/scipy/wiki/MachineLearning > > [3] orange http://magix.fri.uni-lj.si/orange/, weka: > http://www.cs.waikato.ac.nz/ml/weka/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From cookedm at physics.mcmaster.ca Mon May 14 09:50:14 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Mon, 14 May 2007 09:50:14 -0400 Subject: [SciPy-user] hyp1f1 -- Confluent hypergeometric function (1F1) In-Reply-To: <46483CF5.4020609@ncl.ac.uk> References: <4640A997.3040404@ncl.ac.uk> <464180CF.3020804@iam.uni-stuttgart.de> <46483CF5.4020609@ncl.ac.uk> Message-ID: <20070514135013.GA6557@arbutus.physics.mcmaster.ca>

On Mon, May 14, 2007 at 11:41:57AM +0100, Colin Gillespie wrote:
> >> hyp1f1(3,4,-6)
> >>
> >> 0.027777777777777769  # Maple gives 0.0260564...
> >>
> >> There seems to be some sort of rounding error. Can I request more digits (or is this a bug)?
> >
> > It seems to be a bug. I found a Matlab implementation which reproduces
> > the Maple result. http://ceta.mit.edu/comp_spec_func/mchgm.m
>
> Do I need to report the bug formally?

It helps :-) Assign it to me (cookedm), and I'll have a look at it.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From gvenezian at yahoo.com Mon May 14 11:20:48 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Mon, 14 May 2007 08:20:48 -0700 (PDT) Subject: [SciPy-user] zeros of bessel functions: jn_zeros and jnp_zeros in Windows Message-ID: <802356.50951.qm@web51003.mail.re2.yahoo.com>

I am unable to get the functions jn_zeros and jnp_zeros to work. I am running vpython on Windows XP Home Edition. I have Python 2.5, Tk version 8.4, IDLE version 1.2, NumPy 1.0.2, and SciPy 0.5.2. I used the Windows self-installers from the respective websites. I installed the programs in this order: Python 2.5, and then the VPython, Numpy, and Scipy that were specified for Python 2.5. When I call jn_zeros or jnp_zeros, an error message text box opens up in a separate window that says: "pythonw.exe has encountered a problem and needs to close. We are sorry for the inconvenience. If you were in the middle of something, the information you were working on might be lost. ..."
After more stuff, there are three buttons: Debug, Send Error Report, and Don't Send. The program I ran was

from scipy import special
## calculate values of bessel function (this works)
order=0
for x in range (0,11):
    a=special.jn(order,x)
    print x,a
##help(special)
print 'trying jn.zeros'
## this doesn't work
#calculate zeros (this doesn't work)
print special.jn_zeros(3,5)

and the output was

>>>
0 1.0
1 0.765197686558
2 0.223890779141
3 -0.260051954902
4 -0.397149809864
5 -0.177596771314
6 0.150645257251
7 0.30007927052
8 0.171650807138
9 -0.0903336111829
10 -0.245935764451
trying jn.zeros

then the program stops, the text box opens up and, when I close the text box, I get >>> . The error occurs whether I use IDLE or the Python command line, except that when I close the text box, the command-line window closes too. Someone suggested that I modify my listing to:

from scipy import *
import scipy
scipy.__version__
## calculate values of bessel function (this works)
order=0
for x in range (0,11):
    a=special.jn(order,x)
    print x,a
##help(special)
print 'trying jn.zeros'
## this doesn't work
#calculate zeros (this doesn't work)
print special.jnp_zeros(3,5)

this resulted in the same error message. The only new thing was that the response to the line scipy.__version__ when running the command-line Python was '0.5.2'. There was no response in IDLE; it just went to the next step.

Any suggestions?

Giulio

____________________________________________________________________________________
Get the free Yahoo! toolbar and rest assured with the added security of spyware protection.
http://new.toolbar.yahoo.com/toolbar/features/norton/index.php From Karl.Young at ucsf.edu Mon May 14 14:08:38 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Mon, 14 May 2007 11:08:38 -0700 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> Message-ID: <4648A5A6.7040803@ucsf.edu> Sorry if I missed a reply where this was discussed but has anyone explored the possibility of doing this in conjunction with the Orange project ? There was a discussion about this a while back and the Orange developers were fairly quick to adapt their code to support numpy arrays. It would be great if users/developers didn't have to either choose between sets of packages that had some subset of the functionality they needed or install a bunch of different, incompatible packages. It seems to me that a really important thing for a ML package to embody, given the huge variety of available algorithms, is the ability to easily add algorithms, either locally or to submit back to a repository. This would presumably be better supported by having the largest set of python ML users/developers focused on one package. >Dear scipy developers and users, > > As some of you may know already, my proposal for pymachine, a python >toolbox for machine learning in python, has been accepted for the Summer >of Code 2007. The detailed proposal is online [1], and wikified [2]. The >proposal timeline consists of two main steps: > - first improving existing tools related to machine learning in >scipy, such as they become part of "official scipy" (eg all tools in >toolbox going into main scipy namespace). This includes scipy.cluster, >scipy.sandbox.pyem and scipy.sandbox.svm. > - Then building from this set of toolboxes a more high level package, >in the spirit of similar softwares, such as orange or weka [3], >including some visualization tools for data exploration. 
This part of >the code would be put in scikits (because it will require extra >dependencies). >All development will happen in the scipy and scikits subversion >repositories. > > Now, before starting working on it, I would like to get some feedback >about what other people think is necessary with respect to those goals: > - What are the requirements for a toolbox to go from the sandbox into >the scipy namespace ? > - For people willing to use machine learning related software in >python/scipy, what are the main requirements/concern ? (eg Data >exploration GUI, efficiency, readability of the algorithms, etc...) > > cheers, > > David > >[1] http://www.ar.media.kyoto-u.ac.jp/members/david/fullproposal.html > >[2] http://projects.scipy.org/scipy/scipy/wiki/MachineLearning > >[3] orange http://magix.fri.uni-lj.si/orange/, weka: >http://www.cs.waikato.ac.nz/ml/weka/ >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From ggellner at uoguelph.ca Mon May 14 16:55:00 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 14 May 2007 16:55:00 -0400 Subject: [SciPy-user] Optimization & Parallelization of, integrate.odeint In-Reply-To: References: <4640A4D3.4040307@imtek.de> Message-ID: <20070514205500.GA16459@giton> I can't really speak to high dimensional systems, as my needs are quite plain, but I find that defining the function in fortran, while still using the odeint function makes a MASSIVE difference. 
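In outline, the reason is the per-call overhead of the Python right-hand-side callback that the integrator invokes at every step. The following is a purely illustrative pure-Python sketch (a hypothetical logistic ODE and a hand-written fixed-step RK4 loop, not the Cookbook script or odeint itself) that makes the call pattern explicit:

```python
import math

CALLS = 0

def rhs(y, t, r=2.0):
    # Logistic ODE dy/dt = r*y*(1 - y); a stand-in for the model RHS.
    # This per-step Python callback is the piece that gets replaced by a
    # compiled (e.g. f2py-wrapped Fortran) routine in the approach above.
    global CALLS
    CALLS += 1
    return r * y * (1.0 - y)

def rk4(f, y0, t0, t1, n):
    # Classic fixed-step 4th-order Runge-Kutta: four RHS calls per step.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

y_num = rk4(rhs, 0.1, 0.0, 3.0, 3000)
# Closed-form logistic solution for comparison.
y_exact = 0.1 * math.exp(6.0) / (1.0 + 0.1 * (math.exp(6.0) - 1.0))
```

Four RHS calls per step means 12,000 trips through the Python layer even in this toy run; an adaptive solver like odeint makes a comparable number of callbacks, which is why compiling just the RHS removes almost all of the overhead.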
For example the simple script I wrote (http://www.scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell, sorry for the terseness, I intend to fill it out in the next couple of days, and figure out how to include images), using the fortran subroutine on my computer makes the code over 100 times faster, (and only 10 times slower than a pure fortran version I originally wrote). I don't know if weave is different, but the overhead caused by the python data passing seems modest (mind you, only one parameter is changing in this example, I am not sure how it scales, but so far using this technique for many similar examples is always fast enough for my needs). That being said PyDSTool is currently my preferred choice for more difficult bifurcation problems! Though odeint is very nice for simple systems like this, and avoids a lot of the configuration 'overhead' that I find I need to do in PyDSTool. Gabriel On Tue, May 08, 2007 at 12:49:16PM -0400, Rob Clewley wrote: > Hi, > > Frankly, for low-dimensional systems, I find the scipy-wrapped vode > and odeint solvers to be quite fast enough for most uses. It's only if > you're going to start doing things like parameter sensitivity > calculations or large parameter sweeps that you might have to wait a > bit longer for your answers. > > Not to be flippant, but the only truly fast way to use python for > solving ODEs is to only use it for managing your system, and actually > *solve* the thing entirely in the same piece of C or Fortran code. By > which I am trying to say that for ODE problems for which you really > care about performance, the C- or Fortran-based integrators that have > been interfaced to by PyDSTool and SloppyCell are your best bet, > AFAIK. With these you can specify and manipulate your ODE definitions, > parameters etc. in python, and work with the output in python, but the > RHS functions etc. will actually get *automatically* converted into C > code for the integrators. 
So there's no call-back to Python functions, > which has a lot of inefficient overhead. > > Also, I wouldn't have thought that using weave for your RHS functions > will really make a great improvement, as from my understanding > information still has to flow back and forth to odeint via the python > layer for every function call. But I'd be interested to see a > comparative test of that, or to be educated otherwise! > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From vaftrudner at gmail.com Mon May 14 16:56:23 2007 From: vaftrudner at gmail.com (Martin Blom) Date: Mon, 14 May 2007 15:56:23 -0500 Subject: [SciPy-user] Null values in csv Message-ID: Hello, I'm trying to import a huge comma (tab really) separated value file into numpy/scipy. Trouble is, it's a mix of numerical values and 'null' values (encoded as the string 'null'). I guess I could just import it as an array with dtype=string32. However, the file is quite big and I'd like to use as little memory as possible, and worse, it seems like a really ugly solution. Since dealing with null values in experimental science must be a fairly standard problem, I was wondering what people in general do when confronted with them? Is there some standard data type that I have overlooked and should use? Are there any clever workarounds? Or am I stuck with strings? The file (which contains DNA microarray data, in case anyone wondered) looks sort of like this, but bigger: 0.021 -0.041 0.282 0.021 null 0.299 0.198 0.144 null -0.046 null null -0.081 -0.322 null -0.005 null null Thank you Martin Blom -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jdh2358 at gmail.com Mon May 14 17:09:50 2007 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 14 May 2007 16:09:50 -0500 Subject: [SciPy-user] Null values in csv In-Reply-To: References: Message-ID: <88e473830705141409q188f58cdn3bab69da5440d200@mail.gmail.com> On 5/14/07, Martin Blom wrote: > Hello, > > I'm trying to import a huge comma (tab really) separated value file into > numpy/scipy. Trouble is, it's a mix of numerical values and 'null' values > (encoded as the string 'null'). I guess I could just import it as an array > with dtype=string32. However, the file is quite big and I'd like to use as > little memory as possible, and worse, it seems like a really ugly solution. > Since dealing with null values in experimental science must be a fairly > standard problem, I was wondering what people in general do when confronted > with them? Is there some standard data type that I have overlooked and > should use? Are there any clever workarounds? Or am I stuck with strings? > > The file (which contains DNA microarray data, in case anyone wondered) looks > sort of like this, but bigger: > 0.021 -0.041 0.282 0.021 null 0.299 > 0.198 0.144 null -0.046 null null > -0.081 -0.322 null -0.005 null null When parsing CSV, I apply converter functions to each field, to convert for example datestrings->datetime.date objects and numbers -> floats. You can use None or numpy.nan for your nulls. My converter functions typically look something like those below. 
import time
import datetime

class converter:
    def __init__(self, missing='Null'):
        self.missing = missing
    def __call__(self, s):
        if s == self.missing: return None
        return s

class tostr(converter):
    'convert to string or None'
    pass

class todatetime(converter):
    'convert to a datetime or None'
    def __init__(self, fmt='%Y-%m-%d', missing='Null'):
        'use a time.strptime format string for conversion'
        converter.__init__(self, missing)
        self.fmt = fmt
    def __call__(self, s):
        if s == self.missing: return None
        tup = time.strptime(s, self.fmt)
        return datetime.datetime(*tup[:6])

class todate(converter):
    'convert to a date or None'
    def __init__(self, fmt='%Y-%m-%d', missing='Null'):
        'use a time.strptime format string for conversion'
        converter.__init__(self, missing)
        self.fmt = fmt
    def __call__(self, s):
        if s == self.missing: return None
        tup = time.strptime(s, self.fmt)
        return datetime.date(*tup[:3])

class tofloat(converter):
    'convert to a float or None'
    def __init__(self, missing='Null'):
        converter.__init__(self, missing)
    def __call__(self, s):
        if s == self.missing: return None
        return float(s)

From david at ar.media.kyoto-u.ac.jp Mon May 14 21:59:18 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 15 May 2007 10:59:18 +0900 Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <4648A5A6.7040803@ucsf.edu> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <4648A5A6.7040803@ucsf.edu> Message-ID: <464913F6.7020305@ar.media.kyoto-u.ac.jp>

Karl Young wrote:
> Sorry if I missed a reply where this was discussed but has anyone
> explored the possibility of doing this in conjunction with the Orange
> project ? There was a discussion about this a while back and the Orange
> developers were fairly quick to adapt their code to support numpy
> arrays. It would be great if users/developers didn't have to either
> choose between sets of packages that had some subset of the
> functionality they needed or install a bunch of different, incompatible
> packages.
It seems to me that a really important thing for a ML package > to embody, given the huge variety of available algorithms, is the > ability to easily add algorithms, either locally or to submit back to a > repository. This would presumably be better supported by having the > largest set of python ML users/developers focused on one package. > Short answer: orange is released under the GPL, hence code sharing is difficult with scipy. Long answer: orange, as I understand, is a C++ package with advanced scripting through python. My proposal is about implementing everything in python: this is not just to reimplement something with a new language. python is much better suited to understand algorithms, and hence improving them. Also, an important point of the proposal is to have two parts: the "high level" part where you can do things ala orange, and the "low level" part, which will be included in scipy (well, if the code is considered good enough, of course :) ). Now, I agree with your concern, and that's why part of the project is to implement an arff reader/writer to share data with weka : http://www.cs.waikato.ac.nz/~ml/weka/arff.html. Also, both parts (high level vs low level) would be as decoupled as possible, so that using the high level part with other algorithms would be possible. Cheers, David From josegomez at gmx.net Tue May 15 06:08:01 2007 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Tue, 15 May 2007 12:08:01 +0200 Subject: [SciPy-user] scipy and svm Message-ID: <20070515100801.282130@gmx.net> While on the subject of pymachine... :) I understand David will be dusting the SVM code in the sandbox. I have plans to use SVMs (for classification and regression, no less!) in the very near future (today and tomorrow). I have been toying with the svm module, and while I mostly understand how it works (mostly!), I still don't know how much it's going to change after the SoC project is underway. 
I know it's still early days, but how can I future-proof any attempts I make?

Glad to hear that you are planning to use the code. David is still trying to determine what is involved in moving the svm code out of the sandbox. However, neither of us was expecting that it would involve extensive changes to the API. Some of the things I was imagining that David would do are:
1) upgrade from libsvm-2.82 to the newest release, libsvm-2.84
2) add some more documentation
3) add some more tests
4) convert the docstrings to the new scipy docstring standard

> Also, is there any more documentation on the SVM module apart from the one included in the source?

Not that I know of. It was developed as part of last year's Summer of Code by Albert Strasheim. It didn't look like there was much useful information there, but here is his summer of code blog: http://2006.planet-soc.com/taxonomy/term/33/

It would be very helpful to have some feedback on your experience with the code to help determine what would be useful.
In particular it would be very helpful, if you could point out where extra documentation is needed. Thanks, Jarrod From bldrake at adaptcs.com Tue May 15 09:56:25 2007 From: bldrake at adaptcs.com (Barry Drake) Date: Tue, 15 May 2007 06:56:25 -0700 (PDT) Subject: [SciPy-user] Presentation of pymachine, a python package for machine learning In-Reply-To: <464913F6.7020305@ar.media.kyoto-u.ac.jp> Message-ID: <269057.19155.qm@web407.biz.mail.mud.yahoo.com> David Cournapeau wrote: Karl Young wrote: > Sorry if I missed a reply where this was discussed but has anyone > explored the possibility of doing this in conjunction with the Orange > project ? There was a discussion about this a while back and the Orange > developers were fairly quick to adapt their code to support numpy > arrays. It would be great if users/developers didn't have to either > choose between sets of packages that had some subset of the > functionality they needed or install a bunch of different, incompatible > packages. It seems to me that a really important thing for a ML package > to embody, given the huge variety of available algorithms, is the > ability to easily add algorithms, either locally or to submit back to a > repository. This would presumably be better supported by having the > largest set of python ML users/developers focused on one package. > Short answer: orange is released under the GPL, hence code sharing is difficult with scipy. Long answer: orange, as I understand, is a C++ package with advanced scripting through python. My proposal is about implementing everything in python: this is not just to reimplement something with a new language. python is much better suited to understand algorithms, and hence improving them. Also, an important point of the proposal is to have two parts: the "high level" part where you can do things ala orange, and the "low level" part, which will be included in scipy (well, if the code is considered good enough, of course :) ). 
Now, I agree with your concern, and that's why part of the project is to implement an arff reader/writer to share data with weka: http://www.cs.waikato.ac.nz/~ml/weka/arff.html. Also, both parts (high level vs low level) would be as decoupled as possible, so that using the high level part with other algorithms would be possible.

Cheers,

David

_______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user

David,

We have a lot of ML code, new algorithms, currently implemented in Matlab. I've been wanting to write them in Python/Scipy for some time now. This project would certainly provide impetus to do that. We would definitely prefer to write them in Python code at a high level rather than using C/C++. The visualization tools would also aid us in algorithm design/improvement.

Regards,
B. L. Drake

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From giorgio.luciano at chimica.unige.it Tue May 15 11:08:57 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Tue, 15 May 2007 17:08:57 +0200 Subject: [SciPy-user] call for chapters in a book about freeware/opensource in chemometrics Message-ID: <4649CD09.7060309@chimica.unige.it>

Dear All,

Two colleagues and I are interested in writing a tutorial book about the use of opensource/freeware software in chemometrics (mainly Python oriented). I've contacted an editor at Blackwell Publishing, who told me they could be interested in it and sent me the form for submitting the "official" proposal. I would be very glad to hear from anyone who would like to write a chapter. In my opinion it would be best to have a book with lots of examples covering "simple" tasks, but I would also be glad if anyone wanted to write a tutorial chapter on less common subjects.
For any feedback, just write to me. I think this can be both a chance to spread the word about opensource/freeware in chemometrics and a way to let the audience know that it's not necessary to invest a lot of money in software to work in chemometrics. Of course, since there is not much freeware/opensource software in the field, it can also be a chance to "advertise" personally built software and to "start" something new. I hope to receive help, and I will be glad to talk with enthusiasts all around.

Giorgio

P.S.: (sorry for cross-posting)

-- -======================- Dr Giorgio Luciano Ph.D. Di.C.T.F.A. Dipartimento di Chimica e Tecnologie Farmaceutiche e Alimentari Via Brigata Salerno (ponte) - 16147 Genova (GE) - Italy email luciano at dictfa.unige.it www.chemometrics.it -======================-

From chanley at stsci.edu Tue May 15 15:59:19 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 15 May 2007 15:59:19 -0400 Subject: [SciPy-user] PyFITS 1.1 "candidate" RELEASE 2 Message-ID: <464A1117.2080700@stsci.edu>

------------------
| PYFITS Release |
------------------

Space Telescope Science Institute is pleased to announce the second candidate release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at:

http://www.stsci.edu/resources/software_hardware/pyfits/Download

If you encounter bugs, please send bug reports to "help at stsci.edu".

We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 6 months. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. We plan to release the "final" PyFITS 1.1 version in a few weeks.
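As an aside on the Usage section of this announcement: the backend-selection rule it describes can be restated in a few lines of Python. This is only an illustrative sketch of the documented behaviour, with a made-up function name, not PyFITS's actual import logic:

```python
import os

def pick_numerix(numpy_available, numarray_available, env=None):
    # Illustrative restatement of the documented rule: a single installed
    # package wins outright; with both installed, the NUMERIX environment
    # variable decides; an unset NUMERIX falls back to numarray; any other
    # value is an import-time error.
    env = os.environ if env is None else env
    if numpy_available and not numarray_available:
        return 'numpy'
    if numarray_available and not numpy_available:
        return 'numarray'
    choice = env.get('NUMERIX', 'numarray')
    if choice not in ('numpy', 'numarray'):
        raise ImportError('invalid NUMERIX value: %r' % choice)
    return choice

backend = pick_numerix(True, True, {'NUMERIX': 'numpy'})  # -> 'numpy'
```

Since NUMERIX is read at import time, it must be set before the first "import pyfits" in a session.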
-----------
| Version |
-----------

Version 1.1rc2; May 15, 2007

-------------------------------
| Major Changes since v1.1rc1 |
-------------------------------

* Fixes a bug in the creation of variable length column tables on little-endian platforms.
* The NUMPY version of PyFITS now supports memory mapped FITS files.
* PyFITS turns off the signal handling for keyboard interrupt when running in a multi-threaded application.
* Adds a new StreamingHDU class which will allow data to be streamed to a FITS file a piece at a time, instead of all at once.
* Many minor bug fixes.

-------------------------
| Software Requirements |
-------------------------

PyFITS Version 1.1rc2 REQUIRES:

* Python 2.3 or later
* NUMPY 1.0.1 (or later) or NUMARRAY

---------------------
| Installing PyFITS |
---------------------

PyFITS 1.1rc2 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing

% python setup.py install

to install it in Python's site-packages directory. Alternatively the command

% python setup.py install --local="/destination/directory/"

will install PyFITS in an arbitrary directory which should be placed on PYTHONPATH. Once numarray or numpy has been installed, then PyFITS should be available for use under Python.

-----------------
| Download Site |
-----------------

http://www.stsci.edu/resources/software_hardware/pyfits/Download

----------
| Usage |
----------

Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. Set NUMERIX to 'numpy' for the NUMPY version of pyfits. If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed the NUMERIX value is used to decide which version to import. If no NUMERIX value is set then the NUMARRAY version of PyFITS will be imported.
Anything else will raise an exception upon import. --------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like the "bleeding" edge of PyFITS can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From S.Mientki at ru.nl Wed May 16 05:38:53 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Wed, 16 May 2007 11:38:53 +0200 Subject: [SciPy-user] What can be improved ? Message-ID: <464AD12D.1030708@ru.nl> hello, I've just written a function, (with a lot of trial and error, converting strings to float, reshaping arrays etc) to read a tab delimited file, exported from Excel, and I'm glad it's working ok now. But I've the unpleasant feeling, that this function is written in a very clumsy way, so may I ask some guru for some comment about improvements. 
thanks,
Stef Mientki

# ******************************************************************************
# ******************************************************************************
def Read_SenseWear_Tab_File (filename, Print_Info = False):
    from scipy import *
    from time import strptime
    from datetime import datetime

    # open the data file and read the column names (and print if desired)
    Datafile = open(filename,'r')
    line = Datafile.readline()
    column_names = line.rstrip('\n').split('\t')
    if Print_Info:
        for items in column_names: print items

    # initialize Number of columns and an empty sample-set
    N = len(column_names)
    zero_vals = N * [0]
    SR = 5

    # read the first dataline, to determine the start time
    # (we forget this first sampleset)
    line = Datafile.readline()
    vals = line.rstrip('\n').split('\t')
    start = datetime(*strptime(vals[0][0:16], "%Y-%m-%d %H:%M")[0:6])
    prev_tyd = 0   # time of the previous sample

    # create an empty array
    data = asarray([])
    sample_reduction = asarray([])

    # read and interpret all lines in file
    for line in Datafile:
        # remove EOL, split the line on tabs
        vals = line.rstrip('\n').split('\t')

        # calculate number of minutes from start
        tyd = datetime(*strptime(vals[0][0:16], "%Y-%m-%d %H:%M")[0:6])
        s = tyd - start
        tyd = s.seconds/60 + s.days*24*60

        # if there are sample-sets missing, fill them with empty sample-sets
        # (beware of sample reduction)
        if tyd - prev_tyd > 1:
            zero_vals = (( tyd - prev_tyd )/SR) * N * [0]
            data = r_[data, zero_vals]

        prev_tyd = tyd   # remember the time of this sample-set
        vals[0] = tyd    # replace the datetime with number of minutes

        # be sure all lines are of equal length
        # (sometimes Excel omits the last columns if they are empty)
        if len(vals) < N:
            vals = vals + ( N - len(vals) )*[0]

        # replace empty strings, otherwise float conversion raises an error
        for i in range(len(vals)):
            if vals[i] == '' : vals[i] = '0'

        # convert the string vector to a float vector
        # VERY STRANGE: the next 2 operations may not be done at once
        vals = asarray(vals)
        vals = vals.astype(float)

        # append new sampleset, with a sample reduction of 5
        sample_reduction = r_[sample_reduction, vals]
        if len(sample_reduction) == SR * N:
            # reshape sample array, for easy ensemble average
            sample_reduction = sample_reduction.reshape(SR, N)
            sample_reduction = sample_reduction.mean(0)
            # add mean value of SAMPLE_REDUCTION sample-sets to the total array
            # and clear the averaging sample-set
            data = r_[data, sample_reduction]
            sample_reduction = asarray([])

    # reshape into N signal vectors
    data = data.reshape(size(data)/N, N)
    data = transpose(data)
    return data
# ******************************************************************************

Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629

From emanuelez at gmail.com Wed May 16 05:45:50 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Wed, 16 May 2007 11:45:50 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464AD12D.1030708@ru.nl> References: <464AD12D.1030708@ru.nl> Message-ID:

I'm no guru here but I think you should use the csv module: http://docs.python.org/lib/module-csv.html

Emanuele

On 5/16/07, Stef Mientki wrote:
>
> hello,
>
> I've just written a function,
> (with a lot of trial and error,
> converting strings to float, reshaping arrays etc)
> to read a tab delimited file, exported from Excel,
> and I'm glad it's working ok now.
>
> But I've the unpleasant feeling, that this function is written in a very
> clumsy way,
> so may I ask some guru for some comment about improvements.
> > thanks,
> > Stef Mientki
> >
> > [snip: code quoted in full above]
>
> Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of
> Commerce - trade register 41055629
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From s.kokot at po.opole.pl  Wed May 16 06:06:40 2007
From: s.kokot at po.opole.pl (Seweryn Kokot)
Date: Wed, 16 May 2007 12:06:40 +0200
Subject: [SciPy-user] What can be improved ?
References: <464AD12D.1030708@ru.nl>
Message-ID: <87lkfp84lr.fsf@poczta.po.opole.pl>

Stef Mientki writes:

> hello,
>
> I've just written a function,
> (with a lot of trial and error,
> converting strings to float, reshaping arrays etc)
> to read a tab delimited file, exported from Excel,
> and I'm glad it's working ok now.
>
> But I've the unpleasant feeling, that this function is written in a very
> clumsy way,
> so may I ask some guru for some comment about improvements.
> thanks,
> Stef Mientki

Just in case, for extracting data from .xls files there is a python module:
http://sourceforge.net/projects/pyexcelerator

regards,
Seweryn

From gael.varoquaux at normalesup.org  Wed May 16 07:52:11 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 16 May 2007 13:52:11 +0200
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <464AD12D.1030708@ru.nl>
References: <464AD12D.1030708@ru.nl>
Message-ID: <20070516115210.GA20852@clipper.ens.fr>

Hello Stef,

First of all, this is not a function you should be writing; there are a
lot of ways to do this with existing functions (the csv module, for
instance). However, I will go over your code in order to help you improve
your python skills (I wish some people had done this for me).

> ******************************************************************************
> def Read_SenseWear_Tab_File (filename, Print_Info = False):
>     from scipy import *
>     from time import strptime
>     # open the data file and read the column names (and print if desired)
>     Datafile = open(filename,'r')
>     line = Datafile.readline()
>     column_names = line.rstrip('\n').split('\t')
>     if Print_Info:
>         for items in column_names: print items

OK, easy enough; nothing much to say here apart from the fact that I
don't like the "for ... : ..." on one line: I find it less readable than
two lines with an indentation.

> # initialize Number of columns and an empty sample-set
> N = len(column_names)
> zero_vals = N * [0]
> SR = 5
> # read the first dataline, to determine the start time
> # (we forget this first sampleset)
> line = Datafile.readline()
> vals = line.rstrip('\n').split('\t')
> start = datetime(*strptime(vals[0][0:16], "%Y-%m-%d %H:%M")[0:6])
> prev_tyd = 0 # time of the previous sample
> # create an empty array
> data = asarray([])
> sample_reduction = asarray([])
> # read and interpretate all lines in file
> for line in Datafile:
>     # remove EOL, split the line on tabs
>     vals = line.rstrip('\n').split('\t')
>     # calculate number of minutes from start
>     tyd = datetime(*strptime(vals[0][0:16], "%Y-%m-%d %H:%M")[0:6])
>     s = tyd - start
>     tyd = s.seconds/60 + s.days*24*60
>     # if there are sample-sets missing, fill them empty sample-sets
>     # (beware of sample reduction)
>     if tyd - prev_tyd > 1:
>         zero_vals = (( tyd - prev_tyd )/SR) * N * [0]
>         data = r_[data, zero_vals]

I don't think your strategy to create the array is very efficient. It is
costly to add a line to an array, as the array is memory-contiguous
(well, OK, I am not sure it really is, but it tries to be). On the
contrary, a list is very well suited for these purposes. A python list
is a chained list; adding a term to a list is cheap. You could build up
a list of lists, call it "data", and at the end do "data = array(data)".
(Same remark for sample_reduction.)
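Concretely, the accumulation pattern would look something like this (a
quick sketch with made-up row values, not your actual parsing code):

```python
from numpy import array

data = []                          # a plain python list; appending is cheap
for i in range(4):
    vals = [float(i), 2.0 * i]     # stands in for one parsed sample-set
    data.append(vals)              # no whole-array copy here, unlike r_

data = array(data)                 # one single conversion at the end
print(data.shape)                  # -> (4, 2)
```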
> prev_tyd = tyd # remember the time of this sample-set
> vals[0] = tyd # replace the datetime with number of minutes
> # be sure all lines are of equal length
> # (sometimes Excel omits the last columns if they are empty)
> if len(vals) < N:
>     vals = vals + ( N - len(vals) )*[0]
> # replace empty strings, otherwise float conversion raises an error
> for i in range(len(vals)):
>     if vals[i] == '' : vals[i] = '0'

I prefer this construction:

    for i, val in enumerate(vals):
        if val == '':
            vals[i] = '0'

> # convert the string vector to a float vector
> # VERY STRANGE: the next 2 operation may not be done at once
> vals = asarray(vals)
> vals = vals.astype(float)

How about "vals = array(vals, dtype='f')"?

> # append new sampleset, with a sample reduction of 5
> sample_reduction = r_[ sample_reduction, vals ]
> if len(sample_reduction) == SR * N:
>
>     # reshape sample array, for easy ensemble average
>     sample_reduction = sample_reduction.reshape(SR, N)
>     sample_reduction = sample_reduction.mean(0)
>     # add mean value of SAMPLE_REDUCTION sample-sets to the total array
>     # and clear the averaging sample-set
>     data = r_[data, sample_reduction]
>     sample_reduction = asarray([])
> # reshape into N signal vectors
> data = data.reshape(size(data)/N,N)
> data = transpose(data)
> return data

Apart from this, I think your code is fine. Of course, using an object
that processes lines and builds up your array, instead of putting all
the code in the big for loop, would be much prettier (and easier to
test), but this does not matter for a simple application like yours.

HTH,
Gaël

From fredmfp at gmail.com  Wed May 16 08:58:36 2007
From: fredmfp at gmail.com (fred)
Date: Wed, 16 May 2007 14:58:36 +0200
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <20070516115210.GA20852@clipper.ens.fr>
References: <464AD12D.1030708@ru.nl> <20070516115210.GA20852@clipper.ens.fr>
Message-ID: <464AFFFC.3090309@gmail.com>

Gael Varoquaux a écrit :
> Hello Stef,
>
> I prefer this construction:
>     for i, val in enumerate(vals):
>         if val == '':
>             vals[i] = '0'

My very 2 cents:

if val == '':
is equivalent to
if not val:

(an empty string or an empty list evaluates to False)

-- 
http://scipy.org/FredericPetit

From waterbug at pangalactic.us  Wed May 16 09:10:05 2007
From: waterbug at pangalactic.us (Stephen Waterbury)
Date: Wed, 16 May 2007 09:10:05 -0400
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <464AFFFC.3090309@gmail.com>
References: <464AD12D.1030708@ru.nl> <20070516115210.GA20852@clipper.ens.fr> <464AFFFC.3090309@gmail.com>
Message-ID: <464B02AD.9090705@pangalactic.us>

fred wrote:
> Gael Varoquaux a écrit :
>> Hello Stef,
>>
>> I prefer this construction:
>>     for i, val in enumerate(vals):
>>         if val == '':
>>             vals[i] = '0'
>
> My very 2 cents:
>
> if val == '':
> is equivalent to
> if not val:
>
> (an empty string or an empty list evaluates to False)

... which leads to a more efficient form, using a list comprehension:

    vals = [v or '0' for v in vals]

Steve

From gael.varoquaux at normalesup.org  Wed May 16 10:24:02 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 16 May 2007 16:24:02 +0200
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <464AFFFC.3090309@gmail.com>
References: <464AD12D.1030708@ru.nl> <20070516115210.GA20852@clipper.ens.fr> <464AFFFC.3090309@gmail.com>
Message-ID: <20070516142356.GA27959@clipper.ens.fr>

On Wed, May 16, 2007 at 02:58:36PM +0200, fred wrote:
> My very 2 cents:
> if val == '':
> is equivalent to
> if not val:

Well, no, not really equivalent. In some cases it is useful to make the
distinction to catch errors. But here you are right.
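For instance (a tiny sketch of the difference):

```python
val = 0
assert not val      # 0 is falsy, just like ''
assert val != ''    # but it is not equal to ''

# so "if not val" would also treat 0, 0.0, None, and [] as "empty",
# which can silently hide a real error in the data
for falsy in (0, 0.0, None, []):
    assert not falsy and falsy != ''
```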
Gaël

From t_crane at mrl.uiuc.edu  Wed May 16 11:23:14 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Wed, 16 May 2007 10:23:14 -0500
Subject: [SciPy-user] possibly stupid question
Message-ID: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>

Hi,

OK this seems like a really dumb question, but here it is:

I have an array:

a = arange(1,100)

I want to split it up. I want

b = a[0:15]

c = a[15: to the end of a, inclusive]

Obviously, for the c assignment, I can't actually write what I did. If I
do this instead:

c = a[15:-1], I actually don't get the very last element.

I can do this:

c = a[15:len(a)]

But that seems cumbersome.

Is there a quick/easy or more elegant way of getting a slice of elements
that extends to the end of the array but is inclusive of that last
element?

thanks,

trevis

________________________________________________
Trevis Crane
Postdoctoral Research Assoc.
Department of Physics
University of Illinois
1110 W. Green St.
Urbana, IL 61801

p: 217-244-8652
f: 217-244-2278
e: tcrane at uiuc.edu
________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthew.brett at gmail.com  Wed May 16 11:28:51 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 16 May 2007 16:28:51 +0100
Subject: [SciPy-user] possibly stupid question
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
Message-ID: <1e2af89e0705160828v2248b79et94a0cd83949ded5c@mail.gmail.com>

Hi,

> c = a[15: to the end of a, inclusive]

I think you just want

c = a[15:]

Matthew

From t_crane at mrl.uiuc.edu  Wed May 16 11:38:18 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Wed, 16 May 2007 10:38:18 -0500
Subject: [SciPy-user] possibly stupid question
Message-ID: <9EADC1E53F9C70479BF6559370369114134444@mrlnt6.mrl.uiuc.edu>

Yeah, I thought it would be something obvious that I was missing.
thanks,
trevis

> -----Original Message-----
> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On
> Behalf Of Matthew Brett
> Sent: Wednesday, May 16, 2007 10:29 AM
> To: SciPy Users List
> Subject: Re: [SciPy-user] possibly stupid question
>
> Hi,
>
> > c = a[15: to the end of a, inclusive]
>
> I think you just want
>
> c = a[15:]
>
> Matthew
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From elcorto at gmx.net  Wed May 16 11:51:04 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Wed, 16 May 2007 17:51:04 +0200
Subject: [SciPy-user] possibly stupid question
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
Message-ID: <464B2868.3040901@gmx.net>

Trevis Crane wrote:
> Is there a quick/easy or more elegant way of getting a slice of elements
> that extends to the end of the array but is inclusive of that last element?

a = arange(1,100) gives array([1, ..., 99])
a[15:] gives you array([16, ..., 99])

See also http://numpy.scipy.org//numpy.pdf. This is rather old (Numeric
tut) but covers also the slicing stuff.

-- 
cheers,
steve

Random number generation is the art of producing pure gibberish as
quickly as possible.

From cookedm at physics.mcmaster.ca  Wed May 16 11:54:09 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 16 May 2007 11:54:09 -0400
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <464AD12D.1030708@ru.nl>
References: <464AD12D.1030708@ru.nl>
Message-ID: <20070516155409.GA8653@arbutus.physics.mcmaster.ca>

On Wed, May 16, 2007 at 11:38:53AM +0200, Stef Mientki wrote:
> hello,
>
> I've just written a function,
> (with a lot of trial and error,
> converting strings to float, reshaping arrays etc)
> to read a tab delimited file, exported from Excel,
> and I'm glad it's working ok now.
> > But I've the unpleasant feeling, that this function is written in a very
> > clumsy way,
> > so may I ask some guru for some comment about improvements.

First thing I would do would be to look at the csv module included with
Python :) That'll handle most of the parsing of the file, and you then
just have to worry about converting the columns from strings to arrays.

Also,
- from ... import * in functions is deprecated. You'll get a warning.
- instead of appending to arrays using r_, build a list and convert it
  to an array at the end.

import csv
from datetime import datetime
from time import strptime

import numpy as np

def readSenseWearTabFile(filename, print_info=False):
    fo = open(filename, 'rb')
    reader = csv.reader(fo, dialect='excel-tab')
    column_names = reader.next()
    if print_info:
        for items in column_names:
            print items
    N = len(column_names)
    SR = 5
    start_time = reader.next()[0]
    start = datetime(*strptime(start_time[0:16], "%Y-%m-%d %H:%M")[0:6])
    prev_tyd = 0   # time of the previous sample
    data = []
    for vals in reader:
        # calculate number of minutes from start
        tyd = datetime(*strptime(vals[0][0:16], "%Y-%m-%d %H:%M")[0:6])
        s = tyd - start
        tyd = s.seconds/60 + s.days*24*60
        # if there are sample-sets missing, fill them with empty sample-sets
        # (beware of sample reduction)
        if tyd - prev_tyd > 1:
            d = (tyd - prev_tyd) // SR
            zero_vals = [[0] * N] * d
            data.extend(zero_vals)
        prev_tyd = tyd   # remember the time of this sample-set
        vals[0] = tyd    # replace the datetime with number of minutes
        # be sure all lines are of equal length
        # (sometimes Excel omits the last columns if they are empty)
        while len(vals) < N:
            vals.append(0)
        def sfloat(s):
            if s == '':
                return 0.0
            else:
                return float(s)
        data.append([sfloat(v) for v in vals])
    data = np.array(data, dtype=float).T
    return data

> > thanks,
> > Stef Mientki
> >
> > [snip: code quoted in full above]
>
> Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of
> Commerce - trade register 41055629
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From charles.hebert at gmail.com  Wed May 16 11:36:52 2007
From: charles.hebert at gmail.com (Charles Hebert)
Date: Wed, 16 May 2007 17:36:52 +0200
Subject: [SciPy-user] possibly stupid question
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142ECC@mrlnt6.mrl.uiuc.edu>
Message-ID: <464B2514.5090008@gmail.com>

Try :

>>> import numpy
>>> a = numpy.arange(100)
>>> b = a[:14]
>>> b
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
>>> c = a[15:]
>>> c
array([15, 16, 17, ..., 95, 96, 97, 98, 99])

C.

Trevis Crane a écrit :
> Hi,
>
> OK this seems like a really dumb question, but here it is:
>
> I have an array:
>
> a = arange(1,100)
>
> I want to split it up. I want
>
> b = a[0:15]
>
> c = a[15: to the end of a, inclusive]
>
> Obviously, for the c assignment, I can't actually write what I did. If I do
> this instead:
>
> c = a[15:-1], I actually don't get the very last element.
> I can do this:
>
> c = a[15:len(a)]
>
> But that seems cumbersome.
>
> Is there a quick/easy or more elegant way of getting a slice of
> elements that extends to the end of the array but is inclusive of that
> last element?
>
> thanks,
>
> trevis

From mforbes at physics.ubc.ca  Wed May 16 13:03:10 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Wed, 16 May 2007 10:03:10 -0700
Subject: [SciPy-user] What can be improved ?
In-Reply-To: <20070516115210.GA20852@clipper.ens.fr>
References: <464AD12D.1030708@ru.nl> <20070516115210.GA20852@clipper.ens.fr>
Message-ID: 

>> On 16 May 2007, at 4:52 AM, Gael Varoquaux wrote:
...
> ... A python list is a chained list. Adding term to a list is
> cheap. You could build up a list
> of lists, call it "data", and at the end do "data = array(data)".
> (same remark for sample_reduction).

I don't think this is actually true with most python implementations.
I believe that lists are implemented as contiguous arrays:

http://www.python.org/doc/faq/general/#how-are-lists-implemented

However, lists are implemented efficiently for amortized insertion, so
this is probably still a good recommendation.

Michael.

From grumm at stsci.edu  Wed May 16 15:31:02 2007
From: grumm at stsci.edu (David Grumm)
Date: Wed, 16 May 2007 15:31:02 -0400
Subject: [SciPy-user] median filter with clipping
Message-ID: <464B5BF6.6050007@stsci.edu>

Hello - I'm interested in finding a Python routine (or C routine I can
call from Python) that calculates a multi-dimensional median filter of
an array while allowing pixels below a specified value to be excluded.
In iraf there is a function median() which has such a parameter
"zloreject", which causes all pixels having values below zloreject to be
ignored when calculating the median for a given pixel. Median_filter()
in SciPy's ndimage calculates a median filter, but doesn't allow
exclusion of pixels. Any thoughts are appreciated.
Thanks,
Dave Grumm

From Glen.Mabey at swri.org  Wed May 16 16:19:40 2007
From: Glen.Mabey at swri.org (Glen W. Mabey)
Date: Wed, 16 May 2007 15:19:40 -0500
Subject: [SciPy-user] median filter with clipping
In-Reply-To: <464B5BF6.6050007@stsci.edu>
References: <464B5BF6.6050007@stsci.edu>
Message-ID: <20070516201940.GB27364@bams.ccf.swri.edu>

On Wed, May 16, 2007 at 03:31:02PM -0400, David Grumm wrote:
> I'm interested in finding a Python routine (or C routine I can call from
> Python) that calculates a multi-dimensional median filter of an array
> while allowing pixels below a specified value to be excluded. In iraf
> there is a function median() which has such a parameter "zloreject",
> which causes all pixels having values below zloreject to be ignored when
> calculating the median for a given pixel. Median_filter() in SciPy's
> ndimage calculates a median filter, but doesn't allow exclusion of
> pixels. Any thoughts are appreciated.

I have been doing precisely that with masked arrays (numpy.ma). There
is also a maskedarray module in the scipy sandbox, but I haven't tried
it out.

data_ma = numpy.ma.array( data, mask=(data<zloreject) )
med_res = numpy.median( data_ma.compressed() )

From pgmdevlist at gmail.com  Wed May 16 17:54:42 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 16 May 2007 17:54:42 -0400
Subject: [SciPy-user] median filter with clipping
In-Reply-To: <20070516201940.GB27364@bams.ccf.swri.edu>
References: <464B5BF6.6050007@stsci.edu> <20070516201940.GB27364@bams.ccf.swri.edu>
Message-ID: <200705161754.42813.pgmdevlist@gmail.com>

On Wednesday 16 May 2007 16:19:40 Glen W. Mabey wrote:
> On Wed, May 16, 2007 at 03:31:02PM -0400, David Grumm wrote:
> > I'm interested in finding a Python routine (or C routine I can call from
> > Python) that calculates a multi-dimensional median filter of an array
>
> I have been doing precisely that with masked arrays (numpy.ma). There
> is also a maskedarray module in the scipy sandbox, but I haven't tried
> it out.
>
> data_ma = numpy.ma.array( data, mask=(data<zloreject) )
> med_res = numpy.median( data_ma.compressed() )

OK, this is down my alley: You should be careful w/ the
med_res = numpy.median( data_ma.compressed() )

data_ma.compressed() will remove all the masked values of your array,
but it will also flatten it: in other terms, if you're interested in
computing the median per rows or columns, you should think twice (and
use the combination of an intermediary solution that works w/ 1D data
and the numpy.apply_along_axis function).

The maskedarray package in the sandbox is still a work in progress,
even if it's quite stable already. It should be easier to use than the
original numpy.core.ma module, as it implements masked_arrays as a
subclass of ndarrays. In this package, you'll find a mstats module:
it's pretty empty right now, but it has already functions for
quantiles and medians for 1D and 2D arrays.

From Glen.Mabey at swri.org  Wed May 16 18:04:35 2007
From: Glen.Mabey at swri.org (Glen W. Mabey)
Date: Wed, 16 May 2007 17:04:35 -0500
Subject: [SciPy-user] median filter with clipping
In-Reply-To: <200705161754.42813.pgmdevlist@gmail.com>
References: <464B5BF6.6050007@stsci.edu> <20070516201940.GB27364@bams.ccf.swri.edu> <200705161754.42813.pgmdevlist@gmail.com>
Message-ID: <20070516220435.GD27364@bams.ccf.swri.edu>

On Wed, May 16, 2007 at 05:54:42PM -0400, Pierre GM wrote:
> The maskedarray package in the sandbox is still a work in progress, even if
> it's quite stable already. It should be easier to use than the original
> numpy.core.ma module, as it implements masked_arrays as a subclass of
> ndarrays.

How far away is maskedarray from being able to replace numpy.ma?
Glen

From pgmdevlist at gmail.com  Wed May 16 18:28:41 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 16 May 2007 18:28:41 -0400
Subject: [SciPy-user] median filter with clipping
In-Reply-To: <20070516220435.GD27364@bams.ccf.swri.edu>
References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu>
Message-ID: <200705161828.41791.pgmdevlist@gmail.com>

> How far away is maskedarray from being able to replace numpy.ma?

So far, it does everything that numpy.core.ma does, with I believe more
flexibility and some additional features (hard/soft mask, easy
subclassing...). Personally, I stopped using numpy.core.ma completely
(except for test purposes), and I even managed to convince another user
to switch to this module for his own work (cf the TimeSeries package).

Of course, I expect that there are still some bugs here and there, but
I'm working on it (when I find them). It's a tad slower than
numpy.core.ma, but that's a side effect of the flexibility. In the long
term, there are some plans about porting the module to C, but we're
talking in quarters more than in weeks.

About when it'll be promoted outside the sandbox: well, we need more
feedback from users, as usual. I guess that's the principal stumbling
block. I'd be quite grateful if you could try it and let me know what
you think. I grew fond of this child born in pain (explaining to my
bosses why I spent so much time on something which is only remotely
connected to what I am paid to do...), so I make sure that the baby
behaves... Once again, please don't hesitate to contact me if you have
any comments/complaints/suggestions.
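For completeness, here is the clipped-median suggestion from earlier in
the thread as a minimal self-contained sketch (the data values and the
cutoff are made up):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
zloreject = 2.5                                   # illustrative cutoff
data_ma = np.ma.array(data, mask=(data < zloreject))

# compressed() drops the masked entries (and flattens the array, so
# per-row/per-column medians need apply_along_axis instead)
med_res = np.median(data_ma.compressed())
print(med_res)   # -> 4.0
```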
From amsd2013 at yahoo.com  Thu May 17 00:03:32 2007
From: amsd2013 at yahoo.com (Ali Santacruz)
Date: Wed, 16 May 2007 21:03:32 -0700 (PDT)
Subject: [SciPy-user] Finding determinant
Message-ID: <440828.5577.qm@web62311.mail.re1.yahoo.com>

Hi,

I am trying to find a determinant but I am having some problems.

When I try:

>>> import scipy as sc
>>> from scipy import linalg
>>> matr = sc.array([[1.1,1.9],[1.9,3.5]])
>>> matr
array([[ 1.1,  1.9],
       [ 1.9,  3.5]])
>>> matr2 = linalg.det(matr)

Python program is closed with the message: "PYTHON~1.EXE has encountered
a problem and needs to close. ..."

I have installed scipy-0.5.2.win32-py2.5 for Python 2.5 (although I also
tried scipy 0.5.2 for Python 2.4 obtaining the same error), under
Windows XP SP1.

Is there any other way to find the determinant of a square matrix? Any
function in another package?

Any help is appreciated,

Ali S.

__________________________________________________
Correo Yahoo!
Space for all your messages, antivirus and antispam, free!
Sign up now - http://correo.espanol.yahoo.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wbaxter at gmail.com  Thu May 17 00:10:14 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Thu, 17 May 2007 13:10:14 +0900
Subject: [SciPy-user] Finding determinant
In-Reply-To: <440828.5577.qm@web62311.mail.re1.yahoo.com>
References: <440828.5577.qm@web62311.mail.re1.yahoo.com>
Message-ID: 

My guess is you need to install a more recent version of Numpy too.
--bb

On 5/17/07, Ali Santacruz wrote:
> > Hi,
> >
> > I am trying to find a determinant but I am having some problems,
> >
> > When I try:
> >
> > >>> import scipy as sc
> > >>> from scipy import linalg
> > >>> matr = sc.array([[1.1,1.9],[1.9,3.5]])
> > >>> matr
> > array([[ 1.1,  1.9],
> >        [ 1.9,  3.5]])
> > >>> matr2 = linalg.det(matr)
> >
> > Python program is closed with the message: "PYTHON~1.EXE has encountered a
> > problem and needs to close. ..."
> > I have installed scipy-0.5.2.win32-py2.5 for Python 2.5 (although I also > tried scipy 0.5.2 for Python 2.4 obtaining the same error), under Windows > XP SP1, > > Is there any other way to find the determinant of a square matrix? any > function in other package? > > Any help is appreciated, > > Ali S. > > > > > __________________________________________________ > Correo Yahoo! > Espacio para todos tus mensajes, antivirus y antispam ?gratis! > Reg?strate ya - http://correo.espanol.yahoo.com/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Thu May 17 00:40:44 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 16 May 2007 23:40:44 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <440828.5577.qm@web62311.mail.re1.yahoo.com> Message-ID: That seems like a good guess. In my experience, most violent crashes from simple things come from incompatible versions of scipy and numpy. I have no problem executing your code. What do you get for In [1]: scipy.__version__ Out[1]: '0.5.2' In [2]: numpy.__version__ Out[2]: '1.0.2' Ryan On 5/16/07, Bill Baxter wrote: > My guess is you need to install a more recent version of Numpy too. > --bb > > > On 5/17/07, Ali Santacruz wrote: > > > > > > > > > > Hi, > > > > I am trying to find a determinant but I am having some problems, > > > > When I try: > > > > >>> import scipy as sc > > >>> from scipy import linalg > > >>> matr = sc.array([[1.1,1.9 ],[1.9,3.5]]) > > >>> matr > > array([[ 1.1, 1.9], > > [ 1.9, 3.5]]) > > >>> matr2 = linalg.det(matr) > > > > Python program is closed with the message: "PYTHON~1.EXE has encountered a > problem and needs to close. ..." 
> > > > I have installed scipy-0.5.2.win32-py2.5 for Python 2.5 (although I also > tried scipy 0.5.2 for Python 2.4 obtaining the same error), under Windows XP > SP1, > > > > Is there any other way to find the determinant of a square matrix? any > function in other package? > > > > Any help is appreciated, > > > > Ali S. > > > > > > > > > > __________________________________________________ > > Correo Yahoo! > > Espacio para todos tus mensajes, antivirus y antispam ?gratis! > > Reg?strate ya - http://correo.espanol.yahoo.com/ > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Thu May 17 01:08:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 May 2007 00:08:36 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <440828.5577.qm@web62311.mail.re1.yahoo.com> Message-ID: <464BE354.6030201@gmail.com> Ryan Krauss wrote: > That seems like a good guess. In my experience, most violent crashes > from simple things come from incompatible versions of scipy and numpy. > I have no problem executing your code. What do you get for > In [1]: scipy.__version__ > Out[1]: '0.5.2' > > In [2]: numpy.__version__ > Out[2]: '1.0.2' Incompatible versions are checked at load time and have been for quite some time. They should no longer cause such crashes. Rather, the crash that the OP is seeing is likely to do with the ATLAS library that his scipy binary was compiled with. Some of the official Win32 binaries were compiled with an ATLAS library that uses SSE2 instructions, which are not available on all CPUs. This has caused crashes for other people, too. 
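[Editorial note, hedged: the following was not part of the thread.] For the original question — another way to get the determinant — numpy's own `linalg.det` is worth trying: in the official binaries it need not go through the same ATLAS build as `scipy.linalg`, so it may sidestep the SSE2 crash. A minimal sketch, written against current Python/numpy rather than the 2007 versions discussed in the thread:

```python
# Possible workaround: compute the determinant with numpy's own
# linalg.det instead of scipy.linalg.det. Whether this avoids the
# ATLAS/SSE2 crash depends on how the installed binaries were built.
import numpy as np

matr = np.array([[1.1, 1.9],
                 [1.9, 3.5]])
det = np.linalg.det(matr)  # ad - bc = 1.1*3.5 - 1.9*1.9 = 0.24 (up to rounding)
print(det)
```

For a 2x2 matrix the result is easy to check by hand against ad - bc.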
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Thu May 17 03:45:09 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 17 May 2007 01:45:09 -0600 Subject: [SciPy-user] median filter with clipping In-Reply-To: <200705161828.41791.pgmdevlist@gmail.com> References: <464B5BF6.6050007@stsci.edu> <200705161754.42813.pgmdevlist@gmail.com> <20070516220435.GD27364@bams.ccf.swri.edu> <200705161828.41791.pgmdevlist@gmail.com> Message-ID: <464C0805.1020308@ieee.org> Pierre GM wrote: >> How far away is maskedarray from being able to replace numpy.ma? >> > > So far, it does everything that numpy.core.ma does, with I believe more > flexibility and some additional features (hard/soft mask, easy > subclassing...). Personally, I stopped using numpy.core.ma completely (unless > for test purposes), and I even managed to convince another user to switch to > this module for his own work (cf the TimeSeries package). > > Of course, I expect that there are still some bugs here and there, but I'm > working on it (when I find them). It's a tad slower than numpy.core.ma, but > that's a side effect of the flexibility. In the long term, there are some > plans about porting the module to C, but we're talking in quarters more than > in weeks. > > About when it'll be promoted outside the sandbox: well, we need more feedback > from users, as usual. I guess that's the principal stopping block. I'd be > quite grateful if you could try it and let me know what you think. I grew > fond of this child born in pain (explaining to my bosses why I spent so much > time on something which is only remotely connected to what I paid to do...), > so I make sure that the baby behaves... > I'm inclined to move his masked array over to ma wholesale. 
The fact that Pierre sees it as his baby is very important to me. If it doesn't have significant compatibility issues than I'm all for it. I'm mainly interested in hearing how people actually using numpy.ma would respond. Perhaps we should move it over and start deprecating numpy.ma?? -Travis From s.mientki at ru.nl Thu May 17 06:11:27 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 17 May 2007 12:11:27 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464AD12D.1030708@ru.nl> References: <464AD12D.1030708@ru.nl> Message-ID: <464C2A4F.7070903@ru.nl> Thank you all, All your answers give me a better insight in a more "Pythonic" way of programming, but also raises some new questions / remarks. First what (I think) I've learned, - lists are a better object to continuously expand than arrays - alternatives to detect / convert empty strings into floats - iteration / enumeration techniques, coming from a "classical" language I've to get some more experience with it - library for direct reading of Excel files - and most important, I'm amazed about large amount of people, all gently willing to help, on this list, THANK YOU ALL !! And here are some aspects, for which I still have doubts or questions: - from .... import * the use of this construct is discouraged, and gives a warning (the only warning I get is from David ;-) Coming from Delphi, it's very common to include everything you've available, so you never have to worry missing something, and if included in the right order, you are guaranteed to have the best version of everything (due to overrides), while you're still able to use older/previous libraries, by explictly naming them. The compiler will sort out everything you don't need. So what's so different in Python, that I can't include everything that's on my PC ? - use CSV, yes but how should I've known ? I think (and it's mentioned before by others) this is one of the major problems of Python, "where should I find something ?". 
I've read the book "Learning Python", but nothing is mentioned about CSV :-( I sometimes look at the official Python guide of Guido van Rossum (not this time ;-), but there's no printable version of it. I wonder if there isn't some kind of program that can create an PDF file from such a site. It looks like the early days of Delphi, in that time you had about 10 different sites, each containing different kinds of information, none of them was complete. Nowadays, there is just 1 site for Delphi (http:\\www.torry.net\) where everything of Delphi can be found very easily (both freeware and commercial). btw MatLab also has 1 site, where everything can be found, but besides the annoying log in procedure, it can absolutely not be compared to Torry's Delphi site !! - better to use an object than a function Yes I've thought about that, but I had one major concern why I didn't choose for an object (besides the fact that in the beginning I assumed the function to be much smaller), and that concern is memory usage. I also never see something as a "closefile()" ?? But maybe my fear is totally wrong. I've never understood how objects are freed/destroyed in Python. Knowing that file reading/parsing should be eventually be performed on several hundred of files, I'm afraid that all files will be in memory. Using a function, I understand that after leaving the function, Python destroys all internal function objects and thus freeing memory (I don't know for sure if this is true). Thats why I choose for a function instead of an object. thanks again, for listening and answering all those questions of an enthusiastic newbie, cheers, Stef Mientki From elcorto at gmx.net Thu May 17 07:27:44 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 17 May 2007 13:27:44 +0200 Subject: [SciPy-user] What can be improved ? 
In-Reply-To: <464C2A4F.7070903@ru.nl> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> Message-ID: <464C3C30.5050308@gmx.net> Stef Mientki wrote:
>
> And here are some aspects, for which I still have doubts or questions:
> - from .... import *
> the use of this construct is discouraged, and gives a warning (the only
> warning I get is from David ;-)
> Coming from Delphi, it's very common to include everything you've available,
> so you never have to worry missing something,
> and if included in the right order,
> you are guaranteed to have the best version of everything (due to
> overrides),
> while you're still able to use older/previous libraries, by explicitly
> naming them.
> The compiler will sort out everything you don't need.
> So what's so different in Python, that I can't include everything that's
> on my PC ?

In most cases, import * results in poorly readable code. import * inside functions, and more generally importing *anything* inside functions, isn't a good idea, because you usually write a function in order to execute a chunk of code more than once. So if you call the func N times, all the importing will also be done N times, which is simply overhead (execution time + memory action).

A simple-minded timing example is attached, which shows the effect clearly (although not utilizing some nifty tricks offered by the timeit module). Running it (from IPython or the cmd line), I see the warning David mentioned. Maybe this was introduced with Python 2.4.x and you are on 2.3.x if you don't see it?

In [16]: %run test.py
test.py:5: SyntaxWarning: import * only allowed at module level
  def f_in():
f_in:   0.302762985229
f_out:  0.0510129928589

-- cheers, steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

-------------- next part -------------- A non-text attachment was scrubbed...
Name: test.py Type: text/x-python Size: 745 bytes Desc: not available URL: From elcorto at gmx.net Thu May 17 07:43:57 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 17 May 2007 13:43:57 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C3C30.5050308@gmx.net> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> <464C3C30.5050308@gmx.net> Message-ID: <464C3FFD.3020202@gmx.net> Steve Schmerler wrote:

> for i in xrange(nruns):
>     t0 = time()
>     for j in xrange(ncalls):
>         f_in()
>     dt[i] = time() - t0
> print "f_in: ", dt.min()
>
> for i in xrange(nruns):
>     t0 = time()
>     for j in xrange(ncalls):
>         f_out()
>     # overwrite old dt array
>     dt[i] = time() - t0
> print "f_out: ", dt.min()

Oops, something like

x = f_in()
x = f_out()

would make more sense. Not important for the timing stuff, but for completeness (without the assignment to x the log() function wouldn't get called, right?)

A note on the "compiler sorts it out" stuff: I'm not at all familiar with Delphi, so I can't tell if it's good practice to import "everything". In Python, you should only import what you actually need.

-- cheers, steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

From gael.varoquaux at normalesup.org Thu May 17 08:33:20 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 May 2007 14:33:20 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C2A4F.7070903@ru.nl> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> Message-ID: <20070517123320.GA15996@clipper.ens.fr> On Thu, May 17, 2007 at 12:11:27PM +0200, Stef Mientki wrote:
> And here are some aspects, for which I still have doubts or questions:
> - from ....
import *
> the use of this construct is discouraged, and gives a warning (the only
> warning I get is from David ;-)
> Coming from Delphi, it's very common to include everything you've available,
> so you never have to worry missing something,
> and if included in the right order,
> you are guaranteed to have the best version of everything (due to
> overrides),
> while you're still able to use older/previous libraries, by explicitly
> naming them.
> The compiler will sort out everything you don't need.
> So what's so different in Python, that I can't include everything that's
> on my PC ?

Well, I think that in Delphi, if you already have a function "max" and you try to import a module with a function "max" defined in it, Delphi is going to complain. Am I right? In Python, it is not the case, so if you have done a large number of "from ... import *", you have no way of knowing where "max" comes from. What people often do is "import numpy as N". They can then call "N.max".

> - use CSV, yes but how should I've known ?
> I think (and it's mentioned before by others) this is one of the major
> problems of Python,
> "where should I find something ?". I've read the book "Learning Python",
> but nothing is mentioned about CSV :-(

Google, I would say. Yes, there is no good solution. People are starting to work on building a help database, but it is not an easy job, and it will be a few years before it works.

> - better to use an object than a function
> Yes I've thought about that, but I had one major concern why I didn't
> choose for an object (besides the fact that in the beginning I assumed
> the function to be much smaller), and that concern is memory usage. I
> also never see something as a "closefile()" ??

You know what they say? Premature optimization is the root of all evil. Worry about efficiency only if you have an efficiency problem, and if so, do it in a scientific way: profile.
I would be surprised to find that method lookup is what slows down your code, in the case we are talking about.

> I've never understood how objects are freed/destroyed in Python.

Well, there is a good article about that: http://www.informit.com/articles/article.asp?p=453682&rl=1 Section 3 is what answers your question, but the whole article is worth reading.

HTH,

Gaël From ashashiwa at gmail.com Thu May 17 08:44:19 2007 From: ashashiwa at gmail.com (Jean-Baptiste BUTET) Date: Thu, 17 May 2007 14:44:19 +0200 Subject: [SciPy-user] [noob inside]star detection Message-ID: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1

Hi all :) I'm beginning in Python and in programming, and I've never had any image treatment lessons either. But I like it when it's difficult ;)))))))))))))

I have an astronomical picture and want to compute the average FWHM on it, so I'm searching for code that helps me with this task.

I have some ideas how to begin this... but I think there are functions in scipy that are dedicated to this. (scipy or numpy, I use both)

1) find local minima
2) fit a gaussian/PSF on it
3) compute fwhm

Steps 1) and 2) are the hard parts for me. Could you guide me?

NB: Here is my little software http://focuslinux.free.fr/pyfocus.tar.bz2 Plug in a USB camera (reflex, compact, as you want) and it will download and plot the last picture (if you've made a movie, it will give an error). The objective is to have DSLRFocus-like software under Linux.

Clear skies, JB

-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (GNU/Linux) Comment: http://firegpg.tuxfamily.org iD8DBQFGTE6BetOZWwsO2AERAjs+AKC7EaPYkRxy/tTLatPh9uKssKDJQQCfU0X8 fEWgkFCUvn8tX9sV3o6VLXc= =zqcA -----END PGP SIGNATURE----- -- http://astrolix.org association des linuxiens astronomes From steve at shrogers.com Thu May 17 08:54:11 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 17 May 2007 06:54:11 -0600 Subject: [SciPy-user] What can be improved ?
In-Reply-To: <464C2A4F.7070903@ru.nl> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> Message-ID: <464C5073.9090903@shrogers.com> Stef Mientki wrote: > And here are some aspects, for which I still have doubts or questions: > - from .... import * > the use of this construct is discouraged, and gives a warning (the only > warning I get is from David ;-) > Coming from Delphi, it's very common to include everything you've available, > so you never have to worry missing something, > and if included in the right order, > you are guaranteed to have the best version of everything (due to > overrides), > while you're still able to use older/previous libraries, by explictly > naming them. > The compiler will sort out everything you don't need. > So what's so different in Python, that I can't include everything that's > on my PC ? > You can, it just isn't efficient. It pollutes your top level name space and increases the probability of naming collisions. It increases the memory footprint unnecessarily. > - use CSV, yes but how should I've known ? > I think (and it's mentioned before by others) this is one of the major > problems of Python, > "where should I find something ?". I've read the book "Learning Python", > but nothing is mentioned about CSV :-( > While an excellent introductory text, this isn't a very good reference IMHO. Take a look at David Beazley's `Python Essential Reference `_ which has brief but clear explanations and examples and is well organized for reference. I'm looking at adding a documentation search capability for IPython to augment it's interactive help. So far I have little but good intentions and a wiki page . Ideas about what would be most useful to users, search strategies, and how best to present the results are welcome. 
# Steve From gael.varoquaux at normalesup.org Thu May 17 09:17:56 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 May 2007 15:17:56 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> Message-ID: <20070517131756.GB15996@clipper.ens.fr> On Thu, May 17, 2007 at 02:44:19PM +0200, Jean-Baptiste BUTET wrote:

> I have some ideas how to begin this... but I think there are function
> in scipy that are dedicated to this. (scipy or numpy I use both)
> 1) find local minima
> 2) fit a gaussian/PSF on it
> 3) compute fwhm
> 1) or 2) are hot for me.

I think there is no need for a fit when dealing with gaussian data: all you need to do is find the moments of the distribution. I will be lazy and steal some code I have already used to explain this; you can find it on: http://scipy.org/Cookbook/FittingData

OK, this code is 1D, and you want 2D code. I'll try not to be too lazy and give you some 2D code:

+++++++++++++++++++
from pylab import *
from numpy import *

# Create the gaussian data
gaussian = lambda x, y: 3*exp(-((30-x)**2+(20+y)**2)/20.)
Xin, Yin = mgrid[-50:51, -50:51]
data = gaussian(Xin, Yin)
matshow(data, cmap=cm.gist_earth_r)

# Now pretend we do not know how this data was created, and find the
# parameters.
total = sum(data)
X, Y = indices(data.shape)
x = sum(X*data)/total
y = sum(Y*data)/total
width_x = sqrt(abs(sum((X-x)**2*data)/total))
width_x = sqrt(abs(sum((Y-y)**2*data)/total))
max = data.max()
fit = lambda u,v : max*exp(-((u-x)**2+(v-y)**2)/(2*width**2))
matshow(fit(X, Y), cmap=cm.gist_earth_r)
show()
+++++++++++++++++++

I hope this code is understandable to you and that you can elaborate on it. If not, ask questions here.
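[Editorial note, hedged: this block is not part of the thread.] Since the original question asked for the FWHM, it is worth recording the conversion from the Gaussian width (standard deviation) that the moments give: FWHM = 2*sqrt(2 ln 2)*sigma, about 2.3548*sigma, an exact identity for Gaussians. A tiny sketch, for current Python/numpy; the helper name `width_to_fwhm` is made up here, not part of the code above:

```python
# Convert a Gaussian width (standard deviation, as estimated by a
# moments calculation or a fit) into the FWHM asked for in the thread.
# FWHM = 2*sqrt(2*ln 2)*sigma holds exactly for a Gaussian profile.
import numpy as np

def width_to_fwhm(sigma):
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

print(width_to_fwhm(20.0))  # a 20-pixel width gives a FWHM of about 47.1
```

Averaging this over the detected stars would give the "average FWHM" the original poster wants.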
-- Gaël From gael.varoquaux at normalesup.org Thu May 17 09:26:12 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 May 2007 15:26:12 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <20070517131756.GB15996@clipper.ens.fr> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> <20070517131756.GB15996@clipper.ens.fr> Message-ID: <20070517132612.GC15996@clipper.ens.fr> On Thu, May 17, 2007 at 03:17:56PM +0200, Gael Varoquaux wrote:

> OK, this code is 1D, and you want 2D code. I'll try not to be too lazy
> and give you some 2D code:

Well, there is a bug in this code. But you see the idea :->...

Gaël From ashashiwa at gmail.com Thu May 17 09:43:37 2007 From: ashashiwa at gmail.com (Jean-Baptiste BUTET) Date: Thu, 17 May 2007 15:43:37 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <20070517132612.GC15996@clipper.ens.fr> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> <20070517131756.GB15996@clipper.ens.fr> <20070517132612.GC15996@clipper.ens.fr> Message-ID: <233976e40705170643t3841b770of5af3305a6d7610c@mail.gmail.com> I've seen one bug... I've launched it and it plots 2 gaussians ;)

Do you have a Jabber ID? To speak in French ;) (easier for me to understand)

>But you see the idea :->...

Yes, I'm trying to implement it :) Thanks. JB -- http://astrolix.org association des linuxiens astronomes From gvenezian at yahoo.com Thu May 17 09:52:42 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Thu, 17 May 2007 06:52:42 -0700 (PDT) Subject: [SciPy-user] Finding determinant In-Reply-To: <464BE354.6030201@gmail.com> Message-ID: <93934.14163.qm@web51002.mail.re2.yahoo.com> This seems to be the same problem that I've been having with the zeros of the Bessel functions. I bet that if Ali Santacruz tries >>>from scipy import special >>>special.jn_zeros(3,5) he will get the same error. Is there a Windows version of scipy that is compiled with a Windows XP-compatible ATLAS library?
That would help us backward windows users. Giulio --- Robert Kern wrote: > Ryan Krauss wrote: > > That seems like a good guess. In my experience, > most violent crashes > > from simple things come from incompatible versions > of scipy and numpy. > > I have no problem executing your code. What do > you get for > > In [1]: scipy.__version__ > > Out[1]: '0.5.2' > > > > In [2]: numpy.__version__ > > Out[2]: '1.0.2' > > Incompatible versions are checked at load time and > have been for quite some > time. They should no longer cause such crashes. > > Rather, the crash that the OP is seeing is likely to > do with the ATLAS library > that his scipy binary was compiled with. Some of the > official Win32 binaries > were compiled with an ATLAS library that uses SSE2 > instructions, which are not > available on all CPUs. This has caused crashes for > other people, too. > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________Shape Yahoo! in your own image. Join our Network Research Panel today! 
http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 From gael.varoquaux at normalesup.org Thu May 17 10:31:08 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 May 2007 16:31:08 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <233976e40705170643t3841b770of5af3305a6d7610c@mail.gmail.com> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> <20070517131756.GB15996@clipper.ens.fr> <20070517132612.GC15996@clipper.ens.fr> <233976e40705170643t3841b770of5af3305a6d7610c@mail.gmail.com> Message-ID: <20070517143107.GD15996@clipper.ens.fr> On Thu, May 17, 2007 at 03:43:37PM +0200, Jean-Baptiste BUTET wrote: > have you a jabber ID ? to speak in french ;) (more comprehensive for me) Hum, no, I am to old-fashion for that, but you can always email me in private. > >But you see the idea :->... > Yes. I'm trying to implement :) OK, here is a beginning: ++++++++++++++++++++++++ from pylab import * from numpy import * # A generator of gaussian function: def gaussian(height, center_x, center_y, width_x, width_y): width_x = float(width_x) width_y = float(width_y) return lambda x,y: height*exp( -(((center_x-x)/width_x)**2+((center_y-y)/width_y)**2)/2) # Create the gaussian data Xin, Yin = mgrid[0:201, 0:201] data = gaussian(3, 100, 100, 20, 40)(Xin, Yin) matshow(data, cmap=cm.gist_earth_r) # Now pretend we do not know how this data was created, and find the # parameters. 
total = sum(data)
X, Y = indices(data.shape)
x = sum(X*data)/total
y = sum(Y*data)/total
col = data[:, int(y)]
width_x = sqrt(abs(sum((arange(col.size)-x)**2*col))/sum(col))
row = data[int(x), :]
width_y = sqrt(abs(sum((arange(row.size)-y)**2*row))/sum(row))
height = data.max()

fit = gaussian(height, x, y, width_x, width_y)

matshow(fit(X, Y), cmap=cm.gist_earth_r)
ax = gca()
text(0.9, 0.1, """
x : %.1f
y : %.1f
width_x : %.1f
width_y : %.1f""" %(x, y, width_x, width_y),
        fontsize = 16, horizontalalignment='right',
        verticalalignment='bottom', transform = ax.transAxes)
show()
++++++++++++++++++++++++

Now this won't work when your gaussian spot is partly cropped out of the window. It won't work either if it is too large. Actually I think it can only be used to get the initial parameters for an optimization routine as described on http://scipy.org/Cookbook/FittingData.

Gaël From gael.varoquaux at normalesup.org Thu May 17 10:58:20 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 May 2007 16:58:20 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <20070517143107.GD15996@clipper.ens.fr> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> <20070517131756.GB15996@clipper.ens.fr> <20070517132612.GC15996@clipper.ens.fr> <233976e40705170643t3841b770of5af3305a6d7610c@mail.gmail.com> <20070517143107.GD15996@clipper.ens.fr> Message-ID: <20070517145819.GE15996@clipper.ens.fr> OK, I worked a bit more on that to add an optimisation routine on top of the moments method, in order to get a really sturdy fit. I am sending an example of code. Of course it works really well now.
Gaël -------------- next part --------------
from numpy import *
from scipy import optimize

def gaussian(height, center_x, center_y, width_x, width_y):
    """Returns a gaussian function with the given parameters"""
    width_x = float(width_x)
    width_y = float(width_y)
    return lambda x,y: height*exp(
                -(((center_x-x)/width_x)**2+((center_y-y)/width_y)**2)/2)

def moments(data):
    """Returns (height, x, y, width_x, width_y)
    the gaussian parameters of a 2D distribution by calculating its
    moments """
    total = data.sum()
    X, Y = indices(data.shape)
    x = (X*data).sum()/total
    y = (Y*data).sum()/total
    col = data[:, int(y)]
    width_x = sqrt(abs((arange(col.size)-x)**2*col).sum()/col.sum())
    row = data[int(x), :]
    width_y = sqrt(abs((arange(row.size)-y)**2*row).sum()/row.sum())
    height = data.max()
    return height, x, y, width_x, width_y

def fitgaussian(data):
    """Returns (height, x, y, width_x, width_y)
    the gaussian parameters of a 2D distribution found by a fit"""
    params = moments(data)
    errorfunction = lambda p: ravel(gaussian(*p)(*indices(data.shape)) - data)
    p, success = optimize.leastsq(errorfunction, params)
    return p

from pylab import *
# Create the gaussian data
Xin, Yin = mgrid[0:201, 0:201]
data = gaussian(3, 100, 100, 20, 40)(Xin, Yin)
matshow(data, cmap=cm.gist_earth_r)

params = fitgaussian(data)
fit = gaussian(*params)

matshow(fit(Xin, Yin), cmap=cm.gist_earth_r)
ax = gca()
(height, x, y, width_x, width_y) = params
text(0.9, 0.1, """
x : %.1f
y : %.1f
width_x : %.1f
width_y : %.1f""" %(x, y, width_x, width_y),
        fontsize = 16, horizontalalignment='right',
        verticalalignment='bottom', transform = ax.transAxes)
show()

From amsd2013 at yahoo.com Thu May 17 12:00:34 2007 From: amsd2013 at yahoo.com (Ali Santacruz) Date: Thu, 17 May 2007 09:00:34 -0700 (PDT) Subject: [SciPy-user] Finding determinant Message-ID: <28503.80052.qm@web62309.mail.re1.yahoo.com> Hi, I updated to numpy 1.0.2 but the program still crashes, [...]
I bet that if Ali Santacruz tries >>>from scipy import special >>>special.jn_zeros(3,5) he will get the same error. [...] Yes, I got the same error. > Some of the > official Win32 binaries > were compiled with an ATLAS library that uses SSE2 > instructions, which are not > available on all CPUs. Bad news. :-( Ali S. ----- Mensaje original ---- De: Giulio Venezian Para: SciPy Users List Enviado: jueves, 17 de mayo, 2007 8:52:42 Asunto: Re: [SciPy-user] Finding determinant This seems to be the same problem that I've been having with the zeros of the Bessel functions. I bet that if Ali Santacruz tries >>>from scipy import special >>>special.jn_zeros(3,5) he will get the same error. Is there a Windows version of scipy that is compiled with a windows xp compatible ATLAS library? That would help us backward windows users. Giulio --- Robert Kern wrote: > Ryan Krauss wrote: > > That seems like a good guess. In my experience, > most violent crashes > > from simple things come from incompatible versions > of scipy and numpy. > > I have no problem executing your code. What do > you get for > > In [1]: scipy.__version__ > > Out[1]: '0.5.2' > > > > In [2]: numpy.__version__ > > Out[2]: '1.0.2' > > Incompatible versions are checked at load time and > have been for quite some > time. They should no longer cause such crashes. > > Rather, the crash that the OP is seeing is likely to > do with the ATLAS library > that his scipy binary was compiled with. Some of the > official Win32 binaries > were compiled with an ATLAS library that uses SSE2 > instructions, which are not > available on all CPUs. This has caused crashes for > other people, too. > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________Shape Yahoo! in your own image. Join our Network Research Panel today! http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user __________________________________________________ Correo Yahoo! Espacio para todos tus mensajes, antivirus y antispam ?gratis! Reg?strate ya - http://correo.espanol.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nahal at csb.utoronto.ca Thu May 17 12:04:47 2007 From: nahal at csb.utoronto.ca (Hardeep Nahal) Date: Thu, 17 May 2007 12:04:47 -0400 Subject: [SciPy-user] Scipy installation fails with error from dfftb1.f file Message-ID: <20070517120447.e9f5xifrg6tcs8og@webmail.utoronto.ca> Dear Pymc community, I'm trying to install Scipy but I get an error message that I can't seem to figure out ("CPU you selected does not support x86-64 instruction set"). I'm running Python 2.4 on Debian, and I have installed all the prerequisites (numpy, lapack library, the fftw library). After I run the "python setup.py install" command, everything goes well, until I get this error: f2py options: [] adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.4' to include_dirs. adding 'build/src.linux-x86_64-2.4/Lib/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying build/src.linux-x86_64-2.4/scipy/__config__.py -> build/lib.linux-x86_64-2.4/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer compile options: '-c' g77:f77: Lib/fftpack/dfftpack/dfftb1.f Lib/fftpack/dfftpack/dfftb1.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dfftb1.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dfftb1.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dfftb1.f:0: error: CPU you selected does not support x86-64 instruction set error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -c -c Lib/fftpack/dfftpack/dfftb1.f -o build/temp.linux-x86_64-2.4/Lib/fftpack/dfftpack/dfftb1.o" failed with exit status 1 Please note, the actual output was fairly long, so I've posted the truncated version. But if you think looking at the whole output might help, please let me know and I'll post all of it. Any suggestions would be greatly appreciated! Regards, Hardeep Nahal From robert.kern at gmail.com Thu May 17 12:19:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 May 2007 11:19:50 -0500 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C2A4F.7070903@ru.nl> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> Message-ID: <464C80A6.2090501@gmail.com> Stef Mientki wrote: > - use CSV, yes but how should I've known ? 
> I think (and it's mentioned before by others) this is one of the major > problems of Python, > "where should I find something ?". I've read the book "Learning Python", > but nothing is mentioned about CSV :-( _Learning Python_ is an old book. If you want a dead-tree book that is more or less up-to-date and covers much of the standard library, go for _Python in a Nutshell_ or the third edition of _Python Essential Reference_. > I sometimes look at the official Python guide of Guido van Rossum (not > this time ;-), but there's no printable version of it. > I wonder if there isn't some kind of program that can create a PDF file > from such a site. > It looks like the early days of Delphi, in that time you had about 10 > different sites, each containing different kinds of information, > none of them was complete. Nowadays, there is just 1 site for Delphi > (http://www.torry.net/) where everything of Delphi can be found very > easily (both freeware and commercial). > btw MatLab also has 1 site, where everything can be found, but besides > the annoying log in procedure, it can absolutely not be compared to > Torry's Delphi site !! Google for "site:docs.python.org csv", for example. > - better to use an object than a function > Yes I've thought about that, but I had one major concern why I didn't > choose for an object (besides the fact that in the beginning I assumed > the function to be much smaller), and that concern is memory usage. I > also never see something as a "closefile()" ?? You should be seeing things like f.close(). Although with CPython, file closing should happen when the refcount of the file object goes to 0, it's good form to explicitly close the file. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Thu May 17 12:27:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 May 2007 11:27:39 -0500 Subject: [SciPy-user] Scipy installation fails with error from dfftb1.f file In-Reply-To: <20070517120447.e9f5xifrg6tcs8og@webmail.utoronto.ca> References: <20070517120447.e9f5xifrg6tcs8og@webmail.utoronto.ca> Message-ID: <464C827B.1000006@gmail.com> Hardeep Nahal wrote: > Dear Pymc community, > > I'm trying to install Scipy but I get an error message that I can't > seem to figure out ("CPU you selected does not support x86-64 > instruction set"). I'm running Python 2.4 on Debian, and I have > installed all the prerequisites (numpy, lapack library, the fftw > library). After I run the "python setup.py install" command, > everything goes well, until I get this error: > > f2py options: [] > adding 'build/src.linux-x86_64-2.4/fortranobject.c' to sources. > adding 'build/src.linux-x86_64-2.4' to include_dirs. > adding 'build/src.linux-x86_64-2.4/Lib/stats/mvn-f2pywrappers.f' to sources. > building extension "scipy.ndimage._nd_image" sources > building data_files sources > running build_py > copying build/src.linux-x86_64-2.4/scipy/__config__.py -> > build/lib.linux-x86_64-2.4/scipy > running build_clib > customize UnixCCompiler > customize UnixCCompiler using build_clib > customize GnuFCompiler > customize GnuFCompiler > customize GnuFCompiler using build_clib > building 'dfftpack' library > compiling Fortran sources > Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 > -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer > compile options: '-c' > g77:f77: Lib/fftpack/dfftpack/dfftb1.f > Lib/fftpack/dfftpack/dfftb1.f:0: error: CPU you selected does not > support x86-64 > instruction set What version of g77 do you have? What processor? 
The code that adds those flags won't put in -march=opteron or -march=athlon64 unless the g77 version is >= 3.4 (presumably because g77 does not support them in older versions). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From peridot.faceted at gmail.com Thu May 17 13:42:57 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 17 May 2007 13:42:57 -0400 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C3C30.5050308@gmx.net> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> <464C3C30.5050308@gmx.net> Message-ID: On 17/05/07, Steve Schmerler wrote: > In most cases, import * results in poorly readable code. import * inside functions > and more generally importing *anything* in functions isn't a good idea because: usually > you write a function in order to execute a chunk of code more than once. So if you > call the func N times, all the importing will also be done N times which is simply > overhead (execution time + memory action). A simple-minded timing example is > attached, which shows the effect clearly (although not utilizing some nifty tricks > offered by the timeit module). Running it (from Ipython or the cmd line), I see the > warning David mentioned. Maybe this was introduced with Python 2.4.x and you are on > 2.3.x if you don't see it? A technical point: python imports are implemented to do nothing on re-import, so calling import many times is not actually a problem. What *is* a problem is "from whatever import *" in any namespace but the global one. This confuses python about namespaces (some technical problem that appeared when they introduced lexical scoping) and so python warns about it. The real problem with "from numpy import *" is that it just takes everything in numpy and plops it over top of what's already there.
Interactively this is great, it saves you typing, but in a program you're going to keep around it can be a problem: suppose I write code using a local variable arange2d, debug my program and it's working fine. Now I install a new version of numpy which has a function "arange2d"; suddenly all the references to arange2d in my code mean something different! There are various alternatives: "import numpy" -> "numpy.sin" "import numpy as N" -> "N.sin" "from numpy import sin" -> "sin" More generally, "from whatever import *" is a problem because the python library was designed with the "import name" mechanism in mind. So functions in different modules often have the same name (for example md5.new and sha.new), and will clash if you do "from whatever import *". Even if they don't clash, "csv.reader" is a lot more intelligible than "reader". As for why not just import everything on your PC, there are lots of reasons. Python has a *lot* of standard libraries, and loading all the code for that takes time and memory and just isn't needed. Python also has a lot of third-party libraries, which are not always easy to locate automatically. More to the point, by saying "import numpy" you are telling the python interpreter and readers of your code "this program needs numpy to run"; it may mean they have to go out and install it, it may mean they'll need to adapt the code if a new version of numpy is released. Being explicit about dependencies is a very good idea. Anne
import * inside functions >> and more generally importing *anything* in functions isn't a good idea because: usually >> you write a function in order to execute a chunk of code more than once. So if you >> call the func N times, all the importing will also be done N times which is simply >> overhead (execution time + memory action). A simple-minded timing example is >> attached, which shows the effect clearly (although not utilizing some nifty tricks >> offered by the timeit module). Running it (from Ipython or the cmd line), I see the >> warning David mentioned. Maybe this was introduced with Python 2.4.x and you are on >> 2.3.x if you don't see it? > > A technical point: python imports are implemented to do nothing on > re-import, so calling import many times is not actually a problem. Argh, yes. I was (partly) aware of that. Should have analysed my test times in more detail :) To be honest, I'm not a total expert in those nifty technical details. Changing this test script of mine to compare (1) import * in a function, (2) import module in a function and (3) import nothing in a function, I get from_mod_import_all: dt: [ 1.32939887 0.31759715 0.32567406 0.32297516 0.32898188 0.33600211 0.33455801 0.32793999 0.3268559 0.32574892] min: 0.317597150803 max: 1.32939887047 mean: 0.427573204041 ----------------------- import_mod: dt: [ 0.05784607 0.05776787 0.0569551 0.05641389 0.05731797 0.0634551 0.06005907 0.05657482 0.05893087 0.05673099] min: 0.0564138889313 max: 0.0634551048279 mean: 0.0582051753998 ----------------------- no_import: dt: [ 0.05400395 0.05075312 0.051965 0.05418086 0.050946 0.05176306 0.05257511 0.05177307 0.05184007 0.05447507] min: 0.0507531166077 max: 0.054475069046 mean: 0.0524275302887 So, in the from_mod_import_all case, the first "from scipy import *" is slow, the others are faster (no re-import?), but are actually much slower than in the import_mod and no_import case. Hmm, why is that so? Or am I doing something wrong...
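Thinking about it some more: the re-import itself costs nothing, but the star form still has to copy every public name of the module into the importing namespace on every execution, and that copying would explain the remaining gap. A rough sketch that mimics the copy by hand (the helper names below are mine, not from the attached script, and the dict comprehension only stands in for the real import machinery; note that in modern Python a star import inside a function is not even allowed):

```python
import sys
import timeit

def star_like_import():
    # "from math import *" rebinds every public name on each execution;
    # this hand-written copy only mimics that cost
    math = sys.modules.get("math") or __import__("math")
    names = {k: v for k, v in vars(math).items() if not k.startswith("_")}
    return names["sqrt"](2.0)

def plain_import():
    import math  # after the first time, just a sys.modules lookup
    return math.sqrt(2.0)

t_star = timeit.timeit(star_like_import, number=20000)
t_plain = timeit.timeit(plain_import, number=20000)
```

If this is right, the per-call cost of the star form grows with the number of public names in the module, and scipy exports a lot of them.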
-- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams -------------- next part -------------- A non-text attachment was scrubbed... Name: test.py Type: text/x-python Size: 1568 bytes Desc: not available URL: From ryanlists at gmail.com Thu May 17 15:56:48 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 May 2007 14:56:48 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: <28503.80052.qm@web62309.mail.re1.yahoo.com> References: <28503.80052.qm@web62309.mail.re1.yahoo.com> Message-ID: What CPU are you using? On 5/17/07, Ali Santacruz wrote: > > Hi, > > I updated to numpy 1.0.2 but the program still crashes, > > [...] I bet that if Ali Santacruz tries > >>>from scipy import special > >>>special.jn_zeros(3,5) > he will get the same error. [...] > > Yes, I got the same error. > > > Some of the > > official Win32 binaries > > were compiled with an ATLAS library that uses SSE2 > > instructions, which are not > > available on all CPUs. > > Bad news. :-( > > Ali S. > > > ----- Mensaje original ---- > De: Giulio Venezian > Para: SciPy Users List > Enviado: jueves, 17 de mayo, 2007 8:52:42 > Asunto: Re: [SciPy-user] Finding determinant > > > This seems to be the same problem that I've been > having with the zeros of the Bessel functions. I bet > that if Ali Santacruz tries > >>>from scipy import special > >>>special.jn_zeros(3,5) > he will get the same error. > > Is there a Windows version of scipy that is compiled > with a windows xp compatible ATLAS library? That would > help us backward windows users. > > Giulio > > --- Robert Kern wrote: > > > Ryan Krauss wrote: > > > That seems like a good guess. In my experience, > > most violent crashes > > > from simple things come from incompatible versions > > of scipy and numpy. > > > I have no problem executing your code. 
What do > > you get for > > > In [1]: scipy.__version__ > > > Out[1]: '0.5.2' > > > > > > In [2]: numpy.__version__ > > > Out[2]: '1.0.2' > > > > Incompatible versions are checked at load time and > > have been for quite some > > time. They should no longer cause such crashes. > > > > Rather, the crash that the OP is seeing is likely to > > do with the ATLAS library > > that his scipy binary was compiled with. Some of the > > official Win32 binaries > > were compiled with an ATLAS library that uses SSE2 > > instructions, which are not > > available on all CPUs. This has caused crashes for > > other people, too. > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an > > enigma, a harmless enigma > > that is made terrible by our own mad attempt to > > interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > ____________________________________________________________________________________Shape > Yahoo! in your own image. Join our Network Research Panel today! > http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > __________________________________________________ > Correo Yahoo! > Espacio para todos tus mensajes, antivirus y antispam ?gratis! 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > >

From ashashiwa at gmail.com Thu May 17 16:40:49 2007 From: ashashiwa at gmail.com (Jean-Baptiste BUTET) Date: Thu, 17 May 2007 22:40:49 +0200 Subject: [SciPy-user] [noob inside]star detection In-Reply-To: <20070517145819.GE15996@clipper.ens.fr> References: <233976e40705170544k13c80beq624202d7cbeac66f@mail.gmail.com> <20070517131756.GB15996@clipper.ens.fr> <20070517132612.GC15996@clipper.ens.fr> <233976e40705170643t3841b770of5af3305a6d7610c@mail.gmail.com> <20070517143107.GD15996@clipper.ens.fr> <20070517145819.GE15996@clipper.ens.fr> Message-ID: <233976e40705171340u17c55d69mec286028742f6206@mail.gmail.com> [taking this off-list] Hello Gaël, and thanks for your answers. I managed to get your first examples running; among other things that let me discover plot... not bad ;) But I couldn't get the last piece of code you sent to run. I admit I don't understand how it is put together, since it looks to me like you used pointers... and I'm not that far along yet. I tried to include your code in my own code... I'm struggling a bit ;) 10 pm. After a break, I've just come back to it. I'm getting strange variable-declaration errors. (X) I took your script and ran it: [jb at localhost Desktop]$ python ./fit_gaussian.py Traceback (most recent call last): File "./fit_gaussian.py", line 45, in matshow(fit(X, Y), cmap=cm.gist_earth_r) NameError: name 'X' is not defined What's odd is that the bit of code that worked in the very first example no longer works. It must be something really silly, but I don't have enough perspective to see it. Bye, and thanks again.
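For what it's worth, a stand-alone version runs once the coordinate grids exist before the call; the traceback above simply means X was never defined. Here `gaussian2d` and the 5x5 grid are only stand-ins for the script's actual fit function and image size:

```python
import numpy as np

# X and Y are assumed to be meshgrid-style coordinate arrays;
# the NameError means they were never defined before line 45
Y, X = np.mgrid[0:5, 0:5]

def gaussian2d(x, y, x0=2.0, y0=2.0, s=1.0):
    # stand-in for the fitted model in fit_gaussian.py
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * s ** 2))

Z = gaussian2d(X, Y)
print(Z.shape)  # (5, 5)
```

If fit() in the actual script expects the grids as globals, defining them at top level before line 45 should clear the NameError.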
JB -- http://astrolix.org association of Linux-using astronomers

From s.mientki at ru.nl Thu May 17 18:39:23 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 18 May 2007 00:39:23 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <20070517123320.GA15996@clipper.ens.fr> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> <20070517123320.GA15996@clipper.ens.fr> Message-ID: <464CD99B.8080309@ru.nl> Gael Varoquaux wrote: > On Thu, May 17, 2007 at 12:11:27PM +0200, Stef Mientki wrote: > >> And here are some aspects, for which I still have doubts or questions: >> - from .... import * >> the use of this construct is discouraged, and gives a warning (the only >> warning I get is from David ;-) >> Coming from Delphi, it's very common to include everything you've available, >> so you never have to worry about missing something, >> and if included in the right order, >> you are guaranteed to have the best version of everything (due to >> overrides), >> while you're still able to use older/previous libraries, by explicitly >> naming them. >> The compiler will sort out everything you don't need. >> So what's so different in Python, that I can't include everything that's >> on my PC ? >> > > Well I think that in Delphi if you already have a function "max" and you > try to import a module with a function "max" defined in it, Delphi is > going to complain. Am I right ? Yes ;-) > Well in Python, it is not the case, so if > you have done a large number of "from ... import *", you have no way of > knowing where "max" comes from. What people often do is > "import numpy as N". They can call "N.max", afterwards. > > I get the picture, thanks. >> - use CSV, yes but how should I've known ? >> I think (and it's mentioned before by others) this is one of the major >> problems of Python, >> "where should I find something ?". I've read the book "Learning Python", >> but nothing is mentioned about CSV :-( >> > > Google, I would say. Yes there is no good solution.
People are starting > to work on building a help database, but it is not an easy job, and it > will be a few years before it works. > > Yes I agree, it's a hard job, and certainly not the funniest part. >> - better to use an object than a function >> Yes I've thought about that, but I had one major concern why I didn't >> choose for an object (besides the fact that in the beginning I assumed >> the function to be much smaller), and that concern is memory usage. I >> also never see something as a "closefile()" ?? >> > > You know what they say? Premature optimization is the root of all evil. > Worry about efficiency only if you have an efficiency problem, and if so > do it in a scientific way: profile. I would be surprised to find that > method lookup will be what slows down your code, in the case we are > talking about. > Thanks, I'll look at the profiler later on. But indeed you triggered me, even without a profiler I can do it a bit more scientifically, I've 200 files of about 1 MB each, so that's only 200 MB, no problem at all ! > >> I've never understood how objects are freed/destroyed in Python. >> > > Well there is a good article about that: > > http://www.informit.com/articles/article.asp?p=453682&rl=1 > > section 3 is what answers your question, but the whole article is worth > reading. > very much worth reading, thanks again, cheers, Stef > HTH, > > Gaël

From s.mientki at ru.nl Thu May 17 18:47:27 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 18 May 2007 00:47:27 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C80A6.2090501@gmail.com> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> <464C80A6.2090501@gmail.com> Message-ID: <464CDB7F.6050309@ru.nl> Robert Kern wrote: > Stef Mientki wrote: > >> - use CSV, yes but how should I've known ?
>> I think (and it's mentioned before by others) this is one of the major >> problems of Python, >> "where should I find something ?". I've read the book "Learning Python", >> but nothing is mentioned about CSV :-( >> > > _Learning Python_ is an old book. If you want a dead-tree book that is more or > less up-to-date and covers much of the standard library, go for _Python in a > Nutshell_ or the third edition of _Python Essential Reference_. > aha, the same remark as Gael. I'll try to get my hands on one of the new books. > >> I sometimes look at the official Python guide of Guido van Rossum (not >> this time ;-), but there's no printable version of it. >> I wonder if there isn't some kind of program that can create a PDF file >> from such a site. >> It looks like the early days of Delphi, in that time you had about 10 >> different sites, each containing different kinds of information, >> none of them was complete. Nowadays, there is just 1 site for Delphi >> (http://www.torry.net/) where everything of Delphi can be found very >> easily (both freeware and commercial). >> btw MatLab also has 1 site, where everything can be found, but besides >> the annoying log in procedure, it can absolutely not be compared to >> Torry's Delphi site !! >> > > Google for "site:docs.python.org csv", for example. > thanks for the tip, I'll remember that. cheers, Stef

From s.mientki at ru.nl Thu May 17 18:50:18 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 18 May 2007 00:50:18 +0200 Subject: [SciPy-user] What can be improved ? In-Reply-To: <464C5073.9090903@shrogers.com> References: <464AD12D.1030708@ru.nl> <464C2A4F.7070903@ru.nl> <464C5073.9090903@shrogers.com> Message-ID: <464CDC2A.7000808@ru.nl> Steven H. Rogers wrote: > Stef Mientki wrote: > >> And here are some aspects, for which I still have doubts or questions: >> - from ....
import * >> the use of this construct is discouraged, and gives a warning (the only >> warning I get is from David ;-) >> Coming from Delphi, it's very common to include everything you've available, >> so you never have to worry about missing something, >> and if included in the right order, >> you are guaranteed to have the best version of everything (due to >> overrides), >> while you're still able to use older/previous libraries, by explicitly >> naming them. >> The compiler will sort out everything you don't need. >> So what's so different in Python, that I can't include everything that's >> on my PC ? >> >> > You can, it just isn't efficient. It pollutes your top level name space > and increases the probability of naming collisions. Ok that convinces me, certainly if I want to have a workspace viewer a la MatLab. > It increases the > memory footprint unnecessarily. >> - use CSV, yes but how should I've known ? >> I think (and it's mentioned before by others) this is one of the major >> problems of Python, >> "where should I find something ?". I've read the book "Learning Python", >> but nothing is mentioned about CSV :-( >> >> > While an excellent introductory text, this isn't a very good reference > IMHO. Take a look at David Beazley's > `Python Essential Reference `_ > which has brief but clear explanations and examples and is well > organized for reference. > I'll try to get my hands on that book. > I'm looking at adding a documentation search capability for IPython to > augment its interactive help. So far I have little but good intentions > and a wiki page > . Ideas about > what would be most useful to users, search strategies, and how best to > present the results are welcome. > Good plan. But as all the information is already in the source files in a standard form, wouldn't it be possible to automatically create something like a chm help file?
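Part of this already exists in the standard library: pydoc renders the docstrings of any importable module as plain text or HTML on demand. A small sketch (the choice of csv here is arbitrary):

```python
import pydoc

# render the csv module's docstrings as plain text;
# pydoc.writedoc("csv") would instead emit a csv.html file
text = pydoc.render_doc("csv", renderer=pydoc.plaintext)
print(text.splitlines()[0])  # e.g. "Python Library Documentation: module csv"
```

pydoc's HTML output is one file per module, so stitching it into a single searchable help file would still take some glue, but the raw material is all there in the docstrings.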
cheers, Stef

From gvenezian at yahoo.com Thu May 17 21:40:32 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Thu, 17 May 2007 18:40:32 -0700 (PDT) Subject: [SciPy-user] Finding determinant In-Reply-To: Message-ID: <169179.40733.qm@web51008.mail.re2.yahoo.com> My processor is an AMD Athlon XP 3000+ version x86 family 6 model 10 and I'm running in Windows XP version 5.1.2600. I installed the exact same files in an Intel-based computer at work and the commands run fine. I guess the fault is in the AMD chip. Is there a different version of scipy for the AMD? Or is it going to entail a gruesome step by step installation of separate components? Giulio --- Ryan Krauss wrote: > What CPU are you using? > > On 5/17/07, Ali Santacruz > wrote: > > > > Hi, > > > > I updated to numpy 1.0.2 but the program still > crashes, > > > > [...] I bet that if Ali Santacruz tries > > >>>from scipy import special > > >>>special.jn_zeros(3,5) > > he will get the same error. [...] > > > > Yes, I got the same error. > > > > > Some of the > > > official Win32 binaries > > > were compiled with an ATLAS library that uses > SSE2 > > > instructions, which are not > > > available on all CPUs. > > > > Bad news. :-( > > > > Ali S. > > > > > > ----- Mensaje original ---- > > De: Giulio Venezian > > Para: SciPy Users List > > Enviado: jueves, 17 de mayo, 2007 8:52:42 > > Asunto: Re: [SciPy-user] Finding determinant > > > > > > This seems to be the same problem that I've been > > having with the zeros of the Bessel functions. I > bet > > that if Ali Santacruz tries > > >>>from scipy import special > > >>>special.jn_zeros(3,5) > > he will get the same error. > > > > Is there a Windows version of scipy that is > compiled > > with a windows xp compatible ATLAS library? That > would > > help us backward windows users. > > > > Giulio > > > > --- Robert Kern wrote: > > > > > Ryan Krauss wrote: > > > > That seems like a good guess.
In my > experience, > > > most violent crashes > > > > from simple things come from incompatible > versions > > > of scipy and numpy. > > > > I have no problem executing your code. What > do > > > you get for > > > > In [1]: scipy.__version__ > > > > Out[1]: '0.5.2' > > > > > > > > In [2]: numpy.__version__ > > > > Out[2]: '1.0.2' > > > > > > Incompatible versions are checked at load time > and > > > have been for quite some > > > time. They should no longer cause such crashes. > > > > > > Rather, the crash that the OP is seeing is > likely to > > > do with the ATLAS library > > > that his scipy binary was compiled with. Some of > the > > > official Win32 binaries > > > were compiled with an ATLAS library that uses > SSE2 > > > instructions, which are not > > > available on all CPUs. This has caused crashes > for > > > other people, too. > > > > > > -- > > > Robert Kern > > > > > > "I have come to believe that the whole world is > an > > > enigma, a harmless enigma > > > that is made terrible by our own mad attempt to > > > interpret it as though it had > > > an underlying truth." > > > -- Umberto Eco > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > ____________________________________________________________________________________Shape > > Yahoo! in your own image. Join our Network > Research Panel today! > > > http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > __________________________________________________ > > Correo Yahoo! > > Espacio para todos tus mensajes, antivirus y > antispam ?gratis! 
> > Reg?strate ya - http://correo.espanol.yahoo.com/ > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________Looking for a deal? Find great prices on flights and hotels with Yahoo! FareChase. http://farechase.yahoo.com/ From amsd2013 at yahoo.com Thu May 17 21:57:06 2007 From: amsd2013 at yahoo.com (Ali Santacruz) Date: Thu, 17 May 2007 18:57:06 -0700 (PDT) Subject: [SciPy-user] Finding determinant Message-ID: <223255.66194.qm@web62302.mail.re1.yahoo.com> > What CPU are you using? It is a mobile AMD Athlon XP2500+ The Windows version is Version 5.1 (Build 2600.xpsp2.050301-1526: Service Pack 1) Ali S. ----- Mensaje original ---- De: Ryan Krauss Para: SciPy Users List Enviado: jueves, 17 de mayo, 2007 14:56:48 Asunto: Re: [SciPy-user] Finding determinant > What CPU are you using? On 5/17/07, Ali Santacruz wrote: > > Hi, > > I updated to numpy 1.0.2 but the program still crashes, > > [...] I bet that if Ali Santacruz tries > >>>from scipy import special > >>>special.jn_zeros(3,5) > he will get the same error. [...] > > Yes, I got the same error. > > > Some of the > > official Win32 binaries > > were compiled with an ATLAS library that uses SSE2 > > instructions, which are not > > available on all CPUs. > > Bad news. :-( > > Ali S. > > > ----- Mensaje original ---- > De: Giulio Venezian > Para: SciPy Users List > Enviado: jueves, 17 de mayo, 2007 8:52:42 > Asunto: Re: [SciPy-user] Finding determinant > > > This seems to be the same problem that I've been > having with the zeros of the Bessel functions. 
I bet > that if Ali Santacruz tries > >>>from scipy import special > >>>special.jn_zeros(3,5) > he will get the same error. > > Is there a Windows version of scipy that is compiled > with a windows xp compatible ATLAS library? That would > help us backward windows users. > > Giulio > > --- Robert Kern wrote: > > > Ryan Krauss wrote: > > > That seems like a good guess. In my experience, > > most violent crashes > > > from simple things come from incompatible versions > > of scipy and numpy. > > > I have no problem executing your code. What do > > you get for > > > In [1]: scipy.__version__ > > > Out[1]: '0.5.2' > > > > > > In [2]: numpy.__version__ > > > Out[2]: '1.0.2' > > > > Incompatible versions are checked at load time and > > have been for quite some > > time. They should no longer cause such crashes. > > > > Rather, the crash that the OP is seeing is likely to > > do with the ATLAS library > > that his scipy binary was compiled with. Some of the > > official Win32 binaries > > were compiled with an ATLAS library that uses SSE2 > > instructions, which are not > > available on all CPUs. This has caused crashes for > > other people, too. > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an > > enigma, a harmless enigma > > that is made terrible by our own mad attempt to > > interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > ____________________________________________________________________________________Shape > Yahoo! in your own image. Join our Network Research Panel today! 
> http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > __________________________________________________ > Correo Yahoo! > Espacio para todos tus mensajes, antivirus y antispam ?gratis! > Reg?strate ya - http://correo.espanol.yahoo.com/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user __________________________________________________ Correo Yahoo! Espacio para todos tus mensajes, antivirus y antispam ?gratis! Reg?strate ya - http://correo.espanol.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Fri May 18 00:52:37 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 May 2007 23:52:37 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: <223255.66194.qm@web62302.mail.re1.yahoo.com> References: <223255.66194.qm@web62302.mail.re1.yahoo.com> Message-ID: O.K., I think this is fairly strange. I can duplicate this problem on my wife's laptop with an Athlon XP M processor, but only if I use either a standard python prompt (i.e. type python in a cmd.exe window) or if I use IPython with no command line switches. Using IPython with -pylab -p sci makes this problem go away. Executing the same steps as the sci profile works fine from a regular Python prompt. 
This crashes: >>> from scipy import * C:\Python25\lib\site-packages\numpy\testing\numpytest.py:634: DeprecationWarning : ScipyTest is now called NumpyTest; please update your code DeprecationWarning) >>> matr=array([[1.1,1.9],[1.9,3.5]]) >>> linalg.det(matr) But this works fine: >>> import scipy C:\Python25\lib\site-packages\numpy\testing\numpytest.py:634: DeprecationWarning : ScipyTest is now called NumpyTest; please update your code DeprecationWarning) >>> import numpy >>> from scipy import * >>> from numpy import * >>> matr=array([[1.1,1.9],[1.9,3.5]]) >>> linalg.det(matr) 0.23999999999999988 Basically, >>> numpy.linalg.det(matr) 0.23999999999999988 works fine but >>> scipy.linalg.det(matr) crashes So what does scipy.linalg.det do differently from numpy.linalg.det? Ryan On 5/17/07, Ali Santacruz wrote: > > > What CPU are you using? > > It is a mobile AMD Athlon XP2500+ > The Windows version is Version 5.1 (Build 2600.xpsp2.050301-1526: Service > Pack 1) > > > Ali S. > > ----- Mensaje original ---- > De: Ryan Krauss > Para: SciPy Users List > Enviado: jueves, 17 de mayo, 2007 14:56:48 > > Asunto: Re: [SciPy-user] Finding determinant > > > What CPU are you using? > > On 5/17/07, Ali Santacruz wrote: > > > > Hi, > > > > I updated to numpy 1.0.2 but the program still crashes, > > > > [...] I bet that if Ali Santacruz tries > > >>>from scipy import special > > >>>special.jn_zeros(3,5) > > he will get the same error. [...] > > > > Yes, I got the same error. > > > > > Some of the > > > official Win32 binaries > > > were compiled with an ATLAS library that uses SSE2 > > > instructions, which are not > > > available on all CPUs. > > > > Bad news. :-( > > > > Ali S. > > > > > > ----- Mensaje original ---- > > De: Giulio Venezian > > Para: SciPy Users List > > Enviado: jueves, 17 de mayo, 2007 8:52:42 > > Asunto: Re: [SciPy-user] Finding determinant > > > > > > This seems to be the same problem that I've been > > having with the zeros of the Bessel functions. 
I bet > > that if Ali Santacruz tries > > >>>from scipy import special > > >>>special.jn_zeros(3,5) > > he will get the same error. > > > > Is there a Windows version of scipy that is compiled > > with a windows xp compatible ATLAS library? That would > > help us backward windows users. > > > > Giulio > > > > --- Robert Kern wrote: > > > > > Ryan Krauss wrote: > > > > That seems like a good guess. In my experience, > > > most violent crashes > > > > from simple things come from incompatible versions > > > of scipy and numpy. > > > > I have no problem executing your code. What do > > > you get for > > > > In [1]: scipy.__version__ > > > > Out[1]: '0.5.2' > > > > > > > > In [2]: numpy.__version__ > > > > Out[2]: '1.0.2' > > > > > > Incompatible versions are checked at load time and > > > have been for quite some > > > time. They should no longer cause such crashes. > > > > > > Rather, the crash that the OP is seeing is likely to > > > do with the ATLAS library > > > that his scipy binary was compiled with. Some of the > > > official Win32 binaries > > > were compiled with an ATLAS library that uses SSE2 > > > instructions, which are not > > > available on all CPUs. This has caused crashes for > > > other people, too. > > > > > > -- > > > Robert Kern > > > > > > "I have come to believe that the whole world is an > > > enigma, a harmless enigma > > > that is made terrible by our own mad attempt to > > > interpret it as though it had > > > an underlying truth." > > > -- Umberto Eco > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Fri May 18 01:01:10 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 00:01:10 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <223255.66194.qm@web62302.mail.re1.yahoo.com> Message-ID: <464D3316.1050406@gmail.com> Ryan Krauss wrote: > So what does scipy.linalg.det do differently from numpy.linalg.det? Most likely your scipy.linalg is linked to a different LAPACK library than your numpy.linalg. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From ryanlists at gmail.com Fri May 18 08:23:55 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 18 May 2007 07:23:55 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: <464D3316.1050406@gmail.com> References: <223255.66194.qm@web62302.mail.re1.yahoo.com> <464D3316.1050406@gmail.com> Message-ID: I just installed http://prdownloads.sourceforge.net/numpy/numpy-1.0.2.win32-py2.5.exe?download and http://prdownloads.sourceforge.net/scipy/scipy-0.5.2.win32-py2.5.exe?download on a fresh install. Are we saying that they link to different LAPACK libraries? Ryan On 5/18/07, Robert Kern wrote: > Ryan Krauss wrote: > > > So what does scipy.linalg.det do differently from numpy.linalg.det? > > Most likely your scipy.linalg is linked to a different LAPACK library than your > numpy.linalg. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Fri May 18 08:56:45 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 18 May 2007 07:56:45 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <223255.66194.qm@web62302.mail.re1.yahoo.com> <464D3316.1050406@gmail.com> Message-ID: This has me fairly concerned. I am about to require Python for a class I am teaching this summer that starts on Tuesday (5/22). I can't live with Scipy without LAPACK. But I can't tell my students they can't use AMD processors. What would it take to build a scipy executable for AMD (non sse2?)? Do executables for LAPACK and ATLAS exist for windows so that I just have to build Scipy itself from source on Windows? I have never done that before. 
I assume it has to be done on a computer with an older AMD processor. Can it be done with mingw or do I need a computer with an AMD processor and VisualStudio or whatever the right Microsoft compiler is? Can anyone else out there easily build this executable for me? Is there another solution? Most of my students will be reluctant Python converts, so I need to make this as easy and painless for them as possible if it is going to work. Oh, and I am leaving for the weekend and may not be able to respond until late Sunday. Thanks, Ryan On 5/18/07, Ryan Krauss wrote: > I just installed > http://prdownloads.sourceforge.net/numpy/numpy-1.0.2.win32-py2.5.exe?download > and > http://prdownloads.sourceforge.net/scipy/scipy-0.5.2.win32-py2.5.exe?download > on a fresh install. Are we saying that they link to different LAPACK libraries? > > Ryan > > On 5/18/07, Robert Kern wrote: > > Ryan Krauss wrote: > > > > > So what does scipy.linalg.det do differently from numpy.linalg.det? > > > > Most likely your scipy.linalg is linked to a different LAPACK library than your > > numpy.linalg. > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless enigma > > that is made terrible by our own mad attempt to interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From giorgio.luciano at chimica.unige.it Fri May 18 09:04:35 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Fri, 18 May 2007 15:04:35 +0200 Subject: [SciPy-user] help in introduction of statistics scipy.stas for analytical purpose Message-ID: <464DA463.8060501@chimica.unige.it> Dear All, I'm happy to write that 8 authors contacted for a proposal of a book about freeware/opensource Chemometrics books have positively replied. 
I'm writing down a provisional list of chapters, and I will be glad if someone would like to help in writing the basic statistics introduction (the use of scipy.stats, for example, for simple statistics on laboratory measurements). I think it can also be a chance for teachers and people who work in the chemistry field to make Python more widespread. I will be glad to receive your contributions. For any other info just write me Giorgio From chanley at stsci.edu Fri May 18 09:14:25 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 18 May 2007 09:14:25 -0400 Subject: [SciPy-user] PyFITS 1.1 "candidate" RELEASE 3 Message-ID: <464DA6B1.8010509@stsci.edu> ------------------ | PYFITS Release | ------------------ Space Telescope Science Institute is pleased to announce the third and hopefully final candidate release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at: http://www.stsci.edu/resources/software_hardware/pyfits/Download If you encounter bugs, please send bug reports to "help at stsci.edu". We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 6 months. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. We plan to release the "official" PyFITS 1.1 version in a few weeks. ----------- | Version | ----------- Version 1.1rc3; May 17, 2007 ------------------------------- | Major Changes since v1.1rc2 | ------------------------------- * Fixes a bug in the creation of binary FITS tables on little endian machines introduced in release candidate 2.
------------------------- | Software Requirements | ------------------------- PyFITS Version 1.1rc3 REQUIRES: * Python 2.3 or later * NUMPY 1.0.1(or later) or NUMARRAY --------------------- | Installing PyFITS | --------------------- PyFITS 1.1rc1 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing % python setup.py install to install it in Python's site-packages directory. Alternatively the command %python setup.py install --local="/destination/directory/" will install PyFITS in an arbitrary directory which should be placed on PYTHONPATH. Once numarray or numpy has been installed, then PyFITS should be available for use under Python. ----------------- | Download Site | ----------------- http://www.stsci.edu/resources/software_hardware/pyfits/Download ---------- | Usage | ---------- Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. Set NUMERIX to 'numpy' for the NUMPY version of pyfits. If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed the NUMERIX value is used to decide which version to import. If no NUMERIX value is set then the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import. 
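The NUMERIX selection rules described above can be restated as a small helper. Note that `pick_numerix` is a hypothetical function written here only to summarize the documented precedence; it is not part of the PyFITS API:

```python
def pick_numerix(numpy_installed, numarray_installed, numerix=None):
    """Restate the documented import-time selection rules:
    - only one array package installed -> use that package,
    - both installed -> the NUMERIX environment variable decides,
    - NUMERIX unset -> default to the NUMARRAY version,
    - any other NUMERIX value -> an exception is raised on import.
    """
    if numpy_installed != numarray_installed:
        # Only one package is present; NUMERIX is ignored.
        return "numpy" if numpy_installed else "numarray"
    if numerix is None:
        # Both installed, NUMERIX unset: numarray is the default.
        return "numarray"
    if numerix in ("numpy", "numarray"):
        return numerix
    raise ValueError("invalid NUMERIX value: %r" % numerix)
```

For example, with both packages installed and NUMERIX set to "numpy", `pick_numerix(True, True, "numpy")` returns "numpy", while leaving NUMERIX unset falls back to the numarray version.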
--------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like the "bleeding" edge of PyFITS can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki From nwagner at iam.uni-stuttgart.de Fri May 18 11:00:04 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 18 May 2007 17:00:04 +0200 Subject: [SciPy-user] petsc4py Message-ID: <464DBF74.4080808@iam.uni-stuttgart.de> Hi all, My inquiry is not directly related to numpy/scipy but has someone used petsc4py and can send me some examples off-list ? Cheers, Nils Reference: http://code.google.com/p/petsc4py/ From anand at soe.ucsc.edu Fri May 18 11:23:48 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Fri, 18 May 2007 08:23:48 -0700 Subject: [SciPy-user] Pymachine - PyMC interoperability Message-ID: <464DC504.7010503@cse.ucsc.edu> David and co., In the nascent 2.0 release of PyMC, we're trying to separate declaration of probability models from the classes that do the actual fitting. We're wondering whether the PyMachine developers would be willing to keep to a similar design for methods that are explicitly based on probability models for the sake of interoperability. As PyMC's name implies, the fitting algorithms we focus on are under the umbrella of MCMC, but there will be cases where our users prefer to fit their model using another method instead or in addition. If Pymachine also separates model declaration and fitting, and we're able to come up with a declaration scheme that works for both of us, it should often be feasible for users to code up a probability model and fit it using either project with minimal rewrite. 
The advantages seem pretty clear: we'd both give our users more methods to choose from while decreasing our respective development and maintenance commitments. The common probability model 'declaration scheme' could be as thin as a common syntax or as thick as a common set of basic classes representing nodes or variables. Much of the effort we've devoted to the 2.0 release so far has gone toward designing such basic classes, and we'd be happy to write our solution up to seed the discussion. Or we could just compare notes after you've had the chance to hammer out your own design. Whatever your preference is, please let us know your thoughts on interoperability and keep us in the loop regarding your basic class structure for probability models. Cheers, the PyMC developers, Chris Fonnesbeck, David Huard and Anand Patil From t_crane at mrl.uiuc.edu Fri May 18 16:40:37 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 18 May 2007 15:40:37 -0500 Subject: [SciPy-user] help finding Matlab equivalent Message-ID: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> Hi, In programming in Matlab, I often used expressions such as this: a = 200:100:1000; This gives a = [200,300,400,...1000] What's the best way to accomplish the same task with scipy/numpy? I can use linspace, but it's a little awkward. For example, in order to get the same value for a I have to do this: import scipy a = scipy.linspace(200,1000,(1000-200)/100 + 1) Then, a = [200,300,400...1000] Is there a more efficient/readable way to do this? thanks, trevis ________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Ilinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Fri May 18 16:46:13 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 May 2007 15:46:13 -0500 Subject: [SciPy-user] help finding Matlab equivalent In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> Message-ID: <464E1095.3000504@gmail.com> Trevis Crane wrote: > Hi, > > In programming in Matlab, I often used expressions such as this: > > a = 200:100:1000; > > This gives > > a = [200,300,400,...1000] > > What's the best way to accomplish the same task with scipy/numpy? a = arange(200, 1001, 100) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tom.denniston at alum.dartmouth.org Fri May 18 16:46:58 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 18 May 2007 15:46:58 -0500 Subject: [SciPy-user] help finding Matlab equivalent In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> Message-ID: Don't you just want the following? In [3]: numpy.arange(200, 1000+100, 100) Out[3]: array([ 200, 300, 400, 500, 600, 700, 800, 900, 1000]) On 5/18/07, Trevis Crane wrote: > > > > Hi, > > > > In programming in Matlab, I often used expressions such as this: > > > > a = 200:100:1000; > > > > This gives > > > > a = [200,300,400,...1000] > > > > What's the best way to accomplish the same task with scipy/numpy? I can use > linspace, but it's a little awkward. For example, in order to get the same > value for a I have to do this: > > > > import scipy > > a = scipy.linspace(200,1000,(1000-200)/100 + 1) > > > > Then, > > > > a = [200,300,400...1000] > > > > Is there a more efficient/readable way to do this?
> > > > thanks, > > trevis > > > > ________________________________________________ > > > > Trevis Crane > > Postdoctoral Research Assoc. > > Department of Physics > > University of Ilinois > > 1110 W. Green St. > > Urbana, IL 61801 > > > > p: 217-244-8652 > > f: 217-244-2278 > > e: tcrane at uiuc.edu > > ________________________________________________ > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From t_crane at mrl.uiuc.edu Fri May 18 16:47:59 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 18 May 2007 15:47:59 -0500 Subject: [SciPy-user] help finding Matlab equivalent Message-ID: <9EADC1E53F9C70479BF6559370369114134456@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Robert Kern > Sent: Friday, May 18, 2007 3:46 PM > To: SciPy Users List > Subject: Re: [SciPy-user] help finding Matlab equivalent > > Trevis Crane wrote: > > Hi, > > > > In programming in Matlab, I often used expressions such as this: > > > > a = 200:100:1000; > > > > This gives > > > > a = [200,300,400,...1000] > > > > What's the best way to accomplish the same task with scipy/numpy? > > a = arange(200, 1001, 100) [Trevis Crane] OK, I feel dumb. I didn't even consider the possibility that arange would do that. thanks > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From Karl.Young at ucsf.edu Fri May 18 15:43:45 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Fri, 18 May 2007 12:43:45 -0700 Subject: [SciPy-user] help finding Matlab equivalent In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED1@mrlnt6.mrl.uiuc.edu> Message-ID: <464E01F1.3010803@ucsf.edu> I believe a = scipy.arange(200,1100,100) should do the trick > Hi, > > In programming in Matlab, I often used expressions such as this: > > a = 200:100:1000; > > This gives > > a = [200,300,400,...1000] > > What's the best way to accomplish the same task with scipy/numpy? I > can use linspace, but it's a little awkward. For example, in order to > get the same value for a I have to do this: > > import scipy > > a = scipy.linspace(200,1000,(1000-200)/100 + 1) > > Then, > > a = [200,300,400...1000] > > Is there a more efficient/readable way to do this? > > thanks, > > trevis > > ________________________________________________ > > Trevis Crane > > Postdoctoral Research Assoc. > > Department of Physics > > University of Ilinois > > 1110 W. Green St.
> > Urbana, IL 61801 > > p: 217-244-8652 > > f: 217-244-2278 > > e: tcrane at uiuc.edu > > ________________________________________________ > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From gael.varoquaux at normalesup.org Fri May 18 18:11:53 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 19 May 2007 00:11:53 +0200 Subject: [SciPy-user] help finding Matlab equivalent In-Reply-To: <9EADC1E53F9C70479BF6559370369114134456@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114134456@mrlnt6.mrl.uiuc.edu> Message-ID: <20070518221153.GA28083@clipper.ens.fr> On Fri, May 18, 2007 at 03:47:59PM -0500, Trevis Crane wrote: > > > In programming in Matlab, I often used expressions such as this: > > > a = 200:100:1000; > > > This gives > > > a = [200,300,400,...1000] > > > What's the best way to accomplish the same task with scipy/numpy? > > a = arange(200, 1001, 100) And if you want something more compact, and more similar to matlab: In [3]: r_[200:1001:100] Out[3]: array([ 200, 300, 400, 500, 600, 700, 800, 900, 1000]) Gaël From nvf at uwm.edu Sat May 19 00:57:50 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Fri, 18 May 2007 23:57:50 -0500 Subject: [SciPy-user] Scipy vs Numpy rfft Message-ID: Dear all, I was very surprised to find that on my machine (Intel Mac), a call to numpy.fft.rfft is many times faster (10-20x) than scipy.fftpack.rfft. I understand that their outputs require different unpacking, but I really only care about the speed here. Is this a common problem? Scipy is linked against a universally built FFTW.
My input data length is a power of 2. In [41]: scipy.show_config() umfpack_info: NOT AVAILABLE lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] djbfft_info: NOT AVAILABLE fftw3_info: libraries = ['fftw3'] library_dirs = ['/opt/lscsoft/non-lsc/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/opt/lscsoft/non-lsc/include'] mkl_info: NOT AVAILABLE In [42]: numpy.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] In [43]: numpy.__version__ Out[43]: '1.0.3.dev3725' In [44]: scipy.__version__ Out[44]: '0.5.3.dev2935' Any ideas? I'm happy to continue using numpy's rfft, but I am curious. Also, has there been any thought to wrapping the single-precision versions of the FFTW calls? 
Thanks, Nick From nogradi at gmail.com Sat May 19 10:00:22 2007 From: nogradi at gmail.com (Daniel Nogradi) Date: Sat, 19 May 2007 16:00:22 +0200 Subject: [SciPy-user] installation problem with dfftpack on linux Message-ID: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> Hi list, I have the following installed on a Fedora 3 box: gcc-3.4.4-2.fc3 gcc-c++-3.4.4-2.fc3 gcc-g77-3.4.4-2.fc3 libf2c-3.4.4-2.fc3 libgcc-3.4.4-2.fc3 compat-gcc-8-3.3.4.2 compat-gcc-c++-8-3.3.4.2 gcc4-4.0.0-0.41.fc3 gcc4-gfortran-4.0.0-0.41.fc3 gcc4-gfortran-4.0.0-0.41.fc3 libgfortran-4.0.0-0.41.fc3 and am trying to do a fresh install of scipy-0.5.2 but "python setup.py build_ext" is complaining about a missing f95 executable and ld about missing dfftpack: /usr/bin/g77 -g -Wall -shared build/temp.linux-i686-2.5/build/src.linux-i686-2.5/Lib/fftpack/_fftpackmodule.o build/temp.linux-i686-2.5/Lib/fftpack/src/zfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/drfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/zrfft.o build/temp.linux-i686-2.5/Lib/fftpack/src/zfftnd.o build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o -L/usr/local/lib -Lbuild/temp.linux-i686-2.5 -ldfftpack -lfftw3 -lg2c -o build/lib.linux-i686-2.5/scipy/fftpack/_fftpack.so /usr/bin/ld: cannot find -ldfftpack collect2: ld returned 1 exit status Shouldn't dfftpack come with the scipy distribution? I tried "python setup.py config_fc --fcompiler=gnu build_ext" and also "python setup.py config_fc --fcompiler=gnu95 build_ext" but no luck. Daniel From nogradi at gmail.com Sat May 19 10:14:37 2007 From: nogradi at gmail.com (Daniel Nogradi) Date: Sat, 19 May 2007 16:14:37 +0200 Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> Message-ID: <5f56302b0705190714s3c050469w945e82008a22944e@mail.gmail.com> I forgot to say that python is 2.5. 
Daniel From chanley at stsci.edu Sat May 19 11:04:38 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Sat, 19 May 2007 11:04:38 -0400 Subject: [SciPy-user] PyFITS 1.1 "candidate" RELEASE 3 In-Reply-To: <464DA6B1.8010509@stsci.edu> References: <464DA6B1.8010509@stsci.edu> Message-ID: <464F1206.8040302@stsci.edu> My apologies. There was a bug introduced into Friday morning's PyFITS 1.1 "candidate release 3". We are currently working to fix this error. We will post a new candidate release as soon as we have corrected this problem. Chris Hanley Christopher Hanley wrote: > ------------------ > | PYFITS Release | > ------------------ > > Space Telescope Science Institute is pleased to announce the > third and hopefully final candidate release of PyFITS 1.1. This > release includes support for both the NUMPY and NUMARRAY array > packages. This software can be downloaded at: > > http://www.stsci.edu/resources/software_hardware/pyfits/Download > > If you encounter bugs, please send bug reports to "help at stsci.edu". > > We intend to support NUMARRAY and NUMPY simultaneously for a > transition period of no less than 6 months. Eventually, however, > support for NUMARRAY will disappear. During this period, it is > likely that new features will appear only for NUMPY. The > support for NUMARRAY will primarily be to fix serious bugs and > handle platform updates. > > We plan to release the "official" PyFITS 1.1 version in a few weeks. > > ----------- > | Version | > ----------- > > Version 1.1rc3; May 17, 2007 > > ------------------------------- > | Major Changes since v1.1rc2 | > ------------------------------- > > * Fixes a bug in the creation of binary FITS tables > on little endian machines introduced in release > candidate 2.
> > ------------------------- > | Software Requirements | > ------------------------- > > PyFITS Version 1.1rc3 REQUIRES: > > * Python 2.3 or later > * NUMPY 1.0.1(or later) or NUMARRAY > > --------------------- > | Installing PyFITS | > --------------------- > PyFITS 1.1rc1 is distributed as a Python distutils module. > Installation simply involves unpacking the package > and executing > > % python setup.py install > > to install it in Python's site-packages directory. > > Alternatively the command > > %python setup.py install --local="/destination/directory/" > > will install PyFITS in an arbitrary directory which should > be placed on PYTHONPATH. Once numarray or numpy has been > installed, then PyFITS should be available for use under > Python. > > ----------------- > | Download Site | > ----------------- > > http://www.stsci.edu/resources/software_hardware/pyfits/Download > > ---------- > | Usage | > ---------- > > Users will issue an "import pyfits" command as in the past. > However, the use of the NUMPY or NUMARRAY version of PyFITS > will be controlled by an environment variable called NUMERIX. > > Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. > Set NUMERIX to 'numpy' for the NUMPY version of pyfits. > > If only one array package is installed, that package's version > of PyFITS will be imported. If both packages are installed > the NUMERIX value is used to decide which version to import. > If no NUMERIX value is set then the NUMARRAY version of PyFITS > will be imported. > > Anything else will raise an exception upon import. 
> > --------------- > | Bug Reports | > --------------- > > Please send all PyFITS bug reports to help at stsci.edu > > ------------------ > | Advanced Users | > ------------------ > > Users who would like the "bleeding" edge of PyFITS can retrieve > the software from our SUBVERSION repository hosted at: > > http://astropy.scipy.org/svn/pyfits/trunk > > We also provide a Trac site at: > > http://projects.scipy.org/astropy/pyfits/wiki > > From pearu at cens.ioc.ee Sat May 19 11:45:33 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 19 May 2007 18:45:33 +0300 (EEST) Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> Message-ID: <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> On Sat, May 19, 2007 5:00 pm, Daniel Nogradi wrote: > Hi list, > > I have the following installed on a Fedora 3 box: > > gcc-3.4.4-2.fc3 > gcc-c++-3.4.4-2.fc3 > gcc-g77-3.4.4-2.fc3 > libf2c-3.4.4-2.fc3 > libgcc-3.4.4-2.fc3 > compat-gcc-8-3.3.4.2 > compat-gcc-c++-8-3.3.4.2 > > gcc4-4.0.0-0.41.fc3 > gcc4-gfortran-4.0.0-0.41.fc3 > gcc4-gfortran-4.0.0-0.41.fc3 > libgfortran-4.0.0-0.41.f > > and am trying to do a fresh install of scipy-0.5.2 but "python > setup.py build_ext" is complaining about a missing f95 executable and > ld about missing dfftpack ..... You can ignore messages about missing f95 executable, they are just parts of compiler detection tools. Libraries dfftpack and others are built with build_clib command, so sole build_ext is not enough. Use python setup.py build to build scipy.
HTH, Pearu From nogradi at gmail.com Sat May 19 13:03:49 2007 From: nogradi at gmail.com (Daniel Nogradi) Date: Sat, 19 May 2007 19:03:49 +0200 Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> Message-ID: <5f56302b0705191003k7bbca5f1sedbd0e27dae9a00c@mail.gmail.com> > > I have the following installed on a Fedora 3 box: > > > > gcc-3.4.4-2.fc3 > > gcc-c++-3.4.4-2.fc3 > > gcc-g77-3.4.4-2.fc3 > > libf2c-3.4.4-2.fc3 > > libgcc-3.4.4-2.fc3 > > compat-gcc-8-3.3.4.2 > > compat-gcc-c++-8-3.3.4.2 > > > > gcc4-4.0.0-0.41.fc3 > > gcc4-gfortran-4.0.0-0.41.fc3 > > gcc4-gfortran-4.0.0-0.41.fc3 > > libgfortran-4.0.0-0.41.f > > > > and am trying to do a fresh install of scipy-0.5.2 but "python > > setup.py build_ext" is complaining about a missing f95 executable and > > ld about missing dfftpack ..... > > You can ignore messages about missing f95 executable, they are just > parts of compiler detection tools. > > Libraries dfftpack and others are built with build_clib command, > so sole build_ext is not enough. Use > > python build > > to build scipy. Thanks very much, indeed, build_clib, build_ext, build works (in this order). 
Daniel From robert.kern at gmail.com Sat May 19 14:52:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 19 May 2007 13:52:01 -0500 Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <5f56302b0705191003k7bbca5f1sedbd0e27dae9a00c@mail.gmail.com> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> <5f56302b0705191003k7bbca5f1sedbd0e27dae9a00c@mail.gmail.com> Message-ID: <464F4751.4070603@gmail.com> Daniel Nogradi wrote: >Pearu Peterson wrote: >> Libraries dfftpack and others are built with build_clib command, >> so sole build_ext is not enough. Use >> >> python build >> >> to build scipy. > > Thanks very much, indeed, build_clib, build_ext, build works (in this order). Note that I've fixed this issue in numpy after 1.0.2 was released. When numpy 1.0.3 comes out (or if you use numpy from SVN), a plain "python setup.py build_ext" should work just fine. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at cens.ioc.ee Sat May 19 16:02:53 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 19 May 2007 23:02:53 +0300 (EEST) Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <464F4751.4070603@gmail.com> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> <5f56302b0705191003k7bbca5f1sedbd0e27dae9a00c@mail.gmail.com> <464F4751.4070603@gmail.com> Message-ID: <43875.84.202.199.60.1179604973.squirrel@cens.ioc.ee> On Sat, May 19, 2007 9:52 pm, Robert Kern wrote: > Daniel Nogradi wrote: >>Pearu Peterson wrote: >>> Libraries dfftpack and others are built with build_clib command, >>> so sole build_ext is not enough. Use >>> >>> python build >>> >>> to build scipy. 
>> >> Thanks very much, indeed, build_clib, build_ext, build works (in this >> order). > > Note that I've fixed this issue in numpy after 1.0.2 was released. When > numpy > 1.0.3 comes out (or if you use numpy from SVN), a plain "python setup.py > build_ext" should work just fine. I think `build_ext` is not needed either, a plain `python setup.py build` should be sufficient as starting from 1.0.3 one can specify Fortran compiler as an option --fcompiler to the `build` command (when necessary). Hmm, only --inplace is not an option to `build` yet but that would be easy to add. Do we need it? Pearu From robert.kern at gmail.com Sat May 19 16:09:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 19 May 2007 15:09:02 -0500 Subject: [SciPy-user] installation problem with dfftpack on linux In-Reply-To: <43875.84.202.199.60.1179604973.squirrel@cens.ioc.ee> References: <5f56302b0705190700o4b29dc1ay38cd8f3e729ab0ce@mail.gmail.com> <49709.84.202.199.60.1179589533.squirrel@cens.ioc.ee> <5f56302b0705191003k7bbca5f1sedbd0e27dae9a00c@mail.gmail.com> <464F4751.4070603@gmail.com> <43875.84.202.199.60.1179604973.squirrel@cens.ioc.ee> Message-ID: <464F595E.30808@gmail.com> Pearu Peterson wrote: > I think `build_ext` is not needed either, a plain `python setup.py build` > should be sufficient as starting from 1.0.3 one can specify Fortran > compiler > as an option --fcompiler to the `build` command (when necessary). Right, it's not necessary. However, since some things do just call "build_ext" and not "build" (notably easy_install), just invoking "build_ext" should also work. > Hmm, only --inplace is not an option to `build` yet but that would > be easy to add. Do we need it? I don't think so. We should avoid messing with the semantics of the official commands more than necessary. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sun May 20 02:54:34 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 20 May 2007 15:54:34 +0900 Subject: [SciPy-user] Scipy vs Numpy rfft In-Reply-To: References: Message-ID: <464FF0AA.4090005@ar.media.kyoto-u.ac.jp> Nick Fotopoulos wrote: > Dear all, > > I was very surprised to find that on my machine (Intel Mac), a call to > numpy.fft.rfft is many times faster (10-20x) than scipy.fftpack.rfft. > I understand that their outputs require different unpacking, but I > really only care about the speed here. Is this a common problem? > Scipy is linked against a universally built FFTW. My input data > length is a power of 2. Does this happen for the first call only, or for many calls ? Could you give us a script which shows this behaviour ? David From asefu at fooie.net Sun May 20 20:03:30 2007 From: asefu at fooie.net (Fahd Sultan) Date: Sun, 20 May 2007 20:03:30 -0400 Subject: [SciPy-user] Scipy Message-ID: <4650E1D2.1030604@fooie.net> Hello, I installed scipy 0.5.2 on RedHat AS 4 (intel 64 bit) both from release tar.gz and svn and I get the error below when I try a test computation. Python ver is 2.3.4, lapack 3.0-25 (recompiled from src rpm lapack-3.0-25.1.src.rpm). I've posted the install log at http://www.fooie.net/logs/scipy_install_log_052.txt , since it seems too long to email out. Am I missing something ? thanks, Fahd Python 2.3.4 (#1, Jan 9 2007, 16:40:09) [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> import scipy as S >>> Asp = sparse.lil_matrix((1000,1000)) Traceback (most recent call last): File "", line 1, in ? NameError: name 'sparse' is not defined >>> from numpy import allclose, arange, eye, linalg, ones >>> from scipy import linsolve, sparse Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.3/site-packages/scipy/linsolve/__init__.py", line 5, in ? 
import umfpack File "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/__init__.py", line 3, in ? from umfpack import * File "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/umfpack.py", line 11, in ? import scipy.sparse as sp File "/usr/lib64/python2.3/site-packages/scipy/sparse/__init__.py", line 5, in ? from sparse import * File "/usr/lib64/python2.3/site-packages/scipy/sparse/sparse.py", line 15, in ? from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ ImportError: cannot import name densetocsr Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.3/site-packages/scipy/linsolve/__init__.py", line 5, in ? import umfpack File "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/__init__.py", line 3, in ? from umfpack import * File "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/umfpack.py", line 11, in ? import scipy.sparse as sp File "/usr/lib64/python2.3/site-packages/scipy/sparse/__init__.py", line 5, in ? from sparse import * File "/usr/lib64/python2.3/site-packages/scipy/sparse/sparse.py", line 15, in ? from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ ImportError: cannot import name densetocsr From asefu at fooie.net Sun May 20 20:05:49 2007 From: asefu at fooie.net (Fahd Sultan) Date: Sun, 20 May 2007 20:05:49 -0400 Subject: [SciPy-user] Scipy ImportError: cannot import name densetocsr In-Reply-To: <4650E1D2.1030604@fooie.net> References: <4650E1D2.1030604@fooie.net> Message-ID: <4650E25D.7010602@fooie.net> Gads, I sent the previous email with the subject line of just "scipy". So sorry, fahd Fahd Sultan wrote: > Hello, > > I installed scipy 0.5.2 on RedHat AS 4 (intel 64 bit) both from release > tar.gz and svn and I get the error below when I try a test computation. > Python ver is 2.3.4, lapack 3.0-25 (recompiled from src rpm > lapack-3.0-25.1.src.rpm). 
I've posted the install log at > http://www.fooie.net/logs/scipy_install_log_052.txt , since it seems too > long to email out. > Am I missing something ? > > thanks, > > Fahd > > > Python 2.3.4 (#1, Jan 9 2007, 16:40:09) > [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy as N > >>> import scipy as S > >>> Asp = sparse.lil_matrix((1000,1000)) > Traceback (most recent call last): > File "", line 1, in ? > NameError: name 'sparse' is not defined > >>> from numpy import allclose, arange, eye, linalg, ones > >>> from scipy import linsolve, sparse > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib64/python2.3/site-packages/scipy/linsolve/__init__.py", > line 5, in ? > import umfpack > File > "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/__init__.py", > line 3, in ? > from umfpack import * > File > "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/umfpack.py", > line 11, in ? > import scipy.sparse as sp > File "/usr/lib64/python2.3/site-packages/scipy/sparse/__init__.py", > line 5, in ? > from sparse import * > File "/usr/lib64/python2.3/site-packages/scipy/sparse/sparse.py", line > 15, in ? > from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ > ImportError: cannot import name densetocsr > > > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib64/python2.3/site-packages/scipy/linsolve/__init__.py", > line 5, in ? > import umfpack > File > "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/__init__.py", > line 3, in ? > from umfpack import * > File > "/usr/lib64/python2.3/site-packages/scipy/linsolve/umfpack/umfpack.py", > line 11, in ? > import scipy.sparse as sp > File "/usr/lib64/python2.3/site-packages/scipy/sparse/__init__.py", > line 5, in ? > from sparse import * > File "/usr/lib64/python2.3/site-packages/scipy/sparse/sparse.py", line > 15, in ? 
> from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ > ImportError: cannot import name densetocsr > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From fredmfp at gmail.com Mon May 21 12:37:13 2007 From: fredmfp at gmail.com (fred) Date: Mon, 21 May 2007 18:37:13 +0200 Subject: [SciPy-user] f2py with i686 vs x86_64 arch... Message-ID: <4651CAB9.7020202@gmail.com> Hi, I got a (strange ?) problem using f2py on x86_64 arch (linux box) The short code attached works fine on i686 arch, but gives bad results on x86_64 (each are debian etch). 1) On i686 arch gcc -Wall funcs.c -c -o funcs.o f2py -c foo.f funcs.o -m foo [pts/4]:~/tmp/{10}/> ipython Python 2.4.4 (#2, Apr 5 2007, 20:11:18) Type "copyright", "credits" or "license" for more information. IPython 0.7.2 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import foo In [2]: print foo.foo(2) 0 2. 4. 1 2. 8. None 2) On x86_64 arch gcc -Wall -fPIC funcs.c -c -o funcs.o f2py -c foo.f funcs.o -m foo [pts/8]:~/tmp/{62}/> ipython Python 2.4.4 (#2, Apr 5 2007, 18:43:10) Type "copyright", "credits" or "license" for more information. IPython 0.7.2 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import foo In [2]: print foo.foo(2) 0 2. 0. 1 2. 0. None Any suggestion ? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: foo.f URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: funcs.c Type: text/x-csrc Size: 308 bytes Desc: not available URL: From ryanlists at gmail.com Mon May 21 12:39:33 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 11:39:33 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <223255.66194.qm@web62302.mail.re1.yahoo.com> <464D3316.1050406@gmail.com> Message-ID: Is it possible that the Numpy and Scipy windows executables are linked to different LAPACK libraries? On 5/18/07, Ryan Krauss wrote: > This has me fairly concerned. I am about to require Python for a > class I am teaching this summer that starts on Tuesday (5/22). I > can't live with Scipy without LAPACK. But I can't tell my students > they can't use AMD processors. What would it take to build a scipy > executable for AMD (non sse2?)? Do executables for LAPACK and ATLAS > exist for windows so that I just have to build Scipy itself from > source on Windows? I have never done that before. I assume it has to > be done on a computer with an older AMD processor. Can it be done > with mingw or do I need a computer with an AMD processor and > VisualStudio or whatever the right Microsoft compiler is? Can anyone > else out there easily build this executable for me? Is there another > solution? Most of my students will be reluctant Python converts, so I > need to make this as easy and painless for them as possible if it is > going to work. > > Oh, and I am leaving for the weekend and may not be able to respond > until late Sunday. > > Thanks, > > Ryan > > On 5/18/07, Ryan Krauss wrote: > > I just installed > > http://prdownloads.sourceforge.net/numpy/numpy-1.0.2.win32-py2.5.exe?download > > and > > http://prdownloads.sourceforge.net/scipy/scipy-0.5.2.win32-py2.5.exe?download > > on a fresh install. Are we saying that they link to different LAPACK libraries? 
> > > > Ryan > > > > On 5/18/07, Robert Kern wrote: > > > Ryan Krauss wrote: > > > > > > > So what does scipy.linalg.det do differently from numpy.linalg.det? > > > > > > Most likely your scipy.linalg is linked to a different LAPACK library than your > > > numpy.linalg. > > > > > > -- > > > Robert Kern > > > > > > "I have come to believe that the whole world is an enigma, a harmless enigma > > > that is made terrible by our own mad attempt to interpret it as though it had > > > an underlying truth." > > > -- Umberto Eco > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > From robert.kern at gmail.com Mon May 21 13:17:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 12:17:23 -0500 Subject: [SciPy-user] Finding determinant In-Reply-To: References: <223255.66194.qm@web62302.mail.re1.yahoo.com> <464D3316.1050406@gmail.com> Message-ID: <4651D423.3090709@gmail.com> Ryan Krauss wrote: > Is it possible that the Numpy and Scipy windows executables are linked > to different LAPACK libraries? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Mon May 21 14:03:29 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 13:03:29 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) Message-ID: I am trying to build from source on Windows to get a scipy that works with older AMD processors. I am having some troubles. Everything builds easily enough, but if I build from the source tarball, I get lots of errors related to loadmat. 
If I build from svn, Python crashes during scipy.test() >>> import scipy >>> scipy.test() Found 4 tests for scipy.io.array_import Found 7 tests for scipy.cluster.vq Found 128 tests for scipy.linalg.fblas Found 399 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 98 tests for scipy.stats.stats Found 53 tests for scipy.linalg.decomp Found 3 tests for scipy.integrate.quadrature Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 6 tests for scipy.linalg.iterative Found 6 tests for scipy.interpolate.fitpack Found 6 tests for scipy.interpolate Found 12 tests for scipy.io.mmio Found 1 tests for scipy.integrate Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.io.recaster Found 4 tests for scipy.optimize.zeros Warning: FAILURE importing tests for C:\Python25\Lib\site-packages\scipy\sparse\sparse.py:15: ImportError: cannot imp ort name densetocsr (in ) Found 41 tests for scipy.linalg.basic Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 10 tests for scipy.stats.morestats Found 14 tests for scipy.linalg.blas Found 70 tests for scipy.stats.distributions Found 5 tests for scipy.odr Found 10 tests for scipy.optimize.nonlin Found 4 tests for scipy.fftpack.helper Found 5 tests for scipy.io.npfile Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
..............caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ................................................................................ .........................C:\Python25\lib\site-packages\scipy\ndimage\interpolati on.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ................................................................................ .............E and it crashes at that point. Restarting Python and trying from scipy import * gives >>> import scipy >>> from scipy import * Traceback (most recent call last): File "", line 1, in File "C:\Python25\Lib\site-packages\scipy\linsolve\__init__.py", line 5, in import umfpack File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\__init__.py", line 3, in from umfpack import * File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\umfpack.py", line 1 1, in import scipy.sparse as sp File "C:\Python25\Lib\site-packages\scipy\sparse\__init__.py", line 5, in from sparse import * File "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", line 15, in from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \ ImportError: cannot import name densetocsr Can someone help me through this? Thanks, Ryan From t_crane at mrl.uiuc.edu Mon May 21 14:55:59 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Mon, 21 May 2007 13:55:59 -0500 Subject: [SciPy-user] index arrangement standard Message-ID: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> Hi all, As I understand it, an m x n array has m rows, and n columns. 
So that ones([2,3]) results in an array of ones that has two rows and three columns. Why, when we want to add a third dimension, does that index go first? That is, ones([2,3,4]) doesn't give me 2 rows, 3 columns and 4 pages (what is the right word for this?). Instead, I get 2 pages with 3 rows and 4 columns each. For me this seems counterintuitive, but obviously it was designed like this for a reason - what's that reason? just curious, trevis ________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Illinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Mon May 21 15:05:48 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 21 May 2007 22:05:48 +0300 Subject: [SciPy-user] index arrangement standard In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> Message-ID: <4651ED8C.7040500@ukr.net> BTW I think it would be nice to have the possibility to set ones(2,3) (=ones([2,3])) zeros(2,3), array(2,3,4,5) etc However, this topic is better for the numpy, not the scipy, mailing lists Regards, D. Trevis Crane wrote: > > Hi all, > > As I understand it, an m x n array has m rows, and n columns. So that > > ones([2,3]) > > results in an array of ones that has two rows and three columns. > > Why, when we want to add a third dimension, does that index go first? > That is, > > ones([2,3,4]) > > doesn't give me 2 rows, 3 columns and 4 pages (what is the right word > for this?). Instead, I get 2 pages with 3 rows and 4 columns each. For > me this seems counterintuitive, but obviously it was designed like > this for a reason - what's that reason?
> > just curious, > > trevis > > ________________________________________________ > > Trevis Crane > > Postdoctoral Research Assoc. > > Department of Physics > > University of Illinois > > 1110 W. Green St. > > Urbana, IL 61801 > > p: 217-244-8652 > > f: 217-244-2278 > > e: tcrane at uiuc.edu > > ________________________________________________ > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon May 21 15:08:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 14:08:47 -0500 Subject: [SciPy-user] index arrangement standard In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> Message-ID: <4651EE3F.9030609@gmail.com> Trevis Crane wrote: > Hi all, > > As I understand it, an m x n array has m rows, and n columns. So that > > ones([2,3]) > > results in an array of ones that has two rows and three columns. > > Why, when we want to add a third dimension, does that index go first? > That is, > > ones([2,3,4]) > > doesn't give me 2 rows, 3 columns and 4 pages (what is the right word > for this?). Instead, I get 2 pages with 3 rows and 4 columns each. For > me this seems counterintuitive, but obviously it was designed like this > for a reason - what's that reason? You always have to add new dimensions "at the beginning" so to speak. a[i] is always equivalent to a[i,...]. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
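[Editor's note: Robert Kern's rule of thumb, that new dimensions are prepended and that a[i] is always equivalent to a[i, ...], can be checked directly with a short numpy sketch (any recent numpy behaves this way):]

```python
import numpy as np

# ones([2, 3, 4]) is 2 "pages", each a 3x4 block: the new axis goes first.
a = np.ones([2, 3, 4])
print(a.shape)        # (2, 3, 4)

# Indexing consumes the leading axis, leaving the trailing 3x4 block,
# so a[0] is the first 3x4 "page".
print(a[0].shape)     # (3, 4)

# a[i] and a[i, ...] are the same thing.
print(np.array_equal(a[0], a[0, ...]))  # True
```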
-- Umberto Eco From gnchen at cortechs.net Mon May 21 15:13:53 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 21 May 2007 12:13:53 -0700 Subject: [SciPy-user] (no subject) Message-ID: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> Hi! I need to build a distribution OS X package from source for numpy and scipy . But setup.py in both of them seems lacking that option. Can anyone tell me how to do that? Thanks! Gen -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon May 21 15:14:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 14:14:03 -0500 Subject: [SciPy-user] index arrangement standard In-Reply-To: <4651ED8C.7040500@ukr.net> References: <9EADC1E53F9C70479BF6559370369114142ED2@mrlnt6.mrl.uiuc.edu> <4651ED8C.7040500@ukr.net> Message-ID: <4651EF7B.2010404@gmail.com> dmitrey wrote: > BTW I think it would be nice to have possibility to set > > ones(2,3) (=ones([2,3])) > zeros(2,3), > array(2,3,4,5) etc > However, this topic is better for numpy, not scipy mailing lists Yes, however, the suggestion has been made before and has been rejected. Each of those functions take other arguments. Having *args would interfere with them. Also, it would introduce multiple ways of doing the same thing for not much reason. That causes confusion for people trying to learn to use numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Mon May 21 15:34:01 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Mon, 21 May 2007 15:34:01 -0400 Subject: [SciPy-user] (no subject) In-Reply-To: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> Message-ID: <20070521193401.GA3524@arbutus.physics.mcmaster.ca> On Mon, May 21, 2007 at 12:13:53PM -0700, Gennan Chen wrote: > Hi! > > I need to build a distribution OS X package from source for numpy and > scipy . But setup.py in both of them seems lacking that option. Can > anyone tell me how to do that? In numpy, the setupegg.py script works the same as setup.py, except it import setuptools first so stuff like bdist_egg and bdist_mpkg should work (assuming you have the py2app stuff installed). You can do the same with scipy by importing setuptools first: $ python -c 'import setuptools; execfile("setup.py")' bdist_mpkg -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From barrywark at gmail.com Mon May 21 15:40:46 2007 From: barrywark at gmail.com (Barry Wark) Date: Mon, 21 May 2007 12:40:46 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: <20070521193401.GA3524@arbutus.physics.mcmaster.ca> References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> <20070521193401.GA3524@arbutus.physics.mcmaster.ca> Message-ID: Or use the bdist_mpkg script that comes with py2app. On 5/21/07, David M. Cooke wrote: > On Mon, May 21, 2007 at 12:13:53PM -0700, Gennan Chen wrote: > > Hi! > > > > I need to build a distribution OS X package from source for numpy and > > scipy . But setup.py in both of them seems lacking that option. Can > > anyone tell me how to do that? > > In numpy, the setupegg.py script works the same as setup.py, except it > import setuptools first so stuff like bdist_egg and bdist_mpkg should > work (assuming you have the py2app stuff installed). 
You can do the same > with scipy by importing setuptools first: > > $ python -c 'import setuptools; execfile("setup.py")' bdist_mpkg > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From openopt at ukr.net Mon May 21 16:05:51 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 21 May 2007 23:05:51 +0300 Subject: [SciPy-user] GSoC project related to numerical optimization Message-ID: <4651FB9F.2080301@ukr.net> Hallo! Excuse my bad English. I'm the 2nd of 2 students with scikits-related GSoC projects. Mine deals with numerical optimization. The GSoC starts on May 28, and I'm very busy for now with my exams and a commercial project from my optimization department (Institute of Cybernetics, Ukrainian Academy of Sciences), so I haven't done too much yet. So, could you take a look at some of my code and give your suggestions: http://www.box.net/shared/5pv4k1imdp (tar.bz2 file) (first of all it is aimed at my GSoC mentors) see example.py for details. Currently all those files are in a single directory, but later, of course, they will be structured in folders and connected to svn, when they are ready enough. Please pay attention to 1) format of assignment. Currently there are 3 prob classes: from OpenOpt import NLP, NSP, LP #NSP is NonSmooth Problem currently NLP is the same as NSP, but in the future they will differ in their default xtol, contol, funtol, diffInt (note that MATLAB fminsearch has defaults 1e-4, while fminunc/fmincon 1e-6.
however, my department uses 1e-6...1e-7 for the non-smooth optimizer ralg, so I haven't decided yet which ones will be the defaults). Assignment to NLP can be made in 3 equivalent ways: p = NLP(objFun, x0, kwargs) p = NLP(objFun, x0 = my_x0, kwargs) p = NLP(f = objFun, x0 = my_x0, other_kwargs) (same for NSP) for example x0 = [8,15,80] p = NLP(lambda x: sum(asarray(x)**2), x0, maxIter=100, diffInt = 1e-6) p = NLP(lambda x: sum(asarray(x)**2), x0=x0, maxIter=100, diffInt = 1e-6) p = NLP(f = lambda x: sum(asarray(x)**2), x0=x0, maxIter=100, diffInt = 1e-6) p = NLP(f = lambda x: sum(asarray(x)**2), maxIter=100, diffInt = 1e-6, x0=x0) (then: r = p.solve()) A very useful parameter is doPlot: p = NLP(lambda x: sum(asarray(x)**2), x0, doPlot = True) or the same p = NLP(lambda x: sum(asarray(x)**2), x0) p.doPlot = True As for LP, currently all I have done is the connection to lpsolve (sourceforge.net/projects/lpsolve) BTW I was very surprised that scipy hasn't any LP & QP solver (or maybe I'm wrong?). Of course, all of us, even taken together, can't compete with teams like CPLEX, XA, XPRESS, or even the free COIN-OR, glpk, etc. They have dozens of people with dozens of years of LP/QP/MILP experience. But in many cases any feasible solver is good enough. For example, it would be faster for someone to spend 10 seconds calculating his (maybe often not-big-scale) problem with a Python solver, than 2 hours searching the internet for and downloading other ones, commercial or not, that spend 3 seconds. So, I took a look at some recent papers devoted to LP solvers (2005 till now), and found 2 algorithms with rather precise descriptions. As for QP, I didn't look yet, but I think it could also be done (however, LP, QP, NLP is not my specialty, and here in the territory of the former USSR there are no good LP-QP algorithms and solvers; the only one I know is glpk, but it seems to be not very good).
In my GSoC tasks I have one related to an NLP solver, so it requires an LP (and optionally, but very desirably, QP) solver to be present. So, currently, I can use lp_solve, but the problem is still open, because lp_solve has an LGPL license. Waiting for your suggestions, with best regards, D. P.S. My server refuses to send letters to more than 2 recipients, but I have more mentors. Do you mind if I do my GSoC weekly reports on scipy-dev or another mailing list? Or if I create a blog, as I have been asked to, and publish weekly URLs to the new reports on the mailing lists? From matthieu.brucher at gmail.com Mon May 21 16:17:27 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 21 May 2007 22:17:27 +0200 Subject: [SciPy-user] GSoC project related to numerical optimization In-Reply-To: <4651FB9F.2080301@ukr.net> References: <4651FB9F.2080301@ukr.net> Message-ID: Hi, BTW I was very surprised that scipy hasn't any LP & QP solver (or maybe > I'm wrong?). Quadratic problems, like with leastsq? > In my GSoC tasks I have one related to an NLP solver, so it requires an LP > (and optionally, but very desirably, QP) solver to be present. > So, currently, I can use lp_solve, but the problem is still open, > because lp_solve has an LGPL license. > What exactly do you need in terms of LP or QP solvers? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon May 21 16:28:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 15:28:48 -0500 Subject: [SciPy-user] GSoC project related to numerical optimization In-Reply-To: <4651FB9F.2080301@ukr.net> References: <4651FB9F.2080301@ukr.net> Message-ID: <46520100.3090608@gmail.com> dmitrey wrote: > BTW I was very surprised that scipy hasn't any LP & QP solver (or maybe > I'm wrong?). You're not wrong. Mostly it's a matter of licenses. Most of the open source libraries I've seen for those classes of problems tend to be non-BSD.
> So, I took a look at some recent papers devoted to LP solvers (2005 till > now), and found 2 algorithms with rather precise descriptions. Excellent. I'd like to see those references if you get the chance. > P.S. My server refuses to send letters to more than 2 recipients, but I > have more mentors. Do you mind if I do my GSoC weekly reports on > scipy-dev or another mailing list? Or if I create a blog, as I have > been asked to, and publish weekly URLs to the new reports on the > mailing lists? I am happy for you to use the scipy lists if your mentors don't mind. scipy-dev is probably the more appropriate of the two, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon May 21 16:42:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 15:42:50 -0500 Subject: [SciPy-user] GSoC project related to numerical optimization In-Reply-To: References: <4651FB9F.2080301@ukr.net> Message-ID: <4652044A.8010206@gmail.com> Matthieu Brucher wrote: > Hi, > > BTW I was very surprised that scipy hasn't any LP & QP solver (or maybe > I'm wrong?). > > Quadratic problems, like with leastsq? Quadratic Programming, which is different: http://en.wikipedia.org/wiki/Quadratic_programming -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Mon May 21 16:56:42 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 15:56:42 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: Message-ID: Sorry to whine about this, but I am really stuck and my class starts tomorrow.
I was planning to require my students to use Python, but if I have any with AMD processors, I am in big trouble. I noticed this flying by on the cmd.exe window and wondered if it was part of my problem: building 'quadpack' library compiling Fortran sources Fortran f77 compiler: C:\MinGW\bin\g77.exe -g -Wall -fno-second-underscore -mno- cygwin -O3 -funroll-loops -mmmx -m3dnow -msse creating build\temp.win32-2.5\Lib\integrate\quadpack (it went by so fast I thought I saw sse2, now that I have it copied and pasted it doesn't look so bad - as long as an Athlon XP is sse). Basically, I have done nothing more than check out from svn, create a site.cfg file that contains [atlas] library_dirs = c:\path\to\BlasLapackLibs atlas_libs = lapack, f77blas, cblas, atlas and get ATLAS and LAPACK from here http://scipy.org/Installing_SciPy/Windows?action=AttachFile&do=get&target=atlas3.6.0_WinNT_P2.zip Am I missing anything obvious? I am using mingw for gcc and g77 and calling python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst Thanks, Ryan On 5/21/07, Ryan Krauss wrote: > I am trying to build from source on Windows to get a scipy that works > with older AMD processors. I am having some troubles. Everything > builds easily enough, but if I build from the source tarball, I get > lots of errors related to loadmat. 
If I build from svn, Python > crashes during scipy.test() > > >>> import scipy > >>> scipy.test() > Found 4 tests for scipy.io.array_import > Found 7 tests for scipy.cluster.vq > Found 128 tests for scipy.linalg.fblas > Found 399 tests for scipy.ndimage > Found 10 tests for scipy.integrate.quadpack > Found 98 tests for scipy.stats.stats > Found 53 tests for scipy.linalg.decomp > Found 3 tests for scipy.integrate.quadrature > Found 20 tests for scipy.fftpack.pseudo_diffs > Found 6 tests for scipy.optimize.optimize > Found 6 tests for scipy.linalg.iterative > Found 6 tests for scipy.interpolate.fitpack > Found 6 tests for scipy.interpolate > Found 12 tests for scipy.io.mmio > Found 1 tests for scipy.integrate > Found 4 tests for scipy.linalg.lapack > Found 18 tests for scipy.fftpack.basic > Found 4 tests for scipy.io.recaster > Found 4 tests for scipy.optimize.zeros > Warning: FAILURE importing tests for es\\scipy\\io\\mio.pyc'> > C:\Python25\Lib\site-packages\scipy\sparse\sparse.py:15: ImportError: cannot imp > ort name densetocsr (in ) > Found 41 tests for scipy.linalg.basic > Found 358 tests for scipy.special.basic > Found 128 tests for scipy.lib.blas.fblas > Found 7 tests for scipy.linalg.matfuncs > Found 42 tests for scipy.lib.lapack > Found 1 tests for scipy.optimize.cobyla > Found 16 tests for scipy.lib.blas > Found 10 tests for scipy.stats.morestats > Found 14 tests for scipy.linalg.blas > Found 70 tests for scipy.stats.distributions > Found 5 tests for scipy.odr > Found 10 tests for scipy.optimize.nonlin > Found 4 tests for scipy.fftpack.helper > Found 5 tests for scipy.io.npfile > Found 4 tests for scipy.signal.signaltools > Found 0 tests for __main__ > > Don't worry about a warning regarding the number of bytes read. > Warning: 1000000 bytes requested, 20 bytes read. 
> ..............caxpy:n=4
> ..caxpy:n=3
> ....ccopy:n=4
> ..ccopy:n=3
> .............cscal:n=4
> ....cswap:n=4
> ..cswap:n=3
> .....daxpy:n=4
> ..daxpy:n=3
> ....dcopy:n=4
> ..dcopy:n=3
> .............dscal:n=4
> ....dswap:n=4
> ..dswap:n=3
> .....saxpy:n=4
> ..saxpy:n=3
> ....scopy:n=4
> ..scopy:n=3
> .............sscal:n=4
> ....sswap:n=4
> ..sswap:n=3
> .....zaxpy:n=4
> ..zaxpy:n=3
> ....zcopy:n=4
> ..zcopy:n=3
> .............zscal:n=4
> ....zswap:n=4
> ..zswap:n=3
> ................................................................................
> .........................C:\Python25\lib\site-packages\scipy\ndimage\interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries.
> Please use "mirror" instead.
>   warnings.warn('Mode "reflect" may yield incorrect results on '
> ................................................................................
> .............E
>
> and it crashes at that point.
>
> Restarting Python and trying from scipy import * gives
> >>> import scipy
> >>> from scipy import *
> Traceback (most recent call last):
>   File "", line 1, in <module>
>   File "C:\Python25\Lib\site-packages\scipy\linsolve\__init__.py", line 5, in <module>
>     import umfpack
>   File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\__init__.py", line 3, in <module>
>     from umfpack import *
>   File "C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\umfpack.py", line 11, in <module>
>     import scipy.sparse as sp
>   File "C:\Python25\Lib\site-packages\scipy\sparse\__init__.py", line 5, in <module>
>     from sparse import *
>   File "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", line 15, in <module>
>     from scipy.sparse.sparsetools import densetocsr, csrtocsc, csrtodense, \
> ImportError: cannot import name densetocsr
>
> Can someone help me through this?
> > Thanks, > > Ryan > From robert.kern at gmail.com Mon May 21 17:12:52 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 16:12:52 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: Message-ID: <46520B54.4070105@gmail.com> Ryan Krauss wrote: > Sorry to whine about this, but I am really stuck and my class starts > tomorrow. I was planning to require my students to use Python, but if > I have any with AMD processors, I am in big trouble. > > I noticed this flying by on the cmd.exe window and wondered if it was > part of my problem: > > building 'quadpack' library > compiling Fortran sources > Fortran f77 compiler: C:\MinGW\bin\g77.exe -g -Wall -fno-second-underscore -mno- > cygwin -O3 -funroll-loops -mmmx -m3dnow -msse > creating build\temp.win32-2.5\Lib\integrate\quadpack That would have nothing to do with the error in the message you are responding to nor the previous problem you had. > (it went by so fast I thought I saw sse2, now that I have it copied > and pasted it doesn't look so bad - as long as an Athlon XP is sse). > > Basically, I have done nothing more than check out from svn, create a > site.cfg file that contains > [atlas] > library_dirs = c:\path\to\BlasLapackLibs > atlas_libs = lapack, f77blas, cblas, atlas > > and get ATLAS and LAPACK from here > http://scipy.org/Installing_SciPy/Windows?action=AttachFile&do=get&target=atlas3.6.0_WinNT_P2.zip Those are probably the libraries with the SSE2 instructions in them that was causing your problem. > Am I missing anything obvious? I am using mingw for gcc and g77 and calling > python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst A colleague of mine has had the same scipy.sparse problem as you, but we don't know what the problem is. Can you look in your scipy/sparse/sparsetools.py file? Is it the same as Lib/sparse/sparsetools/sparsetools.py? 
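For anyone reproducing this check, the comparison Robert asks for can be scripted with the standard library instead of eyeballing a diff. A small sketch (the two files below are hypothetical stand-ins for the installed and SVN copies of sparsetools.py; substitute the real paths):

```python
import filecmp
import os
import tempfile

# Hypothetical stand-ins for the installed copy and the SVN checkout copy
# of sparsetools.py; replace these with the real paths on your machine.
workdir = tempfile.mkdtemp()
installed = os.path.join(workdir, "installed_sparsetools.py")
checkout = os.path.join(workdir, "svn_sparsetools.py")
for path in (installed, checkout):
    with open(path, "w") as f:
        f.write("def densetocsr():\n    pass\n")

# shallow=False compares file contents byte for byte instead of os.stat() info.
identical = filecmp.cmp(installed, checkout, shallow=False)
print(identical)  # True when the two copies match exactly
```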
-- Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco
From ryanlists at gmail.com Mon May 21 17:59:02 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 21 May 2007 16:59:02 -0500
Subject: [SciPy-user] Issues building from source on windows (SVN)
In-Reply-To: <46520B54.4070105@gmail.com>
References: <46520B54.4070105@gmail.com>
Message-ID:
I think you are asking about the sparsetools in the svn directory and the one in C:\Python25; if so, I think the answer is that they are the same:

C:\Python25\Lib\site-packages\scipy\sparse>diff sparsetools.py C:\ryan\svn\scipy\Lib\sparse\sparsetools\sparsetools.py

C:\Python25\Lib\site-packages\scipy\sparse>

(I take that to mean no differences.)

I only have one sparsetools in C:\Python25, and it is in C:\Python25\Lib\site-packages\scipy\sparse, where the contents are

05/21/2007 01:27 PM <DIR> .
05/21/2007 01:27 PM <DIR> ..
05/21/2007 07:33 AM 1,650 info.py
05/21/2007 04:12 PM 1,763 info.pyc
05/21/2007 04:12 PM 1,763 info.pyo
05/21/2007 07:33 AM 1,359 setup.py
05/21/2007 04:12 PM 1,002 setup.pyc
05/21/2007 04:12 PM 1,002 setup.pyo
05/21/2007 07:33 AM 100,564 sparse.py
05/21/2007 04:12 PM 85,377 sparse.pyc
05/21/2007 04:12 PM 84,929 sparse.pyo
05/21/2007 07:33 AM 19,179 sparsetools.py
05/21/2007 04:12 PM 21,849 sparsetools.pyc
05/21/2007 04:12 PM 21,849 sparsetools.pyo
05/21/2007 01:27 PM <DIR> tests
05/21/2007 11:08 AM 499,967 _sparsetools.pyd
05/21/2007 07:33 AM 207 __init__.py
05/21/2007 04:12 PM 577 __init__.pyc
05/21/2007 04:12 PM 577 __init__.pyo

Ryan

On 5/21/07, Robert Kern wrote:
> Ryan Krauss wrote:
> > Sorry to whine about this, but I am really stuck and my class starts
> > tomorrow. I was planning to require my students to use Python, but if
> > I have any with AMD processors, I am in big trouble.
> > > > I noticed this flying by on the cmd.exe window and wondered if it was > > part of my problem: > > > > building 'quadpack' library > > compiling Fortran sources > > Fortran f77 compiler: C:\MinGW\bin\g77.exe -g -Wall -fno-second-underscore -mno- > > cygwin -O3 -funroll-loops -mmmx -m3dnow -msse > > creating build\temp.win32-2.5\Lib\integrate\quadpack > > That would have nothing to do with the error in the message you are responding > to nor the previous problem you had. > > > (it went by so fast I thought I saw sse2, now that I have it copied > > and pasted it doesn't look so bad - as long as an Athlon XP is sse). > > > > Basically, I have done nothing more than check out from svn, create a > > site.cfg file that contains > > [atlas] > > library_dirs = c:\path\to\BlasLapackLibs > > atlas_libs = lapack, f77blas, cblas, atlas > > > > and get ATLAS and LAPACK from here > > http://scipy.org/Installing_SciPy/Windows?action=AttachFile&do=get&target=atlas3.6.0_WinNT_P2.zip > > Those are probably the libraries with the SSE2 instructions in them that was > causing your problem. > > > Am I missing anything obvious? I am using mingw for gcc and g77 and calling > > python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst > > A colleague of mine has had the same scipy.sparse problem as you, but we don't > know what the problem is. Can you look in your scipy/sparse/sparsetools.py file? > Is it the same as Lib/sparse/sparsetools/sparsetools.py? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Mon May 21 18:08:13 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 17:08:13 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: <46520B54.4070105@gmail.com> Message-ID: I think I have made at least some progress. I can now do from scipy import * and calculate the determinant that got this whole problem started: >>> from scipy import * >>> matr = array([[1.1,1.9],[1.9,3.5]]) >>> linalg.det(matr) 0.23999999999999988 >>> import scipy >>> scipy.linalg.det(matr) 0.23999999999999988 I think the problem was that I linked against ATLAS for Pentium 2's instead of Pentium 3's. It was unclear which was best for this Athlon XP processor and I thought P2 would be safe either way. I think that was what kept me from being able to do from scipy import *. And I suspect it had to do with the sse flag being set when the fortran files were compiled using g77 - i.e. that the fortran files were compiled with flags that didn't correspond to the atlas and lapack libraries. However, scipy.test() still fails catastrophically at this point: ||A.x - b|| = 0.0067686763583 ||A.x - b|| = 9.02509125207e-005 ..F.....Result may be inaccurate, approximate err = 1.82697723188e-008 ...Result may be inaccurate, approximate err = 1.50259560743e-010 ................................................................................ ............................C:\Python25\lib\site-packages\scipy\ndimage\interpol ation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundari es. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ................................................................................ 
.............E Is there an easy way to turn tests on and off so that I can understand which modules fail and if I need to worry about them? Also, how does the Celeron processor fit in to all this? Are they SSE2 compatible? At least SSE compatible? I am slightly afraid I am going to get myself in trouble if my students can't get this working easily. Ryan On 5/21/07, Ryan Krauss wrote: > I think you are asking about the sparsetools in the svn directory and > the one in C:\Python25, if so, I think the answer is they are the > same: > > C:\Python25\Lib\site-packages\scipy\sparse>diff sparsetools.py C:\ryan\svn\scipy > \Lib\sparse\sparsetools\sparsetools.py > > C:\Python25\Lib\site-packages\scipy\sparse> > (I take that to mean no differences). > > I only have one sparetools in C:\Python25, and it is in > C:\Python25\Lib\site-packages\scipy\sparse > > where the contents are > 05/21/2007 01:27 PM . > 05/21/2007 01:27 PM .. > 05/21/2007 07:33 AM 1,650 info.py > 05/21/2007 04:12 PM 1,763 info.pyc > 05/21/2007 04:12 PM 1,763 info.pyo > 05/21/2007 07:33 AM 1,359 setup.py > 05/21/2007 04:12 PM 1,002 setup.pyc > 05/21/2007 04:12 PM 1,002 setup.pyo > 05/21/2007 07:33 AM 100,564 sparse.py > 05/21/2007 04:12 PM 85,377 sparse.pyc > 05/21/2007 04:12 PM 84,929 sparse.pyo > 05/21/2007 07:33 AM 19,179 sparsetools.py > 05/21/2007 04:12 PM 21,849 sparsetools.pyc > 05/21/2007 04:12 PM 21,849 sparsetools.pyo > 05/21/2007 01:27 PM tests > 05/21/2007 11:08 AM 499,967 _sparsetools.pyd > 05/21/2007 07:33 AM 207 __init__.py > 05/21/2007 04:12 PM 577 __init__.pyc > 05/21/2007 04:12 PM 577 __init__.pyo > > > Ryan > > > On 5/21/07, Robert Kern wrote: > > Ryan Krauss wrote: > > > Sorry to whine about this, but I am really stuck and my class starts > > > tomorrow. I was planning to require my students to use Python, but if > > > I have any with AMD processors, I am in big trouble. 
> > > > > > I noticed this flying by on the cmd.exe window and wondered if it was > > > part of my problem: > > > > > > building 'quadpack' library > > > compiling Fortran sources > > > Fortran f77 compiler: C:\MinGW\bin\g77.exe -g -Wall -fno-second-underscore -mno- > > > cygwin -O3 -funroll-loops -mmmx -m3dnow -msse > > > creating build\temp.win32-2.5\Lib\integrate\quadpack > > > > That would have nothing to do with the error in the message you are responding > > to nor the previous problem you had. > > > > > (it went by so fast I thought I saw sse2, now that I have it copied > > > and pasted it doesn't look so bad - as long as an Athlon XP is sse). > > > > > > Basically, I have done nothing more than check out from svn, create a > > > site.cfg file that contains > > > [atlas] > > > library_dirs = c:\path\to\BlasLapackLibs > > > atlas_libs = lapack, f77blas, cblas, atlas > > > > > > and get ATLAS and LAPACK from here > > > http://scipy.org/Installing_SciPy/Windows?action=AttachFile&do=get&target=atlas3.6.0_WinNT_P2.zip > > > > Those are probably the libraries with the SSE2 instructions in them that was > > causing your problem. > > > > > Am I missing anything obvious? I am using mingw for gcc and g77 and calling > > > python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst > > > > A colleague of mine has had the same scipy.sparse problem as you, but we don't > > know what the problem is. Can you look in your scipy/sparse/sparsetools.py file? > > Is it the same as Lib/sparse/sparsetools/sparsetools.py? > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless enigma > > that is made terrible by our own mad attempt to interpret it as though it had > > an underlying truth." 
> > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From robert.kern at gmail.com Mon May 21 18:11:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 17:11:44 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: <46520B54.4070105@gmail.com> Message-ID: <46521920.7010903@gmail.com> Ryan Krauss wrote: > However, scipy.test() still fails catastrophically at this point: > ||A.x - b|| = 0.0067686763583 > ||A.x - b|| = 9.02509125207e-005 > ..F.....Result may be inaccurate, approximate err = 1.82697723188e-008 > ...Result may be inaccurate, approximate err = 1.50259560743e-010 > ................................................................................ > ............................C:\Python25\lib\site-packages\scipy\ndimage\interpol > ation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundari > es. Please use "mirror" instead. > warnings.warn('Mode "reflect" may yield incorrect results on ' > ................................................................................ > .............E > > > Is there an easy way to turn tests on and off so that I can understand > which modules fail and if I need to worry about them? Use scipy.test(1, 2) and it will print the name of the test before running it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ryanlists at gmail.com Mon May 21 18:19:55 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 17:19:55 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <46521920.7010903@gmail.com> References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> Message-ID: scipy.test(1,2) gives: black tophat 1 ... ok black tophat 2 ... ok boundary modesC:\Python25\lib\site-packages\scipy\ndimage\interpolation.py:41: U serWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... 
ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... ok find_objects 8 ... ok find_objects 9 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 ... ERROR generic 1d filter 1 That isn't all of it, some of it scrolls beyond what cmd.exe allows me to go back to, but that is where it fails. Ryan On 5/21/07, Robert Kern wrote: > Ryan Krauss wrote: > > > However, scipy.test() still fails catastrophically at this point: > > ||A.x - b|| = 0.0067686763583 > > ||A.x - b|| = 9.02509125207e-005 > > ..F.....Result may be inaccurate, approximate err = 1.82697723188e-008 > > ...Result may be inaccurate, approximate err = 1.50259560743e-010 > > ................................................................................ 
> > ............................C:\Python25\lib\site-packages\scipy\ndimage\interpol > > ation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundari > > es. Please use "mirror" instead. > > warnings.warn('Mode "reflect" may yield incorrect results on ' > > ................................................................................ > > .............E > > > > > > Is there an easy way to turn tests on and off so that I can understand > > which modules fail and if I need to worry about them? > > Use scipy.test(1, 2) and it will print the name of the test before running it. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon May 21 18:25:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 17:25:41 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> Message-ID: <46521C65.3010506@gmail.com> Ryan Krauss wrote: > generic filter 1 ... 
ERROR > generic 1d filter 1 It looks like the problem is in scipy.ndimage: def test_generic_filter1d01(self): "generic 1d filter 1" weights = numpy.array([1.1, 2.2, 3.3]) def _filter_func(input, output, fltr, total): fltr = fltr / total for ii in range(input.shape[0] - 2): output[ii] = input[ii] * fltr[0] output[ii] += input[ii + 1] * fltr[1] output[ii] += input[ii + 2] * fltr[2] for type in self.types: a = numpy.arange(12, dtype = type) a.shape = (3,4) r1 = ndimage.correlate1d(a, weights / weights.sum(), 0, origin = -1) r2 = ndimage.generic_filter1d(a, _filter_func, 3, axis = 0, origin = -1, extra_arguments = (weights,), extra_keywords = {'total': weights.sum()}) self.failUnless(diff(r1, r2) < eps) If you don't need scipy.ndimage for your class, ignore the error. You can test the other package independently like so: >>> from numpy import NumpyTest >>> t = NumpyTest('scipy.ndimage') >>> t.test(1, 2) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gnchen at cortechs.net Mon May 21 21:30:15 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 21 May 2007 18:30:15 -0700 Subject: [SciPy-user] [SciPy-dev] (no subject) In-Reply-To: <20070521193401.GA3524@arbutus.physics.mcmaster.ca> References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> <20070521193401.GA3524@arbutus.physics.mcmaster.ca> Message-ID: <7652F73B-EE59-49B6-971C-1C9870B61786@cortechs.net> Thanks for all info. It worked... Gen On May 21, 2007, at 12:34 PM, David M. Cooke wrote: > On Mon, May 21, 2007 at 12:13:53PM -0700, Gennan Chen wrote: >> Hi! >> >> I need to build a distribution OS X package from source for numpy and >> scipy . But setup.py in both of them seems lacking that option. Can >> anyone tell me how to do that? 
> > In numpy, the setupegg.py script works the same as setup.py, except it > import setuptools first so stuff like bdist_egg and bdist_mpkg should > work (assuming you have the py2app stuff installed). You can do the > same > with scipy by importing setuptools first: > > $ python -c 'import setuptools; execfile("setup.py")' bdist_mpkg > > -- > |>|\/|< > /--------------------------------------------------------------------- > -----\ > |David M. Cooke http:// > arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Mon May 21 22:34:48 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 21:34:48 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <46521C65.3010506@gmail.com> References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> <46521C65.3010506@gmail.com> Message-ID: Is there an easy way to test every package except ndimage? Because it fails and completely crashes Python, I don't know any tests that run after ndimage also fail. On 5/21/07, Robert Kern wrote: > Ryan Krauss wrote: > > generic filter 1 ... 
ERROR > > generic 1d filter 1 > > It looks like the problem is in scipy.ndimage: > > def test_generic_filter1d01(self): > "generic 1d filter 1" > weights = numpy.array([1.1, 2.2, 3.3]) > def _filter_func(input, output, fltr, total): > fltr = fltr / total > for ii in range(input.shape[0] - 2): > output[ii] = input[ii] * fltr[0] > output[ii] += input[ii + 1] * fltr[1] > output[ii] += input[ii + 2] * fltr[2] > for type in self.types: > a = numpy.arange(12, dtype = type) > a.shape = (3,4) > r1 = ndimage.correlate1d(a, weights / weights.sum(), 0, > origin = -1) > r2 = ndimage.generic_filter1d(a, _filter_func, 3, > axis = 0, origin = -1, extra_arguments = (weights,), > extra_keywords = {'total': weights.sum()}) > self.failUnless(diff(r1, r2) < eps) > > > If you don't need scipy.ndimage for your class, ignore the error. You can test > the other package independently like so: > > >>> from numpy import NumpyTest > >>> t = NumpyTest('scipy.ndimage') > >>> t.test(1, 2) > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon May 21 22:37:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 21:37:46 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> <46521C65.3010506@gmail.com> Message-ID: <4652577A.20006@gmail.com> Ryan Krauss wrote: > Is there an easy way to test every package except ndimage? Because it > fails and completely crashes Python, I don't know any tests that run > after ndimage also fail. 
from numpy import NumpyTest packages = """ scipy.cluster scipy.fftpack scipy.integrate scipy.interpolate scipy.io scipy.lib scipy.linalg scipy.linsolve scipy.maxentropy scipy.misc scipy.odr scipy.optimize scipy.signal scipy.sparse scipy.special scipy.stats scipy.stsci scipy.weave """.strip().split() for subpkg in packages: print subpkg t = NumpyTest(subpkg) t.test(1, 2) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Mon May 21 23:00:25 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 May 2007 22:00:25 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <46521C65.3010506@gmail.com> References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> <46521C65.3010506@gmail.com> Message-ID: I've got failures in linalg and weave. I can live without weave, but linalg is pretty important (though I don't need all of its capabilities). 
Linalg: ====================================================================== FAIL: check_bicgstab (scipy.linalg.tests.test_iterative.test_iterative_solvers) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin e 61, in check_bicgstab assert norm(dot(self.A, x) - self.b) < 5*self.tol AssertionError ====================================================================== FAIL: check_qmr (scipy.linalg.tests.test_iterative.test_iterative_solvers) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin e 69, in check_qmr assert norm(dot(self.A, x) - self.b) < 5*self.tol AssertionError ---------------------------------------------------------------------- Ran 253 tests in 0.761s FAILED (failures=2) Out[18]: Weave: ====================================================================== FAIL: check_1d_3 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 168, in check_1d_3 self.generic_1d('a[-11:]') File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 135, in generic_1d self.generic_wrap(a,expr) File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 127, in generic_wrap self.generic_test(a,expr,desired) File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 123, in generic_test assert_array_equal(actual,desired, expr) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 223, in asse rt_array_equal verbose=verbose, header='Arrays are not equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 215, in asse 
rt_array_compare assert cond, msg AssertionError: Arrays are not equal a[-11:] (mismatch 100.0%) x: array([1]) y: array([10]) ====================================================================== FAIL: check_1d_6 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 174, in check_1d_6 self.generic_1d('a[:-11]') File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 135, in generic_1d self.generic_wrap(a,expr) File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 127, in generic_wrap self.generic_test(a,expr,desired) File "C:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", lin e 123, in generic_test assert_array_equal(actual,desired, expr) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 223, in asse rt_array_equal verbose=verbose, header='Arrays are not equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 215, in asse rt_array_compare assert cond, msg AssertionError: Arrays are not equal a[:-11] (mismatch 100.0%) x: array([9]) y: array([0]) ---------------------------------------------------------------------- Ran 132 tests in 3.044s FAILED (failures=2) Out[29]: Are the older processors not very well supported? I understand that this may not be an area where we want to spend tons of time and there may simply not be that many testers out there, but I am a little surprised by this. I use Scipy almost everyday and think of it as very reliable software. I am afraid a student with a non-SSE2 processor might get a very different impression. Ryan On 5/21/07, Robert Kern wrote: > Ryan Krauss wrote: > > generic filter 1 ... 
ERROR
> > generic 1d filter 1
>
> It looks like the problem is in scipy.ndimage:
>
>     def test_generic_filter1d01(self):
>         "generic 1d filter 1"
>         weights = numpy.array([1.1, 2.2, 3.3])
>         def _filter_func(input, output, fltr, total):
>             fltr = fltr / total
>             for ii in range(input.shape[0] - 2):
>                 output[ii] = input[ii] * fltr[0]
>                 output[ii] += input[ii + 1] * fltr[1]
>                 output[ii] += input[ii + 2] * fltr[2]
>         for type in self.types:
>             a = numpy.arange(12, dtype = type)
>             a.shape = (3,4)
>             r1 = ndimage.correlate1d(a, weights / weights.sum(), 0,
>                                      origin = -1)
>             r2 = ndimage.generic_filter1d(a, _filter_func, 3,
>                                           axis = 0, origin = -1,
>                                           extra_arguments = (weights,),
>                                           extra_keywords = {'total': weights.sum()})
>             self.failUnless(diff(r1, r2) < eps)
>
> If you don't need scipy.ndimage for your class, ignore the error. You can test
> the other packages independently like so:
>
> >>> from numpy import NumpyTest
> >>> t = NumpyTest('scipy.ndimage')
> >>> t.test(1, 2)
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> that is made terrible by our own mad attempt to interpret it as though it had
> an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Mon May 21 23:06:00 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 22:06:00 -0500
Subject: [SciPy-user] Issues building from source on windows (SVN)
In-Reply-To: References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> <46521C65.3010506@gmail.com>
Message-ID: <46525E18.7040100@gmail.com>

Ryan Krauss wrote:
> I've got failures in linalg and weave. I can live without weave, but
> linalg is pretty important (though I don't need all of its
> capabilities).
> > Linalg: > > ====================================================================== > FAIL: check_bicgstab (scipy.linalg.tests.test_iterative.test_iterative_solvers) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin > e 61, in check_bicgstab > assert norm(dot(self.A, x) - self.b) < 5*self.tol > AssertionError > > ====================================================================== > FAIL: check_qmr (scipy.linalg.tests.test_iterative.test_iterative_solvers) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin > e 69, in check_qmr > assert norm(dot(self.A, x) - self.b) < 5*self.tol > AssertionError > > ---------------------------------------------------------------------- > Ran 253 tests in 0.761s > > FAILED (failures=2) > Out[18]: If you don't use the iterative solvers, then there is no problem for you. > Are the older processors not very well supported? I don't know where the problem lies, so I couldn't say if this is the case. If you would like to help us with debugging on your machine after your class, we might be able to pin it down. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From ryanlists at gmail.com Mon May 21 23:11:10 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 21 May 2007 22:11:10 -0500
Subject: [SciPy-user] Issues building from source on windows (SVN)
In-Reply-To: <46525E18.7040100@gmail.com>
References: <46520B54.4070105@gmail.com> <46521920.7010903@gmail.com> <46521C65.3010506@gmail.com> <46525E18.7040100@gmail.com>
Message-ID:

OK, it sounds like the problems are in packages I don't use, so I will go forward with my little experiment and see how it goes. I will come up with a few simple tests related to things we are going to do in this robotics class, have a little install fest, and see what happens. I am willing to help, and may have drafted some new testers with all kinds of interesting problems by this time tomorrow. This is actually my wife's computer, so I don't know how excited she is to loan it to me too often. But if it makes Scipy better, I will help when I can.

Thanks,

Ryan

On 5/21/07, Robert Kern wrote:
> Ryan Krauss wrote:
> > I've got failures in linalg and weave. I can live without weave, but
> > linalg is pretty important (though I don't need all of its
> > capabilities).
> > > > Linalg: > > > > ====================================================================== > > FAIL: check_bicgstab (scipy.linalg.tests.test_iterative.test_iterative_solvers) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin > > e 61, in check_bicgstab > > assert norm(dot(self.A, x) - self.b) < 5*self.tol > > AssertionError > > > > ====================================================================== > > FAIL: check_qmr (scipy.linalg.tests.test_iterative.test_iterative_solvers) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "C:\Python25\Lib\site-packages\scipy\linalg\tests\test_iterative.py", lin > > e 69, in check_qmr > > assert norm(dot(self.A, x) - self.b) < 5*self.tol > > AssertionError > > > > ---------------------------------------------------------------------- > > Ran 253 tests in 0.761s > > > > FAILED (failures=2) > > Out[18]: > > If you don't use the iterative solvers, then there is no problem for you. > > > Are the older processors not very well supported? > > I don't know where the problem lies, so I couldn't say if this is the case. If > you would like to help us with debugging on your machine after your class, we > might be able to pin it down. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From openopt at ukr.net Tue May 22 01:40:46 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 22 May 2007 08:40:46 +0300 Subject: [SciPy-user] GSoC project related to numerical optimization In-Reply-To: <46520100.3090608@gmail.com> References: <4651FB9F.2080301@ukr.net> <46520100.3090608@gmail.com> Message-ID: <4652825E.6010707@ukr.net> Robert Kern wrote: > dmitrey wrote: > >> So, I took a look at some recent papers denoted to LP solvers (2005 till >> now), and found 2 algorithms with rather precise descriptions. >> > > Excellent. I'd like to see those references if you get the chance. > > take a look at LP1104.pdf & LP1116.pdf from http://www.box.net/shared/3jmi2hj8lk note that one of the 2 articles allows not only LP -> min (subjected to linear constraints) but nlp f(x) -> min (subjected to linear constraints) both these articles I found in 'LP' section. There are much more ones but only a small amount provide precisely written algorithms, and almost noone provides pseudocode. I would answer you earlier but it was time to go sleep here. Regards, D. >> P.S. My server refuses to send letters to more than 2 recepients, but I >> have more mentors. Don't you mind if I will do my GSoC weekly reports in >> scipy-dev or other mailing list? Or if I will create a blog that I have >> been asked and publish in mailing lists weekly URL's to new reports in >> the one? >> > > I am happy for you to use the scipy lists if your mentors don't mind. scipy-dev > is probably the more appropriate of the two, though. 
> > From david at ar.media.kyoto-u.ac.jp Tue May 22 04:09:03 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 17:09:03 +0900 Subject: [SciPy-user] Pymachine - PyMC interoperability In-Reply-To: <464DC504.7010503@cse.ucsc.edu> References: <464DC504.7010503@cse.ucsc.edu> Message-ID: <4652A51F.5020205@ar.media.kyoto-u.ac.jp> Anand Patil wrote: > David and co., > > In the nascent 2.0 release of PyMC, we're trying to separate declaration > of probability models from the classes that do the actual fitting. We're > wondering whether the PyMachine developers would be willing to keep to a > similar design for methods that are explicitly based on probability > models for the sake of interoperability. Hi PyMC developers, Interoperability is always good :) I have never really used PyMC though, and I don't know much about MCMC, though I may ask really stupid questions below. > > As PyMC's name implies, the fitting algorithms we focus on are under the > umbrella of MCMC, but there will be cases where our users prefer to fit > their model using another method instead or in addition. If Pymachine > also separates model declaration and fitting, and we're able to come up > with a declaration scheme that works for both of us, it should often be > feasible for users to code up a probability model and fit it using > either project with minimal rewrite. The advantages seem pretty clear: > we'd both give our users more methods to choose from while decreasing > our respective development and maintenance commitments. > > The common probability model 'declaration scheme' could be as thin as a > common syntax or as thick as a common set of basic classes representing > nodes or variables. Much of the effort we've devoted to the 2.0 release > so far has gone toward designing such basic classes, and we'd be happy > to write our solution up to seed the discussion. Or we could just > compare notes after you've had the chance to hammer out your own design. 
The basic goal of pymachine is to clean existing code, and provide high level interface, that is adding new models will be done at the end of the project, and only if everything else is finished. This means I probably won't have a lot of time to think about this particular problem for the SoC. Now, as the author of pyem (EM for mixture of Gaussian), and soon to be released variational based approximation for EM for Gaussian mixtures, I am very much interested in working together on those things (I see on PyMC's doc that I could have avoided the pain of implementing and debugging some function related to Wishart pdf :) ). From a user point of view, it would be interesting for example to be able to compare VBEM against MCMC for clustering, etc... The way pyem is designed for now is the following: 1 : Mixture classes. The idea is that Mixture classes are simples classes, which are mainly useful as a container of the model parameters (number of mixtures, etc...). They also provide basic facilities such as sampling and plot. 2 : Mixture Model classes. Those are built by passing Mixture instances, and implements non trivial things such as computing Sufficient Statistics, the log likelihood, things like BIC, etc... 3 : Classifier classes. Those are built by passing Mixture Models instances. The only implemented for now is the standard EM. I am not sure whether this is good design or not, because I only implemented one kind of classifier (EM), and mainly one kind of Mixture Model (finite Gaussian mixtures). Is there any tasks oriented examples for PyMC2 ? I quickly read through the code at code.google.com, but not knowing much about MCMC, I couldn't really understand the global scheme of PyMC's design... 
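The three-layer design described above (a Mixture class as a plain parameter container, a Mixture Model class that computes sufficient statistics and the log-likelihood, and a Classifier class that drives the fit) can be sketched roughly as follows. All class and method names here are illustrative, not pyem's actual API, and the sketch uses present-day numpy rather than the `from scipy import *` style of the period:

```python
import numpy as np

class GaussianMixture:
    """Layer 1: parameter container for a 1-D Gaussian mixture."""
    def __init__(self, weights, means, variances):
        self.weights = np.asarray(weights, dtype=float)
        self.means = np.asarray(means, dtype=float)
        self.variances = np.asarray(variances, dtype=float)

    def sample(self, n, rng=None):
        """Draw n points: pick a component, then sample from it."""
        rng = rng or np.random.default_rng(0)
        comps = rng.choice(len(self.weights), size=n, p=self.weights)
        return rng.normal(self.means[comps], np.sqrt(self.variances[comps]))

class MixtureModel:
    """Layer 2: wraps a mixture and knows how to score data."""
    def __init__(self, mixture):
        self.mix = mixture

    def responsibilities(self, x):
        """Per-point component responsibilities and total log-likelihood."""
        x = np.asarray(x, dtype=float)[:, None]
        dens = (self.mix.weights
                * np.exp(-(x - self.mix.means) ** 2 / (2 * self.mix.variances))
                / np.sqrt(2 * np.pi * self.mix.variances))
        total = dens.sum(axis=1, keepdims=True)
        return dens / total, np.log(total).sum()

class EMClassifier:
    """Layer 3: drives the fit by alternating E and M steps."""
    def __init__(self, model):
        self.model = model

    def fit(self, x, n_iter=50):
        x = np.asarray(x, dtype=float)
        mix = self.model.mix
        for _ in range(n_iter):
            resp, _ = self.model.responsibilities(x)   # E step
            nk = resp.sum(axis=0)                       # M step below
            mix.weights = nk / nk.sum()
            mix.means = (resp * x[:, None]).sum(axis=0) / nk
            mix.variances = (resp * (x[:, None] - mix.means) ** 2).sum(axis=0) / nk
        return mix
```

The point of the layering is that a different fitter (say, a variational one) could reuse the same `GaussianMixture` and `MixtureModel` untouched and only replace the top layer.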
David From millman at berkeley.edu Tue May 22 04:44:37 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 22 May 2007 01:44:37 -0700 Subject: [SciPy-user] Pymachine - PyMC interoperability In-Reply-To: <4652A51F.5020205@ar.media.kyoto-u.ac.jp> References: <464DC504.7010503@cse.ucsc.edu> <4652A51F.5020205@ar.media.kyoto-u.ac.jp> Message-ID: On 5/22/07, David Cournapeau wrote: > Anand Patil wrote: > > David and co., > > > > In the nascent 2.0 release of PyMC, we're trying to separate declaration > > of probability models from the classes that do the actual fitting. We're > > wondering whether the PyMachine developers would be willing to keep to a > > similar design for methods that are explicitly based on probability > > models for the sake of interoperability. > Hi PyMC developers, > > Interoperability is always good :) I have never really used PyMC [....] Hey Anand, I am mentoring David's SoC project and just want to agree with you both that it would be great to ensure interoperability. Would you be interested in integrating PyMC into Scipy and/or Scikits? Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From ziouboudou at yahoo.fr Tue May 22 04:47:34 2007 From: ziouboudou at yahoo.fr (=?iso-8859-1?q?HUA=20Minh-T=FFffffffffffe2m?=) Date: Tue, 22 May 2007 10:47:34 +0200 (CEST) Subject: [SciPy-user] Python crashes systematically with UnivariateSpline Message-ID: <22981.6862.qm@web26301.mail.ukl.yahoo.com> Hello everyone, I'm using Python in some physics experiments, but for the first time it doesn't work !!! Each time I type in the following : Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] Type "copyright", "credits" or "license" for more information. IPython 0.6.13 -- An enhanced Interactive Python. 
>from scipy.interpolate import UnivariateSpline >b=UnivariateSpline(range(10),range(10),k=3,s=0) and Python crashes, showing the typical Windows error message. I've tried with different values for k and different arrays but the result is always the same. Does someone know a solution? Thanks and best regards, Uni. _____________________________________________________________________________ Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail From nwagner at iam.uni-stuttgart.de Tue May 22 04:58:31 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 May 2007 10:58:31 +0200 Subject: [SciPy-user] Python crashes systematically with UnivariateSpline In-Reply-To: <22981.6862.qm@web26301.mail.ukl.yahoo.com> References: <22981.6862.qm@web26301.mail.ukl.yahoo.com> Message-ID: <4652B0B7.5010903@iam.uni-stuttgart.de> HUA Minh-T?ffffffffffe2m wrote: > Hello everyone, > > I'm using Python in some physics experiments, but > for the first time it doesn't work !!! > > Each time I type in the following : > > Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC > v.1310 32 bit (Intel)] > Type "copyright", "credits" or "license" for more > information. > > IPython 0.6.13 -- An enhanced Interactive Python. > > >from scipy.interpolate import UnivariateSpline > >b=UnivariateSpline(range(10),range(10),k=3,s=0) > > and Python crashes, showing the typical Windows > error message. > > I've tried with different values for k and > different arrays but the result is always the same. > Does someone know a solution? > Thanks and best regards, > > Uni. > > > _____________________________________________________________________________ > Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Works for me. 
>>> from scipy.interpolate import UnivariateSpline >>> b=UnivariateSpline(range(10),range(10),k=3,s=0) >>> b >>> import scipy >>> scipy.__version__ '0.5.3.dev3020' Nils From ziouboudou at yahoo.fr Tue May 22 05:03:27 2007 From: ziouboudou at yahoo.fr (=?iso-8859-1?q?HUA=20Minh-T=FFffffffffffe2m?=) Date: Tue, 22 May 2007 11:03:27 +0200 (CEST) Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline In-Reply-To: <4652B0B7.5010903@iam.uni-stuttgart.de> Message-ID: <335996.46200.qm@web26310.mail.ukl.yahoo.com> Hello, thanks for the quick answer. I personally have the following version of scipy : scipy.__version__ '0.5.2' Best regards, Uni. _____________________________________________________________________________ Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail From fredmfp at gmail.com Tue May 22 08:52:15 2007 From: fredmfp at gmail.com (fred) Date: Tue, 22 May 2007 14:52:15 +0200 Subject: [SciPy-user] f2py with i686 vs x86_64 arch... Message-ID: <4652E77F.90705@gmail.com> > Hi, > >I got a (strange ?) problem using f2py on x86_64 arch (linux box) > >The short code attached works fine on i686 arch, but gives bad results >on x86_64 >(each are debian etch). Hi, Nobody has any clues ? I think I need some help on this issue... Cheers, PS : scipy 0.5.2, numpy 1.0.2, python 2.4.4 -- http://scipy.org/FredericPetit From jerome-r.colin at laposte.net Tue May 22 08:51:30 2007 From: jerome-r.colin at laposte.net (J.Colin) Date: Tue, 22 May 2007 12:51:30 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?RE_=3A_Re=3A_Python_crashes_systematically?= =?utf-8?q?_with=09UnivariateSpline?= References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com> Message-ID: HUA Minh-T?ffffffffffe2m yahoo.fr> writes: > > Hello, > > thanks for the quick answer. > I personally have the following version of scipy : > scipy.__version__ > '0.5.2' > > Best regards, > Uni. 
Hello,
I get the same crashes with scipy.interpolate.splprep (python.exe crashes and generates an error report). I initially thought my code was wrong, but the result is identical with code from the scipy cookbook. I use scipy 0.5.2 on Windows XP SP2.

Jerome

From matthieu.brucher at gmail.com Tue May 22 09:01:38 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 May 2007 15:01:38 +0200
Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline
In-Reply-To: References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com>
Message-ID:

Hi,

What is your processor? There is an issue with some installers on processors that do not support SSE2.

Matthieu

2007/5/22, J. Colin :
> HUA Minh-Tâm yahoo.fr> writes:
> > Hello,
> >
> > thanks for the quick answer.
> > I personally have the following version of scipy :
> > scipy.__version__
> > '0.5.2'
> >
> > Best regards,
> > Uni.
>
> Hello,
> I get the same crashes with scipy.interpolate.splprep (python.exe crashes and
> generates an error report). I initially thought my code was wrong, but the result
> is identical with code from the scipy cookbook.
> I use scipy 0.5.2 on windows XP SP2.
>
> Jerome
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
From pearu at cens.ioc.ee Tue May 22 09:12:50 2007
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 22 May 2007 15:12:50 +0200
Subject: [SciPy-user] f2py with i686 vs x86_64 arch...
In-Reply-To: <4652E77F.90705@gmail.com>
References: <4652E77F.90705@gmail.com>
Message-ID: <4652EC52.6030801@cens.ioc.ee>

Hi,

I don't see how this problem is related to f2py, or to numpy in general. The results you get are wrong at the C or Fortran level. E.g. using

#include <stdio.h>

float foo(float x) {
    printf("foo(x=%f)\n", x);
    return(x*x);
}

float bar(float x) {
    printf("bar(x=%f)\n", x);
    return(x*x*x);
}

float funcs_(int *i, float *x) {
    float (*pfunc[])(float) = {foo, bar};
    float result = (*pfunc[*i])(*x);
    printf("result=%f\n", result);
    return result;
}

I have in Python

>>> import foo
>>> foo.foo(2)
foo(x=2.000000)
result=4.000000
 0 2. 0.
bar(x=2.000000)
result=8.000000
 1 2. 0.

The wrong result 0. is printed in Fortran. So I would say that this is a g77 compiler issue, since when using gfortran, I get correct results:

f2py -c foo.f funcs.c -m foo --fcompiler=gnu95

>>> import foo
>>> foo.foo(2)
foo(x=2.000000)
result=4.000000
 0 2.000000 4.000000
bar(x=2.000000)
result=8.000000
 1 2.000000 8.000000

Regards,
Pearu

fred wrote:
> > Hi,
> >
> > I got a (strange ?) problem using f2py on x86_64 arch (linux box)
> >
> > The short code attached works fine on i686 arch, but gives bad results
> > on x86_64 (each are debian etch).
>
> Hi,
>
> Nobody has any clues ?
>
> I think I need some help on this issue...
> > > Cheers, > > PS : scipy 0.5.2, numpy 1.0.2, python 2.4.4 > From jerome-r.colin at laposte.net Tue May 22 09:29:33 2007 From: jerome-r.colin at laposte.net (J.Colin) Date: Tue, 22 May 2007 13:29:33 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?RE_=3A_Re=3A_Python_crashes_systematically?= =?utf-8?q?_with=09UnivariateSpline?= References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com> Message-ID: Matthieu Brucher gmail.com> writes: > > > Hi,What is your processor ?There is an issue with some installers with processors that does not support SSE2.Matthieu > An AMD Duron 850MHz. Jerome From gvenezian at yahoo.com Tue May 22 09:43:30 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Tue, 22 May 2007 06:43:30 -0700 (PDT) Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <4652577A.20006@gmail.com> Message-ID: <995824.800.qm@web51007.mail.re2.yahoo.com> excuse me if i keep barging into this conversation. Ryan: Do I gather from all this that you have succeeded in constructing an AMD-compatible version of SciPy? Would you please share what you've learned? As I mentioned before I'm trying to get the functions special.jn_zeros and special.jnp_zeros to work on my AMD-based machine. Other than using the self-installing version, I don't understand how one goes about constructing a version of SciPy. I'd appreciate it very much if you would email me whatever instructions you prepare for your students. Giulio --- Robert Kern wrote: > Ryan Krauss wrote: > > Is there an easy way to test every package except > ndimage? Because it > > fails and completely crashes Python, I don't know > any tests that run > > after ndimage also fail. 
> > from numpy import NumpyTest > > packages = """ > scipy.cluster > scipy.fftpack > scipy.integrate > scipy.interpolate > scipy.io > scipy.lib > scipy.linalg > scipy.linsolve > scipy.maxentropy > scipy.misc > scipy.odr > scipy.optimize > scipy.signal > scipy.sparse > scipy.special > scipy.stats > scipy.stsci > scipy.weave > """.strip().split() > > for subpkg in packages: > print subpkg > t = NumpyTest(subpkg) > t.test(1, 2) > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ Sucker-punch spam with award-winning protection. Try the free Yahoo! Mail Beta. http://advision.webevents.yahoo.com/mailbeta/features_spam.html From matthieu.brucher at gmail.com Tue May 22 10:07:48 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 22 May 2007 16:07:48 +0200 Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline In-Reply-To: References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com> Message-ID: That must be the explaination. You used the Enthought packages ? Matthieu 2007/5/22, J. Colin : > > Matthieu Brucher gmail.com> writes: > > > > > > > Hi,What is your processor ?There is an issue with some installers with > processors that does not support SSE2.Matthieu > > > An AMD Duron 850MHz. > > Jerome > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
From fredmfp at gmail.com Tue May 22 10:13:55 2007
From: fredmfp at gmail.com (fred)
Date: Tue, 22 May 2007 16:13:55 +0200
Subject: [SciPy-user] f2py with i686 vs x86_64 arch...
In-Reply-To: <4652EC52.6030801@cens.ioc.ee>
References: <4652E77F.90705@gmail.com> <4652EC52.6030801@cens.ioc.ee>
Message-ID: <4652FAA3.5010809@gmail.com>

Pearu Peterson a écrit :
> The wrong result 0. is printed in Fortran. So I would say that this is a
> g77 compiler issue, since when using gfortran, I get correct results:
>
> f2py -c foo.f funcs.c -m foo --fcompiler=gnu95

_This_ is the clue I was looking for !! Thanks a lot ;-)

Cheers,

--
http://scipy.org/FredericPetit

From ziouboudou at yahoo.fr Tue May 22 10:14:56 2007
From: ziouboudou at yahoo.fr (Minh-Tâm)
Date: Tue, 22 May 2007 16:14:56 +0200
Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline
References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com>
Message-ID: <008b01c79c7b$93de0520$eb942286@Tatam>

Hi,

here we have an AMD Athlon XP 2200+. Is this one also a problem for Scipy?

MT

----- Original Message -----
From: "J.Colin"
To:
Sent: Tuesday, May 22, 2007 3:29 PM
Subject: Re: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline

> Matthieu Brucher gmail.com> writes:
>> Hi,
>> What is your processor? There is an issue with some installers with
>> processors that do not support SSE2.
>> Matthieu
>
> An AMD Duron 850MHz.
> > Jerome > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jerome-r.colin at laposte.net Tue May 22 10:21:41 2007 From: jerome-r.colin at laposte.net (J.Colin) Date: Tue, 22 May 2007 14:21:41 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?RE_=3A_Re=3A_Python_crashes_systematically?= =?utf-8?q?_with=09UnivariateSpline?= References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com> Message-ID: Matthieu Brucher gmail.com> writes: > > > That must be the explaination. You used the Enthought packages ?Matthieu > > 2007/5/22, J. Colin laposte.net>: > Matthieu Brucher gmail.com > > writes:>>> Hi,What is your processor ?There is an issue with some installers withprocessors that does not support SSE2.Matthieu>An AMD Duron 850MHz.Jerome_______________________________________________ > SciPy-user mailing listSciPy-user scipy.orghttp://projects.scipy.org/mailman/listinfo/scipy-user > I installed scipy from the 'scipy-0.5.2.win32-py2.5.exe' (scipy.org download link), and python from the binary available at python.org. Is there a solution (other than changing my laptop :-) ? Jerome From ryanlists at gmail.com Tue May 22 11:36:08 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 22 May 2007 10:36:08 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <995824.800.qm@web51007.mail.re2.yahoo.com> References: <4652577A.20006@gmail.com> <995824.800.qm@web51007.mail.re2.yahoo.com> Message-ID: I am going to give my students an executable that is built for P3's and AMD's if the one from scipy.org for SSE2 chips doesn't work for them. I mainly followed the instructions from here http://scipy.org/Installing_SciPy/Windows ignoring the section about Intel MKL stuff. So basically I did 5 things: 1. Check out Numpy and Scipy from SVN (download tortoisesvn http://tortoisesvn.tigris.org/) 2. 
Set up a basic MinGW compiler system. (I find this to be more painful than the MinGW people seem to think it is. You need to download almost all of the bin packages they provide and unpack them into a folder on your path, so that you can type "gcc -v" in a cmd.exe window and get some sensible response. You need to make sure you get gcc-core, gcc-g77, mingw-runtime, and w32-api. The mingw-runtime contains things like stdio.h.)

3. Download the precompiled ATLAS binaries (follow the link on the Installing_SciPy/Windows page).

4. Create a site.cfg file that contains:

[atlas]
library_dirs = c:\path\to\BlasLapackLibs
atlas_libs = lapack, f77blas, cblas, atlas

5. Type

c:\path\to\python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst

at a cmd.exe prompt.

The exe file created isn't perfect; there are some test failures, but I think it meets my needs. If your email can handle 2 meg attachments, I can send you my exe file to try. I don't know if you need both the numpy and the scipy one, or if my scipy will work with your numpy.

Ryan

On 5/22/07, Giulio Venezian wrote:
> excuse me if i keep barging into this conversation.
>
> Ryan: Do I gather from all this that you have
> succeeded in constructing an AMD-compatible version of
> SciPy?
>
> Would you please share what you've learned? As I
> mentioned before I'm trying to get the functions
> special.jn_zeros and special.jnp_zeros to work on my
> AMD-based machine. Other than using the
> self-installing version, I don't understand how one
> goes about constructing a version of SciPy.
>
> I'd appreciate it very much if you would email me
> whatever instructions you prepare for your students.
>
> Giulio
> --- Robert Kern wrote:
> > Ryan Krauss wrote:
> > > Is there an easy way to test every package except
> > > ndimage? Because it fails and completely crashes Python, I don't know
> > > any tests that run after ndimage also fail.
> > > > from numpy import NumpyTest > > > > packages = """ > > scipy.cluster > > scipy.fftpack > > scipy.integrate > > scipy.interpolate > > scipy.io > > scipy.lib > > scipy.linalg > > scipy.linsolve > > scipy.maxentropy > > scipy.misc > > scipy.odr > > scipy.optimize > > scipy.signal > > scipy.sparse > > scipy.special > > scipy.stats > > scipy.stsci > > scipy.weave > > """.strip().split() > > > > for subpkg in packages: > > print subpkg > > t = NumpyTest(subpkg) > > t.test(1, 2) > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an > > enigma, a harmless enigma > > that is made terrible by our own mad attempt to > > interpret it as though it had > > an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > ____________________________________________________________________________________ > Sucker-punch spam with award-winning protection. > Try the free Yahoo! Mail Beta. 
> http://advision.webevents.yahoo.com/mailbeta/features_spam.html
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From jerome-r.colin at laposte.net Tue May 22 11:37:47 2007
From: jerome-r.colin at laposte.net (J.Colin)
Date: Tue, 22 May 2007 15:37:47 +0000 (UTC)
Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline
References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com>
Message-ID:

I have made another test on three different PCs with the same script, "Interpolation of an N-D curve", from the scipy cookbook (see http://www.scipy.org/Cookbook/Interpolation):

- AMD Duron 850MHz, winXP Pro SP2, Python 2.5, numpy 1.0.2, scipy 0.5.2 : python crashes
- Intel Core Duo 2,33GHz, winXP Pro SP2, Python 2.5, numpy 1.0.2, scipy 0.5.2 : works fine
- Intel Pentium 2, Fedora Core 4, Python 2.4.3, numpy 0.9.5, scipy 0.4.8 : works fine

I hope this information is useful. I may perform additional tests if suggested.

Regards,
J.Colin

From wolobah at aims.ac.za Tue May 22 18:46:45 2007
From: wolobah at aims.ac.za (Wolobah B Sali)
Date: Wed, 23 May 2007 00:46:45 +0200
Subject: [SciPy-user] Digital Terrain Model in geophysics
Message-ID: <465372D5.3000809@aims.ac.za>

Hi all,

I would appreciate your help with the following problem about a digital terrain model. The task is to sum the contributions of rectangular prisms using Plouff's formula; this method was described by Talwani and Ewing. The program is a three-dimensional geophysical program: a plane flies at constant height above the earth and measures gravitational gradients. The LaTeX code of the formula is below.

The dimensions of the prism are given by the limits $(x_1\leq x\leq x_2,\; y_1\leq y\leq y_2,\; z_1\leq z\leq z_2)$.
The equation has been modified and is given by:

\begin{equation}
g_{zz}=\gamma\rho s_{m}\sum_{i=1}^{M}\left[\arctan\frac{z_{+}d_{i}}{PR_{i,i+1}}-\arctan\frac{z_{+}d_{i+1}}{PR_{i+1,i+1}}-\left( \arctan\frac{z_{-}d_{i}}{PR_{i,i}}-\arctan\frac{z_{-}d_{i+1}}{PR_{i+1,i}}\right) \right]
\end{equation}

The parameter z_{+} is the upper boundary and z_{-} is the lower boundary of the polygon in space. The other parameters are defined below:

\begin{equation*}
\Delta s=\sqrt{(\Delta x)^{2}+(\Delta y)^{2}},\quad S=\frac{\Delta x}{\Delta s},\quad C=\frac{\Delta y}{\Delta s},\quad d_{k}=x_{k}S+y_{k}C,
\end{equation*}

\begin{equation*}
P=x_{k}C+y_{k}S,\quad R_{jk}=\sqrt{r_{k}^{2}+z_{j}^{2}},\quad r_{k}=\sqrt{x_{k}^{2}+y_{k}^{2}},
\end{equation*}

\begin{equation*}
\Delta y=y_{i+1}-y_{i},\quad \Delta x=x_{i+1}-x_{i}.
\end{equation*}

Thanks
sali

From robert.kern at gmail.com Tue May 22 18:51:28 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 22 May 2007 17:51:28 -0500
Subject: [SciPy-user] Digital Terrain Model in geophysics
In-Reply-To: <465372D5.3000809@aims.ac.za>
References: <465372D5.3000809@aims.ac.za>
Message-ID: <465373F0.3040003@gmail.com>

Wolobah B Sali wrote:
> Hi all,
> I will appreciate your help concerning the following problem about
> Digital terrain model.

I'm sorry, but this is not an appropriate forum for getting help on homework.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

From wolobah at aims.ac.za Tue May 22 19:29:26 2007
From: wolobah at aims.ac.za (Wolobah B Sali)
Date: Wed, 23 May 2007 01:29:26 +0200
Subject: [SciPy-user] Digital Terrain Model in geophysics
In-Reply-To: <465373F0.3040003@gmail.com>
References: <465372D5.3000809@aims.ac.za> <465373F0.3040003@gmail.com>
Message-ID: <46537CD6.2080700@aims.ac.za>

Robert Kern wrote:
> Wolobah B Sali wrote:
>> Hi all,
>> I will appreciate your help concerning the following problem about
>> Digital terrain model.
>
> I'm sorry, but this is not an appropriate forum for getting help on homework.
>

Hi Robert,
This is my code for the program, but my result does not correspond to the analytical solution that I have worked out. If you run the program you will see where the analytical and the numerical solutions diverge. Your help in getting them to converge would be greatly appreciated.

########################################################################
from __future__ import division
from scipy import *
import MA
import Gnuplot

gp = Gnuplot.Gnuplot(persist = 1)
sM = 1
gamma = 1
rho = 1

# polygon vertices
filename = raw_input("Enter filename with DTM data: ")
#the name of the data is "square.dat"
data = io.array_import.read_array(filename)
xP = data[1:,0]  # a row matrix ([ 0.,-1., 0., 1., 0.,])
yP = data[1:,1]  # a row matrix ([ 1., 0.,-1., 0., 1.,])
zT = data[0,0]   # it is a single number (0.0)
zB = data[0,1]   # it is a single number (-0.5)
zP = array([zT, zB])  # a list of two elements ([ 0., -0.5,])
N = len(xP)
x = 0
y = 0
z = 0
obsPoint = array([x, y, z])

def compute_gzz(obsPoint):
    x = xP - obsPoint[0]  # Since x, y and z are zero this is the same as above
    y = yP - obsPoint[1]
    z = zP - obsPoint[2]
    #dx values [-1., 1., 1.,-1.,-1.,]
    dx = concatenate((x[1:] - x[0:-1], [x[1] - x[0]]))
    #dy values [-1., -1., 1., 1.,-1.,]
    dy = concatenate((y[1:] - y[0:-1], [y[1] - y[0]]))
    ds = sqrt(dx**2 + dy**2)
    r = sqrt(x**2 + y**2)
    #R values [[ 1., 1., 1., 1., 1.,]
    # [ 1.11803399, 1.11803399, 1.11803399, 1.11803399, 1.11803399,]]
    R = sqrt(r**2 + reshape(zP, (2,1))**2)
    KK = (x * dx + y * dy)/ds  # this is dk
    # numer values [ 6.36396103,-0.70710678,-7.77817459,-0.70710678,6.36396103,]
    PP = (x * dy - y * dx)/ds  # this is Our p his M
    # denom values [0.70710678, 7.77817459, 0.70710678,-6.36396103,0.70710678,]
    p = KK[0:-1]  # dk without the last element = dki his p
    # P1 = numer[1:]
    p1 = x[0:-1] * dx[0:-1] + y[1:] * dy[1:]/ds[1:]  # let dk+1
    #p1 = KK[1:]
    M = PP[0:-1]  # p without the last element = Our pi
    # Ri,i+1   = R[0, 1:]
    # Ri+1,i   = R[1, 0:-1]
    # Ri+1,i+1 = R[1, 1:]
    # Ri,i     = R[0, 0:-1]
    AA = (z[1]*p)/(M*R[0, 1:])
    BB = (z[0]*p1)/(M*R[1, 1:])
    CC = (z[1]*p)/(M*R[1, 1:])
    DD = (z[0]*p1)/(M*R[0, 1:])
    #EE = M**2*R[0, 1:]*R[1, 1:]
    #FF = M**2*R[1, 0:-1]*R[0, 0:-1]
    arg01 = (AA-BB)/(1+AA*BB)
    arg02 = (CC-DD)/(1+CC*DD)
    add0 = logical_and(arg01>0, (-arg02)>0)*pi + logical_and(arg01<0, (-arg02)<0)*(-pi)
    argument1 = arctan(arg01) - arctan(arg02)
    gzz = gamma * rho * sM * (sum(argument1)/len(argument1))  #+sum(add0))
    return gzz

print "gzz=", compute_gzz([5,5,10])

def func2d(y0, x0, xx, yy, zz, z0):
    return 1 / ((xx - x0)**2 + (yy - y0)**2 + (zz - z0)**2)**(3/2)

def hfunc(x):
    return 1 - abs(x)

def gfunc(x):
    return -1 + abs(x)

def xy_integral(xxP, xxM, xx, yy, zz, z0):
    q = integrate.dblquad(func2d, xxM, xxP, gfunc, hfunc, args=(xx, yy, zz, z0))
    return q[0]

def compute_ana_gzz2(xx, yy, zz, xxP, yyP, zzP, xM, yM, zM):
    gzz = (zz - zzP) * xy_integral(xxP, xM, xx, yy, zz, zzP) - (zz - zM) * xy_integral(xxP, xM, xx, yy, zz, zM)
    return gzz

print "integral=", compute_ana_gzz2(5,5,10,1,1,0,-1,-1,-0.5)

Plane_limitXmin = -2
Plane_limitXmax = 2
Plane_limitYmin = -2
Plane_limitYmax = 2
PlaneHeight = 1
z = PlaneHeight
delta_x = 0.1
delta_y = 0.1
Xrng = arange(Plane_limitXmin, Plane_limitXmax + delta_x/2, delta_x)
Yrng = arange(Plane_limitYmin, Plane_limitYmax + delta_y/2, delta_y)
xlen = len(Xrng)
ylen = len(Yrng)
gzz = zeros((xlen, ylen), Float)
i = 0
while i < xlen:
    j = 0
    x = Xrng[i]
    while j < ylen:
        y = Yrng[j]
        obsPoint = array([x, y, z])
        #gzz[i,j] = compute_gzz(obsPoint)
        gzz[i,j] = compute_ana_gzz2(x,y,z,1.0,1,0,-1.0,-1,-0.5)
        j += 1
    i += 1

gp.set_range('xrange',(Plane_limitXmin, Plane_limitXmax))
gp.set_range('yrange',(Plane_limitYmin, Plane_limitYmax))
#gp.set_range('zrange',(0, 0.1))
gp('set hidden')
gp('set ticslevel 0')
gp('set view 50, 30')
gp('set surface')
gp('set contour surface')
gp('set cntrparam levels auto 20')
gp('set key off')
gp('set xlabel "x"')
gp('set ylabel "y"')
gp('set zlabel "g_zz"')
gp('set size square')
gzzdata = Gnuplot.GridData(gzz, xvals = Xrng, yvals = Yrng, with = "lines")
polygondata = Gnuplot.Data(xP, yP, zT*ones((N,),Float), with = 'filledcurve')
gp.splot(gzzdata)
#gp('unset hidden')
gp('set terminal x11 2')
gp('set view 0, 0')
gp('unset surface')
gp('set contour base')
#gp('set cntrparam levels incremental 0, 0.005, 0.1')
gp('set cntrparam levels auto 20')
gp('set key off')
gp.splot(gzzdata)
gp('set terminal x11 3')
gp('set xlabel "x"')
gp('set ylabel "y"')
gp('set surface')
gp('unset contour ')
gp('unset zlabel')
gp('unset axis z')
gp('unset border 16')
gp.set_range('zrange',(0, 0.01))
gp.splot(polygondata)
#gp.hardcopy(filename="triangle9.eps",terminal="postscript")
############################################################################################

Thanks

From ryanlists at gmail.com Tue May 22 21:35:32 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 22 May 2007 20:35:32 -0500
Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline
In-Reply-To: 
References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com>
Message-ID: 

I am trying to post my exe to my school webpage, but it is 11 megs and will likely lead to me exceeding my disk quota (it just did, I can't post it). I have run the N-D interp on my wife's laptop with an AMD Athlon processor using the scipy I built from source.
Building from source isn't hard once you get MinGW correctly and completely installed. You don't have to build ATLAS or LAPACK from source, just link to them.

I gave an overview of what I did to get it mostly working in this thread:
http://permalink.gmane.org/gmane.comp.python.scientific.user/11922

I mainly followed these instructions:
http://scipy.org/Installing_SciPy/Windows

Let me know if you need more details.

Ryan

On 5/22/07, J. Colin wrote:
> Matthieu Brucher gmail.com> writes:
> >
> > That must be the explanation. You used the Enthought packages ? Matthieu
> >
> > 2007/5/22, J. Colin laposte.net>:
> > Matthieu Brucher gmail.com writes:
> > >>> Hi, What is your processor ? There is an issue with some installers
> > with processors that do not support SSE2. Matthieu
> > > An AMD Duron 850MHz. Jerome
> > _______________________________________________
> > SciPy-user mailing list SciPy-user scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> I installed scipy from the 'scipy-0.5.2.win32-py2.5.exe' (scipy.org download
> link), and python from the binary available at python.org.
>
> Is there a solution (other than changing my laptop :-) ?
>
> Jerome
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From oliphant.travis at ieee.org Wed May 23 02:09:44 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 23 May 2007 00:09:44 -0600
Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week
Message-ID: <4653DAA8.20708@ieee.org>

I'd like to tag the tree and make a NumPy 1.0.4 release tomorrow. Is there anything that needs to be done before that can happen?

I'd also like to get another release of SciPy out the door as soon as possible. At the end of this week or early next week.
I've added a bunch of 1-d classic spline interpolation algorithms (not the fast B-spline versions but the basic flexible algorithms of order 2, 3, 4, and 5 which have different boundary conditions). There are several ways to get at them in scipy.interpolate. I've also added my own "smoothest" algorithm which specifies the extra degrees of freedom available in spline formulations by making the resulting Kth-order spline have the smallest Kth-order derivative discontinuity. I've never seen this described by anyone, but I'm sure somebody must have discussed it before. If anybody who knows something about splines recalls seeing something like this, I would appreciate a reference. This specification actually allows very decent looking quadratic (2nd-order) splines.

Best regards,

-Travis

From lbolla at gmail.com Wed May 23 06:04:13 2007
From: lbolla at gmail.com (lorenzo bolla)
Date: Wed, 23 May 2007 12:04:13 +0200
Subject: [SciPy-user] zeros of a function
Message-ID: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>

Hi all!
I'm working with scipy.optimize.fsolve to find the zeros of a function.
I'm wondering: is there an easy way to find ALL the zeros of a function (within a given interval), instead of just ONE?
With fsolve, an initial guess value x0 must be given, and I can try many different x0s to "scan" the function in the interval: however, this does not assure me that I'm not missing any zero.
How would you solve this problem?
Thank you in advance,
Lorenzo.
-------------- next part --------------
An HTML attachment was scrubbed...
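The "scan, then refine" strategy raised in this thread can be made concrete by sampling the interval on a grid and refining every sign change with a bracketed solver. This is only a sketch, not code from the thread: the helper name `find_all_zeros` and the grid density are invented for the example, and any root that does not produce a sign change between adjacent samples is still missed; scipy.optimize.brentq is the bracketed root finder assumed here.

```python
import numpy as np
from scipy.optimize import brentq

def find_all_zeros(f, a, b, n=1000):
    """Scan [a, b] on an n-point grid and refine every sign change.

    Roots that fall between grid points without producing a sign
    change (e.g. tangencies) are missed: the grid density is the
    only completeness guarantee this approach offers.
    """
    xs = np.linspace(a, b, n)
    fs = f(xs)
    roots = []
    for i in range(n - 1):
        if fs[i] == 0.0:                 # landed exactly on a root
            roots.append(xs[i])
        elif fs[i] * fs[i + 1] < 0:      # sign change brackets a root
            roots.append(brentq(f, xs[i], xs[i + 1]))
    if fs[-1] == 0.0:
        roots.append(xs[-1])
    return roots
```

For a smooth function such as numpy.sin on [0.5, 10] this recovers pi, 2*pi, and 3*pi; tangential roots, where the function touches zero without crossing, will still be missed, which is exactly the caveat raised in the replies.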
URL: 

From emanuelez at gmail.com Wed May 23 06:20:55 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Wed, 23 May 2007 12:20:55 +0200
Subject: [SciPy-user] zeros of a function
In-Reply-To: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
Message-ID: 

Hello my friend :)
I don't think this is directly scipy/numpy related, but it might give you some ideas. I believe that an elegant way to find all the zeros of a function might be using interval arithmetic. I'm not sure there exists a library to do that easily in Python, but I did it in Matlab a few months ago and it worked nicely. Actually in my case I was looking for minima/maxima, but I believe the same concepts might be reapplied. One neat feature is that you can actually use the machine precision as a stopping criterion when looking for the solutions.
Oh well, just my two cents :)
Ciao!

Emanuele

On 5/23/07, lorenzo bolla wrote:
>
> Hi all!
> I'm working with scipy.optimize.fsolve to find the zeros of a function.
> I'm wondering: is there an easy way to find ALL the zeros of a function
> (within a given interval), instead of just ONE?
> With fsolve, an initial guess value x0 must be given, and I can try many
> different x0s to "scan" the function in the interval: however, this does not
> assure me that I'm not missing any zero.
> How would you solve this problem?
> Thank you in advance,
> Lorenzo.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fredmfp at gmail.com Wed May 23 06:42:19 2007
From: fredmfp at gmail.com (fred)
Date: Wed, 23 May 2007 12:42:19 +0200
Subject: [SciPy-user] zeros of a function
In-Reply-To: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
Message-ID: <46541A8B.3010502@gmail.com>

lorenzo bolla a écrit :
> Hi all!
> I'm working with scipy.optimize.fsolve to find the zeros of a function.
> I'm wondering: is there an easy way to find ALL the zeros of a
> function (within a given interval), instead of just ONE?
> With fsolve, an initial guess value x0 must be given, and I can try
> many different x0s to "scan" the function in the interval: however,
> this does not assure me that I'm not missing any zero.
> How would you solve this problem?

It depends on your function.

I wrote some stuff for the spherical Bessel function and its derivative, using an iterative method. Thus, I'm sure I find all the zeros.

-- 
http://scipy.org/FredericPetit

From hasslerjc at comcast.net Wed May 23 07:26:25 2007
From: hasslerjc at comcast.net (John Hassler)
Date: Wed, 23 May 2007 07:26:25 -0400
Subject: [SciPy-user] zeros of a function
In-Reply-To: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com>
Message-ID: <465424E1.9090806@comcast.net>

An HTML attachment was scrubbed...
URL: 

From hasslerjc at comcast.net Wed May 23 07:37:09 2007
From: hasslerjc at comcast.net (John Hassler)
Date: Wed, 23 May 2007 07:37:09 -0400
Subject: [SciPy-user] RE : Re: Python crashes systematically with UnivariateSpline
In-Reply-To: 
References: <4652B0B7.5010903@iam.uni-stuttgart.de> <335996.46200.qm@web26310.mail.ukl.yahoo.com>
Message-ID: <46542765.7070001@comcast.net>

This is very similar to what I did a month ago for my Athlon 1600+. I've been away, or I might have been able to help a little.
I also found a couple of problems that you didn't mention. I try to keep the C: drive for the OS, and put programs in "Program Files" on the D: drive. MinGW was extremely unhappy with this, and gave me endless problems until I finally gave up and let it install itself where it wanted to go - its own directory on C:. I have since found out that it does not like directory names with spaces, whether quoted or not. This, apparently, is a well known problem, to those who know it well. (Apologies to Prof. Hirshbach.)

I also found that on my computer, I had to use double backslashes (escaped backslashes) in my path to Atlas and friends. This didn't seem to be a problem for anyone else.

My scipy.test() crashed in the same place that yours did. I figured out how to run the separate tests by themselves, and assured myself that what I needed (optimize and ode) worked, so I didn't pursue it further. I have some more time now, so I'll take another look at what runs and what doesn't.

john

Ryan Krauss wrote:
> I am trying to post my exe to my school webpage, but it is 11 megs and
> will likely lead to me exceeding my disk quota (it just did, I can't
> post it). I have run the N-D interp on my wife's laptop with an AMD
> Athlon processor using the scipy I built from source. Building from
> source isn't hard once you get MinGW correctly and completely
> installed. You don't have to build ATLAS or LAPACK from source, just
> link to them.
>
> I gave an overview of what I did to get it mostly working in this thread:
> http://permalink.gmane.org/gmane.comp.python.scientific.user/11922
>
> I mainly followed these instructions:
> http://scipy.org/Installing_SciPy/Windows
>
> Let me know if you need more details.
>
> Ryan
>
>

From bernhard.voigt at gmail.com Wed May 23 08:17:52 2007
From: bernhard.voigt at gmail.com (Bernhard Voigt)
Date: Wed, 23 May 2007 14:17:52 +0200
Subject: [SciPy-user] fmin_bfgs doesn't terminate
Message-ID: <21a270aa0705230517y65060316j3e7a1eda6647b882@mail.gmail.com>

Dear all!

I'm using scipy.optimize.fmin_bfgs to perform a log likelihood fit. Unfortunately it doesn't terminate in many cases. I guess the reason is an endless loop in linesearch.py:

    while 1:
        stp, fval, derphi, task = minpack2.dcsrch(alpha1, phi0, derphi0, c1, c2,
                                                  xtol, task, amin, amax, isave, dsave)
        if task[:2] == 'FG':
            alpha1 = stp
            fval = f(xk+stp*pk, *args)
            fc += 1
            gval = fprime(xk+stp*pk, *newargs)
            if gradient:
                gc += 1
            else:
                fc += len(xk) + 1
            phi0 = fval
            derphi0 = numpy.dot(gval, pk)
        else:
            break

This only terminates when the task is set to something different than 'FG'. However, it can happen that this takes forever; this could be related to some problems determining the gradient of my llh function and/or the initial seed. Anyway, wouldn't it be useful to insert a hard limit for that loop and raise an Exception?

Thanks! Bernhard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gvenezian at yahoo.com Wed May 23 09:00:09 2007
From: gvenezian at yahoo.com (Giulio Venezian)
Date: Wed, 23 May 2007 06:00:09 -0700 (PDT)
Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week
In-Reply-To: <4653DAA8.20708@ieee.org>
Message-ID: <801588.87930.qm@web51010.mail.re2.yahoo.com>

Do these new versions address in any way the system crashes that people with AMD chips are experiencing? I have been unable to use jn_zeros and jnp_zeros, and Ryan Krauss has been struggling with some matrix operations.

Giulio Venezian

--- Travis Oliphant wrote:
>
> I'd like to tag the tree and make a NumPy 1.0.4
> release tomorrow. Is
> there anything that needs to be done before that can
> happen?
> > I'd also like to get another release of SciPy out > the door as soon as > possible. At the end of this week or early next > week. > > I've added a bunch of 1-d classic spline > interpolation algorithms (not > the fast B-spline versions but the basic flexible > algorithms of order 2, > 3, 4, and 5 which have different boundary > conditions). There are > several ways to get at them in scipy.interpolate. > I've also added my > own "smoothest" algorithm which specifies the extra > degrees of freedom > available in spline formulations by making the > resulting Kth-order > spline have the smallest Kth-order derivative > discontinuity. I've > never seen this described by anyone but I'm sure > somebody must have done > discussed it before. If anybody knows something > about splines who > recalls seeing something like this, I would > appreciate a reference. > This specification actually allows very decent > looking quadratic > (2nd-order) splines. > > > Best regards, > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ TV dinner still cooling? Check out "Tonight's Picks" on Yahoo! TV. http://tv.yahoo.com/ From gvenezian at yahoo.com Wed May 23 09:04:33 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Wed, 23 May 2007 06:04:33 -0700 (PDT) Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: Message-ID: <121808.34196.qm@web51001.mail.re2.yahoo.com> I think my email can handle the 2 meg. Before you send me the exe file though, could you try a little experiment and see whether your version can handle jn_zeros and jnp_zeros? 
The code would be something like this:

from scipy import special
z = special.jn_zeros(3,5)
print z

Giulio

--- Ryan Krauss wrote:
> I am going to give my students an executable that is built for P3's
> and AMD's if the one from scipy.org for SSE2 chips doesn't work for
> them. I mainly followed the instructions from here
> http://scipy.org/Installing_SciPy/Windows
> ignoring the section about Intel MKL stuff. So basically I did 5 things:
> 1. Check out Numpy and Scipy from SVN (download tortoisesvn
> http://tortoisesvn.tigris.org/)
> 2. Setup a basic MinGW compiler system (I find this to be more painful
> than the MinGW people seem to think it is. You need to download
> almost all of the bin packages they provide and unpack them into a
> folder on your path so that you can type "gcc -v" in a cmd.exe window
> and get some sensible response. You need to make sure you get
> gcc-core, gcc-g77, mingw-runtime, and w32-api. The mingw-runtime
> contains things like stdio.h.)
> 3. Download the precompiled ATLAS binaries (follow link on
> Installing_Scipy/Windows page)
> 4. Create a site.cfg file that contains
> [atlas]
> library_dirs = c:\path\to\BlasLapackLibs
> atlas_libs = lapack, f77blas, cblas, atlas
> 5. type c:\path\to\python.exe setup.py config --compiler=mingw32 build
> --compiler=mingw32 bdist_wininst
> at a cmd.exe prompt.
>
> The exe file created isn't perfect, there are some test failures. But
> I think it meets my needs. If your email can handle 2 meg
> attachments, I can send you my exe file to try. I don't know if you
> need the numpy and the scipy one or if my scipy will work with your
> numpy.
>
> Ryan
>
> On 5/22/07, Giulio Venezian wrote:
> > excuse me if I keep barging into this conversation.
> >
> > Ryan: Do I gather from all this that you have
> > succeeded in constructing an AMD-compatible version of
> > SciPy?
> >
> > Would you please share what you've learned?
As I > > mentioned before I'm trying to get the functions > > special.jn_zeros and special.jnp_zeros to work on > my > > AMD-based machine. Other than using the > > self-installing version, I don't understand how > one > > goes about constructing a version of SciPy. > > > > I'd appreciate it very much if you would email me > > whatever instructions you prepare for your > students. > > > > Giulio > > --- Robert Kern wrote: > > > > > Ryan Krauss wrote: > > > > Is there an easy way to test every package > except > > > ndimage? Because it > > > > fails and completely crashes Python, I don't > know > > > any tests that run > > > > after ndimage also fail. > > > > > > from numpy import NumpyTest > > > > > > packages = """ > > > scipy.cluster > > > scipy.fftpack > > > scipy.integrate > > > scipy.interpolate > > > scipy.io > > > scipy.lib > > > scipy.linalg > > > scipy.linsolve > > > scipy.maxentropy > > > scipy.misc > > > scipy.odr > > > scipy.optimize > > > scipy.signal > > > scipy.sparse > > > scipy.special > > > scipy.stats > > > scipy.stsci > > > scipy.weave > > > """.strip().split() > > > > > > for subpkg in packages: > > > print subpkg > > > t = NumpyTest(subpkg) > > > t.test(1, 2) > > > > > > -- > > > Robert Kern > > > > > > "I have come to believe that the whole world is > an > > > enigma, a harmless enigma > > > that is made terrible by our own mad attempt to > > > interpret it as though it had > > > an underlying truth." > > > -- Umberto Eco > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > ____________________________________________________________________________________ > > Sucker-punch spam with award-winning protection. > > Try the free Yahoo! Mail Beta. 
> > > http://advision.webevents.yahoo.com/mailbeta/features_spam.html > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________ Finding fabulous fares is fun. Let Yahoo! FareChase search your favorite travel sites to find flight and hotel bargains. http://farechase.yahoo.com/promo-generic-14795097 From matthieu.brucher at gmail.com Wed May 23 09:16:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 23 May 2007 15:16:23 +0200 Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week In-Reply-To: <801588.87930.qm@web51010.mail.re2.yahoo.com> References: <4653DAA8.20708@ieee.org> <801588.87930.qm@web51010.mail.re2.yahoo.com> Message-ID: That's a packaging problem, no ? If the correct libs are packaged with scipy, there should not be a problem, shouldn't it ? Matthieu 2007/5/23, Giulio Venezian : > > Do these new versions address in any way the system > crashes that people with AMD chips are experiencing? I > have been unable to use jn_zeros and jnp_zeros, and > Ryan Krauss has been struggling with some matrix > operations. > > Giulio Venezian > --- Travis Oliphant wrote: > > > > > I'd like to tag the tree and make a NumPy 1.0.4 > > release tomorrow. Is > > there anything that needs to be done before that can > > happen? > > > > I'd also like to get another release of SciPy out > > the door as soon as > > possible. At the end of this week or early next > > week. 
> > > > I've added a bunch of 1-d classic spline > > interpolation algorithms (not > > the fast B-spline versions but the basic flexible > > algorithms of order 2, > > 3, 4, and 5 which have different boundary > > conditions). There are > > several ways to get at them in scipy.interpolate. > > I've also added my > > own "smoothest" algorithm which specifies the extra > > degrees of freedom > > available in spline formulations by making the > > resulting Kth-order > > spline have the smallest Kth-order derivative > > discontinuity. I've > > never seen this described by anyone but I'm sure > > somebody must have done > > discussed it before. If anybody knows something > > about splines who > > recalls seeing something like this, I would > > appreciate a reference. > > This specification actually allows very decent > > looking quadratic > > (2nd-order) splines. > > > > > > Best regards, > > > > -Travis > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > ____________________________________________________________________________________ > TV dinner still cooling? > Check out "Tonight's Picks" on Yahoo! TV. > http://tv.yahoo.com/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Wed May 23 09:21:42 2007 From: hasslerjc at comcast.net (John Hassler) Date: Wed, 23 May 2007 09:21:42 -0400 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <121808.34196.qm@web51001.mail.re2.yahoo.com> References: <121808.34196.qm@web51001.mail.re2.yahoo.com> Message-ID: <46543FE6.1070406@comcast.net> My build of scipy for Python 2.5 (which is probably very similar to Ryan's) crashes on special.jn_zeros. 
It does work on an _older_ version of scipy with Python 2.4:

Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32
>>> scipy.__version__
'0.5.1'
>>> numpy.__version__
'1.0rc2'
IDLE 1.1.4
>>> from scipy import special
>>> z=special.jn_zeros(3,5)
>>> print z
[ 6.3801619 9.76102313 13.01520072 16.22346616 19.40941523]

john

Giulio Venezian wrote:
> I think my email can handle the 2 meg. Before you send
> me the exe file though, could you try a little
> experiment and see whether your version can handle
> jn_zeros and jnp_zeros? The code would be something
> like this:
> from scipy import special
> z=special.jn_zeros(3,5)
> print z
>
> Giulio
>
>

From fredmfp at gmail.com Wed May 23 09:50:40 2007
From: fredmfp at gmail.com (fred)
Date: Wed, 23 May 2007 15:50:40 +0200
Subject: [SciPy-user] f2py and ifort flags...
Message-ID: <465446B0.3070207@gmail.com>

Hi,

1) To compile my fortran code, I use the command line:

ifort call_func.f -limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib funcs.o -o call_func_f

which works fine.

2) I want to use the same trick with f2py. So, from the f2py manpage, my Makefile looks like:

IFCVER = 9.1.045
PATH = /bin:/usr/bin:/usr/local/share/intel/fc/$(IFCVER)/bin
LD_LIBRARY_PATH_INTEL = /usr/local/share/intel/fc/$(IFCVER)/lib
LD_LIBRARY_PATH = /usr/local/lib:${LD_LIBRARY_PATH_INTEL}
LDFLAGS = -limf -Wl,--rpath -Wl,$(LD_LIBRARY_PATH_INTEL)
OPTIMIZATION = -O3 -xT -ipo

all: funcs.o calc_KM.so calc_KV.so calc_output.so

funcs.o: funcs.c
	gcc -Wall funcs.c -c -o funcs.o

calc_KM.so: calc_KM.f funcs.o
	f2py --fcompiler=intel --opt="$(OPTIMIZATION)" --f90flags="$(LDFLAGS)" -c calc_KM.f funcs.o -m calc_KM

.../...

but this has no effect, i.e. the Intel libs are not linked. What am I doing wrong ?
Cheers,

-- 
http://scipy.org/FredericPetit

From peridot.faceted at gmail.com Wed May 23 10:07:15 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 23 May 2007 10:07:15 -0400
Subject: [SciPy-user] zeros of a function
In-Reply-To: <465424E1.9090806@comcast.net>
References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net>
Message-ID: 

On 23/05/07, John Hassler wrote:

> I don't know of any way that is guaranteed to find _all_ of the zeros of an
> arbitrary function. Think of a parabola, for example. A very tiny change
> in coefficients can shift the problem from no roots, to a pair of equal
> roots, to a pair of distinct roots. Even a direct search would be
> problematical, since there might be no axis crossing to help you.
>
> If you know something of the properties of the function, analytical
> mathematics might help. If it were my problem, I would try to find out as
> much as I could of the properties of my function before I started. Other
> than that, your method of 'scanning' by using different starting points is
> as good as any, but there is no way to assure that you have found _all_ of
> the zeros without bringing in some other mathematics.

Let me elaborate on this:

If you have a vector-valued function, that is, if you are trying to find simultaneous zeros of several functions, it is going to be very difficult to find even one zero, let alone be confident that you have found all of them. See Numerical Recipes ( http://www.nrbook.com/b/bookcpdf.php ) for a discussion of why this is so difficult.

If you have a scalar function of vector inputs, you probably have infinitely many zeros (lying on hypersurfaces), and if you don't, the equation is guaranteed to be a numerical nightmare.

If you have a scalar function of scalar inputs, it's not hard to close in on a zero once you're sure you've found one (by having a bracket, one point with a negative value and the other with a positive value).
In general it is indeed difficult (and perhaps impossible) to be certain you have found all the zeros. If you have analytic derivatives, you can use, for example, an upper bound on the second derivative to come up with a minimum distance to the nearest root; a method based on this idea could probably be written to confidently find all zeros (though it will probably be slow and laborious). If your function is a solution to a second-order linear differential equation (as many special functions are), there are tricks you can do based on Sturm theory (using for example the fact that between every two zeros of one solution there must be a zero of the other).

If you have a (univariate) polynomial, you can use polynomial factorization to find all its complex roots; numpy's poly1d class does just that. You have to be very careful how you represent your polynomial (if it is nontrivial) or your roots will be obliterated by roundoff error (and I think numpy.poly1d takes a naive approach to this).

There is one case where you can actually find all the zeros of a system of equations. If they are multivariate polynomials, Grobner basis methods will give you an explicit description of all the zeros, and will let you do all kinds of other things besides. I don't know what their roundoff behaviour is like - almost certainly poor, they are normally used with exact arithmetic - and they're exponential in the number of variables, but they do work.

Anne

From william.ratcliff at gmail.com Wed May 23 10:24:44 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Wed, 23 May 2007 10:24:44 -0400
Subject: [SciPy-user] zeros of a function
In-Reply-To: 
References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net>
Message-ID: <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com>

What about re-expressing the problem as a nonlinear optimization problem where you're fitting a function to a vector of zeros?
Then, use your favorite global optimizer to try to find the minima of the "chi^2" of this function. The issue here is that for some functions, even a global optimizer could run into problems... William On 5/23/07, Anne Archibald wrote: > > On 23/05/07, John Hassler wrote: > > > I don't know of any way that is guaranteed to find _all_ of the zeros > of an > > arbitrary function. Think of a parabola, for example. A very tiny > change > > in coefficients can shift the problem from no roots, to a pair of equal > > roots, to a pair of distinct roots. Even a direct search would be > > problematical, since there might be no axis crossing to help you. > > > > If you know something of the properties of the function, analytical > > mathematics might help. If it were my problem, I would try to find out > as > > much as I could of the properties of my function before I > started. Other > > than that, your method of 'scanning' by using different starting points > is > > as good as any, but there is no way to assure that you have found _all_ > of > > the zeros without bringing in some other mathematics. > > Let me elaborate on this: > > If you have a vector-valued function, that is if you are trying to > find simultaneous zeros of several functions, it is going to be very > difficult to find even one zero, let alone be confident that you have > found all of them. See Numerical Recipes ( > http://www.nrbook.com/b/bookcpdf.php ) for a discussion of why this > is.so difficult. > > If you have a scalar function of vector inputs, you probably have > infinitely many zeros (lying on hypersurfaces), and if you don't the > equation is guaranteed to be a numerical nightmare. > > If you have a scalar function of scalar inputs, it's not hard to close > in on a zero once you're sure you've found one (by having a bracket, > one point with a negative value and the other with a positive value). 
> In general it is indeed difficult (and perhaps impossible) to be > certain you have found all the zeros. If you have analytic > derivatives, you can use, for example, an upper bound on the second > derivative to come up with a minimum distance to the nearest root; a > method based on this idea could probably be written to confidently > find all zeros (though it will probably be slow and laborious). If > your function is a solution to a second-order linear differential > equation (as many special functions are), there are tricks you can do > based on Sturm theory (using for example the fact that between every > two zeros of one solution there must be a zero of the other). > > If you have a (univariate) polynomial, you can use polynomial > factorization to find all its complex roots; numpy's poly1d class does > just that. You have to be very careful how you represent your > polynomial (if it is nontrivial) or your roots will be obliterated by > roundoff error (and I think numpy.poly1d takes a naive approach to > this). > > There is one case where you can actually find all the zeros of a > system of equations. If they are multivariate polynomials, Grobner > basis methods will give you an explicit description of all the zeros, > and will let you do all kinds of other things besides. I don't know > what their roundoff behaviour is like - almost certainly poor, they > are normally used with exact arithmetic - and they're exponential in > the number of variables, but they do work. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
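On the univariate-polynomial case quoted above: numpy really does hand back all the complex roots at once. A quick sketch (the cubic is an arbitrary example, not from the thread):

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
p = np.poly1d([1, -6, 11, -6])

# roots come from the eigenvalues of the companion matrix;
# the ordering is not guaranteed, hence the sort
print(np.sort(p.roots))  # close to [1, 2, 3]
```

As Anne warns, for high-degree or ill-conditioned polynomials the coefficient representation itself destroys accuracy (Wilkinson's polynomial is the classic example), so treat the results with suspicion in those cases.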
URL: From peridot.faceted at gmail.com Wed May 23 10:34:56 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 23 May 2007 10:34:56 -0400 Subject: [SciPy-user] zeros of a function In-Reply-To: <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> Message-ID: On 23/05/07, william ratcliff wrote: > What about re-expressing the problem as a nonlinear optimization problem > where you're fitting a function function to a vector of zeros? Then, use > your favorite global optimizer to try to find the minima of the "chi^2" of > this function. The issue here is that for some functions, even a global > optimizer could run into problems... Global optimization is *hard*. If you make a scalar function by (say) computing the squared length of a vector you want to find a zero of, you're almost certain to wind up with zillions of spurious local minima. They talk about this in Numerical Recipes too. Anne From ryanlists at gmail.com Wed May 23 10:35:33 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 May 2007 09:35:33 -0500 Subject: [SciPy-user] Issues building from source on windows (SVN) In-Reply-To: <121808.34196.qm@web51001.mail.re2.yahoo.com> References: <121808.34196.qm@web51001.mail.re2.yahoo.com> Message-ID: I saved your 3 lines to a script and here is the output: In [5]: run special.py [ 6.3801619 9.76102313 13.01520072 16.22346616 19.40941523] No crashes. Ryan On 5/23/07, Giulio Venezian wrote: > I think my email can handle the 2 meg. Before you send > me the exe file though, could you try a little > experiment and see whether your version can handle > jn_zeros and jnp_zeros? 
The code would be something > like this: > from scipy import special > z=special.jn_zeros(3,5) > print z > > Giulio > --- Ryan Krauss wrote: > > > I am going to give my students an executable that is > > built for P3's > > and AMD's if the one from scipy.org for SSE2 chips > > doesn't work for > > them. I mainly followed the instructions from here > > http://scipy.org/Installing_SciPy/Windows > > ignoring the section about Intel MKL stuff. So > > basically I did 5 things: > > 1. Check out Numpy and Scipy from SVN (download > > tortoisesvn > > http://tortoisesvn.tigris.org/) > > 2. Setup a basic MinGW compiler system (I find this > > to be more painful > > than the MinGW people seem to think it is. You need > > to download > > almost all of the bin packages they provide and > > unpack them into a > > folder on your path so that you can type "gcc -v" in > > a cmd.exe window > > and get some sensible response. You need to make > > sure you get > > gcc-core, gcc-g77, mingw-runtime, and w32-api. The > > mingw-runtime > > contains things like stdio.h.) > > 3. Download the precomiled ATLAS binaries (follow > > link on > > Installing_Scipy/Windows parge) > > 4. Create a site.cfg file that contains > > [atlas] > > library_dirs = c:\path\to\BlasLapackLibs > > atlas_libs = lapack, f77blas, cblas, atlas > > 5. type c:\path\to\python.exe setup.py config > > --compiler=mingw32 build > > --compiler=mingw32 bdist_wininst > > at a cmd.exe prompt. > > > > The exe file created isn't perfect, there are some > > test failures. But > > I think it meets my needs. If your email can handle > > 2 meg > > attachments, I can send you my exe file to try. I > > don't know if you > > need the numpy and the scipy one or if my scipy will > > work with your > > numpy. > > > > Ryan > > > > On 5/22/07, Giulio Venezian > > wrote: > > > excuse me if i keep barging into this > > conversation. 
> > > > > > Ryan: Do I gather from all this that you have > > > succeeded in constructing an AMD-compatible > > version of > > > SciPy? > > > > > > Would you please share what you've learned? As I > > > mentioned before I'm trying to get the functions > > > special.jn_zeros and special.jnp_zeros to work on > > my > > > AMD-based machine. Other than using the > > > self-installing version, I don't understand how > > one > > > goes about constructing a version of SciPy. > > > > > > I'd appreciate it very much if you would email me > > > whatever instructions you prepare for your > > students. > > > > > > Giulio > > > --- Robert Kern wrote: > > > > > > > Ryan Krauss wrote: > > > > > Is there an easy way to test every package > > except > > > > ndimage? Because it > > > > > fails and completely crashes Python, I don't > > know > > > > any tests that run > > > > > after ndimage also fail. > > > > > > > > from numpy import NumpyTest > > > > > > > > packages = """ > > > > scipy.cluster > > > > scipy.fftpack > > > > scipy.integrate > > > > scipy.interpolate > > > > scipy.io > > > > scipy.lib > > > > scipy.linalg > > > > scipy.linsolve > > > > scipy.maxentropy > > > > scipy.misc > > > > scipy.odr > > > > scipy.optimize > > > > scipy.signal > > > > scipy.sparse > > > > scipy.special > > > > scipy.stats > > > > scipy.stsci > > > > scipy.weave > > > > """.strip().split() > > > > > > > > for subpkg in packages: > > > > print subpkg > > > > t = NumpyTest(subpkg) > > > > t.test(1, 2) > > > > > > > > -- > > > > Robert Kern > > > > > > > > "I have come to believe that the whole world is > > an > > > > enigma, a harmless enigma > > > > that is made terrible by our own mad attempt to > > > > interpret it as though it had > > > > an underlying truth." 
> > > > -- Umberto Eco From ryanlists at gmail.com Wed May 23 10:41:02 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 May 2007 09:41:02 -0500 Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week In-Reply-To: References: <4653DAA8.20708@ieee.org> <801588.87930.qm@web51010.mail.re2.yahoo.com> Message-ID: The problems I have run into are only in Scipy and not Numpy as far as I know. Numpy passes all of its tests; Scipy crashes on several of its. I think it is a problem with correct Atlas and Lapack versions for processors that don't support SSE2. But building from source and linking to the correct version of Atlas didn't make all the problems go away.
So, I don't know that I would call it a packaging problem. Ryan On 5/23/07, Matthieu Brucher wrote: > That's a packaging problem, no ? If the correct libs are packaged with > scipy, there should not be a problem, shouldn't it ? > > Matthieu > > 2007/5/23, Giulio Venezian < gvenezian at yahoo.com>: > > Do these new versions address in any way the system > > crashes that people with AMD chips are experiencing? I > > have been unable to use jn_zeros and jnp_zeros, and > > Ryan Krauss has been struggling with some matrix > > operations. > > > > Giulio Venezian > > --- Travis Oliphant wrote: > > > > > > > > I'd like to tag the tree and make a NumPy 1.0.4 > > > release tomorrow. Is > > > there anything that needs to be done before that can > > > happen? > > > > > > I'd also like to get another release of SciPy out > > > the door as soon as > > > possible. At the end of this week or early next > > > week. > > > > > > I've added a bunch of 1-d classic spline > > > interpolation algorithms (not > > > the fast B-spline versions but the basic flexible > > > algorithms of order 2, > > > 3, 4, and 5 which have different boundary > > > conditions). There are > > > several ways to get at them in scipy.interpolate. > > > I've also added my > > > own "smoothest" algorithm which specifies the extra > > > degrees of freedom > > > available in spline formulations by making the > > > resulting Kth-order > > > spline have the smallest Kth-order derivative > > > discontinuity. I've > > > never seen this described by anyone but I'm sure > > > somebody must have done > > > discussed it before. If anybody knows something > > > about splines who > > > recalls seeing something like this, I would > > > appreciate a reference. > > > This specification actually allows very decent > > > looking quadratic > > > (2nd-order) splines. 
> > > > > > > > > Best regards, > > > > > > -Travis > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > ____________________________________________________________________________________ > > TV dinner still cooling? > > Check out "Tonight's Picks" on Yahoo! TV. > > http://tv.yahoo.com/ > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From lbolla at gmail.com Wed May 23 10:49:42 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 23 May 2007 16:49:42 +0200 Subject: [SciPy-user] zeros of a function In-Reply-To: References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> Message-ID: <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> Thanks to all for the replies. Following Emanuele's suggestion and googling for the problem, I came up with this: http://interval.sourceforge.net/interval/prolog/clip/clip/smath/README.html It's a library doing interval arithmetics (more info here: http://www.cs.utep.edu/interval-comp/) . Basically, one can scan for zeros of a function f, whose analytical formula is known, in an interval by bisectioning the interval itself and evaluating the function with the interval arithmetics formulas. the fundamental property of interval arithmetics is this: let y=f(x) be the function, one can define [c,d] = f([a,b]), with [a,b] and [c,d] intervals, so that for every x in [a,b]: f(x) is in [c,d]. 
i.e.: in interval arithmetic: sin([0,pi]) = [0,1] then bisect the interval and exclude all the intervals where [c,d] does not contain zero. keep on bisecting till machine precision :-) this procedure is guaranteed to find ALL the zeros. comments? L. On 5/23/07, Anne Archibald wrote: > > On 23/05/07, william ratcliff wrote: > > What about re-expressing the problem as a nonlinear optimization problem > > where you're fitting a function function to a vector of zeros? Then, > use > > your favorite global optimizer to try to find the minima of the "chi^2" > of > > this function. The issue here is that for some functions, even a global > > optimizer could run into problems... > > Global optimization is *hard*. If you make a scalar function by (say) > computing the squared length of a vector you want to find a zero of, > you're almost certain to wind up with zillions of spurious local > minima. They talk about this in Numerical Recipes too. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
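A toy version of the bisect-and-discard scheme just described, hand-writing the interval extension for the single example f(x) = x**2 - 2 (everything below is an illustrative sketch, not code from the thread; a serious implementation needs an interval library with outward rounding to make the bounds rigorous):

```python
def isquare(a, b):
    # interval extension of x**2 over [a, b]
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return lo, max(a * a, b * b)

def f_interval(a, b):
    # bounds of f(x) = x**2 - 2 over [a, b]
    lo, hi = isquare(a, b)
    return lo - 2.0, hi - 2.0

def find_zeros(a, b, tol=1e-12):
    # discard [a, b] as soon as its image cannot contain zero
    lo, hi = f_interval(a, b)
    if lo > 0.0 or hi < 0.0:
        return []
    if b - a < tol:
        return [0.5 * (a + b)]  # cannot be excluded: report the midpoint
    m = 0.5 * (a + b)
    return find_zeros(a, m, tol) + find_zeros(m, b, tol)

print(find_zeros(-3.0, 3.0))  # values close to -sqrt(2) and +sqrt(2)
```

The crucial point is the discard test: an interval is thrown away only when the rigorous bounds exclude zero, which is what supports the "find ALL the zeros" claim. Note that intervals where f merely touches ~0 (like John's x**2 + 1e-999 example) can never be discarded, so the method reports them as candidate zeros down to the tolerance rather than proving them empty.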
URL: From nwagner at iam.uni-stuttgart.de Wed May 23 11:05:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 23 May 2007 17:05:54 +0200 Subject: [SciPy-user] zeros of a function In-Reply-To: <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> Message-ID: <46545852.5040505@iam.uni-stuttgart.de> lorenzo bolla wrote: > Thanks to all for the replies. > Following Emanuele's suggestion and googling for the problem, I came > up with this: > http://interval.sourceforge.net/interval/prolog/clip/clip/smath/README.html > > It's a library doing interval arithmetics (more info here: > http://www.cs.utep.edu/interval-comp/) > . > Basically, one can scan for zeros of a function f, whose analytical > formula is known, in an interval by bisectioning the interval itself > and evaluating the function with the interval arithmetics formulas. > > the fundamental property of interval arithmetics is this: > let y=f(x) be the function, one can define [c,d] = f([a,b]), with > [a,b] and [c,d] intervals, so that for every x in [a,b]: f(x) is in [c,d]. > > i.e.: in interval arithmetics: sin([0,pi]) = [0,1] > > then bisect the interval and exclude all the intervals where [c,d] > does not contain zero. keep on bisecting till machine precision :-) > this procedure assure to find ALL the zeros. > comments? > > L. > PyDX is a package for working with interval arithmetic. 
http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html#interval-arithmetic Nils From openopt at ukr.net Wed May 23 11:13:00 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 23 May 2007 18:13:00 +0300 Subject: [SciPy-user] zeros of a function In-Reply-To: <4654583D.40104@comcast.net> References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> <4654583D.40104@comcast.net> Message-ID: <465459FC.3050605@ukr.net> Think also about something similar to f(x) = x^N * sin(1/x) or g(x) = abs(f(x)), x from [eps...1], 0 < eps << 1, N>>1 (so f(x) is very small near eps) D. John Hassler wrote: > How do you know if there is a zero in your interval? How about: > y = x**2 + 1.e-999? > john From william.ratcliff at gmail.com Wed May 23 11:23:38 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Wed, 23 May 2007 11:23:38 -0400 Subject: [SciPy-user] zeros of a function In-Reply-To: References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> Message-ID: <827183970705230823n6345792ie5f137b4f963c5f4@mail.gmail.com> You must have a newer edition of the Recipes than I ;> It's not immediately obvious to me why this shouldn't work. While there are local minima, I think that the ability of the global optimizer to find the global minima depends on how deep the holes are. I've also used fsolve and found in some cases that it is unable to find a solution depending on how I set the starting point. 
I've had some success with reformulation of the problem as a global optimization problem, but as you say, there are multiple local minima and it becomes computationally pricey to search for the global minima--my plan had been to use some more robust global optimization strategies (rather than simple simulated annealing)--but before going that route, it might be worthwhile to investigate other methods. Has anyone tried Broyden's Method? If so, any feeling for its robustness? Cheers, William On 5/23/07, Anne Archibald wrote: > > On 23/05/07, william ratcliff wrote: > > What about re-expressing the problem as a nonlinear optimization problem > > where you're fitting a function function to a vector of zeros? Then, > use > > your favorite global optimizer to try to find the minima of the "chi^2" > of > > this function. The issue here is that for some functions, even a global > > optimizer could run into problems... > > Global optimization is *hard*. If you make a scalar function by (say) > computing the squared length of a vector you want to find a zero of, > you're almost certain to wind up with zillions of spurious local > minima. They talk about this in Numerical Recipes too. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ryanlists at gmail.com Wed May 23 11:40:18 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 May 2007 10:40:18 -0500 Subject: [SciPy-user] Semi-working Scipy exe for windows for SSE AMD chips posted Message-ID: I have 20 megs of space available to me through Charter (my ISP) and several people have requested this, so I have posted my AMD exe's here: http://webpages.charter.net/ryankrauss/numpy-1.0.3.dev3795.win32-py2.5_sse_P3_AMD_Athlon.exe http://webpages.charter.net/ryankrauss/scipy-0.5.3.dev3023.win32-py2.5_sse_P3_AMD_Athlon.exe Let me know if these are helpful. Ryan From fredmfp at gmail.com Wed May 23 11:43:53 2007 From: fredmfp at gmail.com (fred) Date: Wed, 23 May 2007 17:43:53 +0200 Subject: [SciPy-user] f2py and ifort flags... Message-ID: <46546139.8070905@gmail.com> > but this has no effect, ie intel libs are not linked. > >What am I doing wrong ? Ok, I had to modify intel.py ("opt.append()") to take in account my ifort options, but I guess this is not the right way. Any suugestion ? -- http://scipy.org/FredericPetit From lbolla at gmail.com Wed May 23 12:28:46 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 23 May 2007 18:28:46 +0200 Subject: [SciPy-user] zeros of a function In-Reply-To: <4654583D.40104@comcast.net> References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> <4654583D.40104@comcast.net> Message-ID: <80c99e790705230928se073248kd3ac2d5b8d7a7e03@mail.gmail.com> well, I'm not an "interval arithmetic guru", but I guess that evaluating your function in the interval, say, [-1, 1] would give: [1e-999,1] which does not contain 0. therefore, no zeros are present in [-1,1]. anyway, any stopping criterion for the bisection should solve the problems with 1e-999 type of numbers... 
what if y = x**2 - 1e-999 --> [with the minus sign] that would be a problem, I'm afraid... L. On 5/23/07, John Hassler wrote: > > How do you know if there is a zero in your interval? How about: > y = x**2 + 1.e-999? > john > > lorenzo bolla wrote: > > Thanks to all for the replies. > Following Emanuele's suggestion and googling for the problem, I came up > with this: http://interval.sourceforge.net/interval/prolog/clip/clip/smath/README.html > > It's a library doing interval arithmetics (more info here: > http://www.cs.utep.edu/interval-comp/) > . > Basically, one can scan for zeros of a function f, whose analytical > formula is known, in an interval by bisectioning the interval itself and > evaluating the function with the interval arithmetics formulas. > > the fundamental property of interval arithmetics is this: > let y=f(x) be the function, one can define [c,d] = f([a,b]), with [a,b] > and [c,d] intervals, so that for every x in [a,b]: f(x) is in [c,d]. > > i.e.: in interval arithmetics: sin([0,pi]) = [0,1] > > then bisect the interval and exclude all the intervals where [c,d] does > not contain zero. keep on bisecting till machine precision :-) > this procedure assure to find ALL the zeros. > comments? > > L. > > > On 5/23/07, Anne Archibald wrote: > > > > On 23/05/07, william ratcliff wrote: > > > What about re-expressing the problem as a nonlinear optimization > > problem > > > where you're fitting a function function to a vector of zeros? Then, > > use > > > your favorite global optimizer to try to find the minima of the > > "chi^2" of > > > this function. The issue here is that for some functions, even a > > global > > > optimizer could run into problems... > > > > Global optimization is *hard*. If you make a scalar function by (say) > > computing the squared length of a vector you want to find a zero of, > > you're almost certain to wind up with zillions of spurious local > > minima. They talk about this in Numerical Recipes too. 
> > > > Anne > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------ > > _______________________________________________ > SciPy-user mailing listSciPy-user at scipy.orghttp://projects.scipy.org/mailman/listinfo/scipy-user > > ------------------------------ > > No virus found in this incoming message. > Checked by AVG Free Edition. > Version: 7.5.467 / Virus Database: 269.7.6/815 - Release Date: 5/22/2007 3:49 PM > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Wed May 23 12:35:43 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 23 May 2007 18:35:43 +0200 Subject: [SciPy-user] zeros of a function In-Reply-To: <465459FC.3050605@ukr.net> References: <80c99e790705230304w6504e0b5o95fc9c498d323184@mail.gmail.com> <465424E1.9090806@comcast.net> <827183970705230724x44ac8976v2b6ed3623b396e88@mail.gmail.com> <80c99e790705230749t65bbc723m6c6caebc46b808c1@mail.gmail.com> <4654583D.40104@comcast.net> <465459FC.3050605@ukr.net> Message-ID: <80c99e790705230935x18791119paaf2cfbffa4029bf@mail.gmail.com> well, again I think it's a matter of comparing a very small floating point number with 0... your function is *very* small near 0, and oscillates *very* rapidly above and below zero: there will be an x0 below which a computer cannot distinguish the function value from 0 (as if the function would be a constant 0). right? how many zeros has the constant function: y = 0? no bisection method will tell you! same for abs(y). L. 
On 5/23/07, dmitrey wrote: > > Think also about something similar to > f(x) = x^N * sin(1/x) > or > g(x) = abs(f(x)), > x from [eps...1], > 0 < eps << 1, > N>>1 (so f(x) is very small near eps) > > D. > > John Hassler wrote: > > How do you know if there is a zero in your interval? How about: > > y = x**2 + 1.e-999? > > john > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Wed May 23 14:28:09 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 23 May 2007 12:28:09 -0600 Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week In-Reply-To: <801588.87930.qm@web51010.mail.re2.yahoo.com> References: <801588.87930.qm@web51010.mail.re2.yahoo.com> Message-ID: <465487B9.1050300@ieee.org> Giulio Venezian wrote: > Do these new versions address in any way the system > crashes that people with AMD chips are experiencing? I > have been unable to use jn_zeros and jnp_zeros, and > Ryan Krauss has been struggling with some matrix > operations. > This is entirely a binary packaging issue. These can and should be fixed by people building better binary versions. While I try to help, I just build something that works on my machine and do not have the resources to do extensive testing. -Travis From ggellner at uoguelph.ca Wed May 23 14:22:02 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 23 May 2007 14:22:02 -0400 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <464220F1.7010906@gmail.com> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> <464220F1.7010906@gmail.com> Message-ID: <20070523182202.GA18238@giton> I think I have ported the GA module to numpy, though as I am not a regular user of GA algorithms I have only made sure that my changes work with the included example.
Before I figure out how to make a multi-file patch with subversion, I have some general questions to make sure I did everything correctly: The major stumbling block was that ga used import scipy.stats as rv extensively, these functions seem to have been gutted in the new scipy. Firstly stats.choice() is a function that is used extensively (to uniformly sample from a given sequence) which I failed to find the equivalent in numpy or scipy. Instead I wrote:

def choice(seq):
    hi = len(seq) - 1
    rindex = numpy.random.random_integers(0, hi)
    # this will raise an exception if len(seq) == 0, which seems
    # correct to me
    return seq[rindex]

Does this make sense? I looked at the old code in scipy, and it seemed a mess (and more general) so I didn't spend much time on it. If the above is incorrect, or if there is a better port, please tell me, I will fix/port as needed. Another problem was that modern numpy doesn't let you change the random number algorithm, there is no easy way around this (would there be any reason to port the 3 other algorithms from old scipy to the modern pyrex version? I could try if this is a good thing to do). Instead I just ignore the alg argument to galg. Also it uses lines like: used_seed = rv.initial_seed() which I have changed to used_seed = numpy.random.get_state()[2] But I find it hard to tell if this is correct, the documentation is sparse (if no one can tell me quickly I will just check that the seeds agree . . . any tips would be helpful mind you). That is where it stands. Once I convert the example to a unit test, I will try to make a subversion patch, and a trac report. Finally, is there a current style guideline? I usually follow PEP 8, (and the similar Enthought guide), but the numpy/scipy style seems a little different (lowercase class names (though at times this is broken: RandomState for example)) any pointers? I am happy to clean up the ga module (after I get the port patch out) so that it meets any such guidelines.
Gabriel On Wed, May 09, 2007 at 02:28:49PM -0500, Robert Kern wrote: > Nolambar von L?meanor wrote: > > Hi > > > > I can't find the GA module in the latests releases in Scipy, does > > anybody know why? > > > > Which Scipy release still has the GA module? > > It's in the sandbox until someone comes along and ports it to numpy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed May 23 14:58:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 May 2007 13:58:16 -0500 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <20070523182202.GA18238@giton> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> <464220F1.7010906@gmail.com> <20070523182202.GA18238@giton> Message-ID: <46548EC8.5040409@gmail.com> Gabriel Gellner wrote: > I think I have ported the GA module to numpy, though as I am not a > regular user of GA algorithms I have only made sure that my changes > work with the included example. > > Before I figure out how to make a multi-file patch with subversion, I > have some general questions to make sure I did everything correctly: > > The major stumbling block was that ga used > import scipy.stats as rv > extensively, these functions seem to have been gutted in the new > scipy. > > Firstly stats.choice() is a function that is used extensively (to > uniformly sample from a given sequence) which I failed to find the > equivalent in numpy or scipy. 
Instead I wrote: > > def choice(seq): > hi = len(seq) - 1 > rindex = numpy.random.random_integers(0, hi) > # this will raise an exception if len(seq) == 0, which seems > # correct to me > return seq[rindex] > > Does this make sense? I looked at the old code in scipy, and it seemed > a mess (and more general) so I didn't spend much time on it. If the > above is incorrect, or if there is a better port, please tell me, I > will fix/port as needed. randint(0, len(seq)) is clearer, but yes, that's fine. > Another problem was that modern numpy doesn't let you change the > random number algorithm, there is no easy way around this (would there > be and reason to port the 3 other algorithms from old scipy to the > modern pyrex version? I could try if this is a good thing to do). > Instead I just ignore the alg argument to galg. It would be nice, but not really important. Just remove the alg argument. > Also it uses lines like: > used_seed = rv.initial_seed() > > which I have changed to > used_seed = numpy.random.get_state()[2] > > But I find it hard to tell if this correct, the documentation is > sparse (if no one can tell me quickly I will just check that the seeds > agree . . . any tips would be helpful mind you). It isn't correct. If you are dealing with controlled random number streams, you need to instantiate a numpy.random.RandomState() instance yourself and call its methods instead of relying on the global instance and the functions in numpy.random. For this case, just grab the whole state with get_state() and store that. It's not a seed, per se, but resetting the state with set_state() will serve the same purpose that the stored rv.initial_seed() did. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ryanlists at gmail.com Wed May 23 15:10:25 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 May 2007 14:10:25 -0500 Subject: [SciPy-user] NumPy 1.0.3 release tomorrow, SciPy 0.5.3 next week In-Reply-To: <465487B9.1050300@ieee.org> References: <801588.87930.qm@web51010.mail.re2.yahoo.com> <465487B9.1050300@ieee.org> Message-ID: I am open to trying to build or test better binaries, but don't know what else to do? Are there are tricks to building Scipy from source correctly that I am not following? Ryan On 5/23/07, Travis Oliphant wrote: > Giulio Venezian wrote: > > Do these new versions address in any way the system > > crashes that people with AMD chips are experiencing? I > > have been unable to use jn_zeros and jnp_zeros, and > > Ryan Krauss has been struggling with some matrix > > operations. > > > > This is entirely a binary packaging issue. These can and should be > fixed by people building better binary versions. While I try to help, > I just build something that works on my machine and do not have the > resources do to extensive testing. > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zunzun at zunzun.com Wed May 23 15:44:12 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 23 May 2007 15:44:12 -0400 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <46548EC8.5040409@gmail.com> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> <464220F1.7010906@gmail.com> <20070523182202.GA18238@giton> <46548EC8.5040409@gmail.com> Message-ID: <20070523194412.GA23563@zunzun.com> On Wed, May 23, 2007 at 01:58:16PM -0500, Robert Kern wrote: > Gabriel Gellner wrote: > > I think I have ported the GA module to numpy I used weave to implement the C++ version of Differential Evolution (DE): http://www.icsi.berkeley.edu/~storn/code.html since my Python code was so slow. 
DE requires bit-shifting, if this is not slow in Python anymore we might add the Python version. James Phillips From robert.kern at gmail.com Wed May 23 16:00:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 May 2007 15:00:41 -0500 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <20070523194412.GA23563@zunzun.com> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> <464220F1.7010906@gmail.com> <20070523182202.GA18238@giton> <46548EC8.5040409@gmail.com> <20070523194412.GA23563@zunzun.com> Message-ID: <46549D69.9090805@gmail.com> zunzun at zunzun.com wrote: > On Wed, May 23, 2007 at 01:58:16PM -0500, Robert Kern wrote: >> Gabriel Gellner wrote: >>> I think I have ported the GA module to numpy > > I used weave to implement the C++ version of > Differential Evolution (DE): > http://www.icsi.berkeley.edu/~storn/code.html > since my Python code was so slow. DE requires > bit-shifting, if this is not slow in Python > anymore we might add the Python version. DE doesn't require bit-shifting, but that's not slow in Python, either. Doing a straightforward translation of the C++ code to Python will be slow because you do an inordinate number of loops that could be done by numpy array operations. You also translated the PRNG that they used, which is unnecessary. A more idiomatic implementation is here: http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/rkern/diffev.py -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emanuelez at gmail.com Wed May 23 18:26:04 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 24 May 2007 00:26:04 +0200 Subject: [SciPy-user] masking large arrays Message-ID: Hello, i have a function that gives me regional maxima of an image (1024x1024 pixels). 
the regional maxima matrix contains 1 where regional maxima are found and 0 elsewhere. i would like to use this array to mask the initial image and then histogram it in order to see how the values of the maxima are distributed. newimg = img[maxima] the problem is that on such images i get a "ValueError: dimensions too large." any hint or suggestion? Emanuele -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed May 23 19:09:54 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 23 May 2007 19:09:54 -0400 Subject: [SciPy-user] masking large arrays In-Reply-To: References: Message-ID: <200705231909.54800.pgmdevlist@gmail.com> Emmanuele, Could you give us more information about the actual output of your function ? A short example would be quite helpful. From zunzun at zunzun.com Wed May 23 19:27:52 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 23 May 2007 19:27:52 -0400 Subject: [SciPy-user] About Genetic Algorithms In-Reply-To: <46549D69.9090805@gmail.com> References: <10f88f6d0705091223ya321cc6u41298c9bcab63281@mail.gmail.com> <464220F1.7010906@gmail.com> <20070523182202.GA18238@giton> <46548EC8.5040409@gmail.com> <20070523194412.GA23563@zunzun.com> <46549D69.9090805@gmail.com> Message-ID: <20070523232752.GA28009@zunzun.com> On Wed, May 23, 2007 at 03:00:41PM -0500, Robert Kern wrote: > > DE doesn't require bit-shifting, but that's not slow in Python, either. Doing a > straightforward translation of the C++ code to Python will be slow because you > do an inordinate number of loops that could be done by numpy array operations. > You also translated the PRNG that they used, which is unnecessary. A more > idiomatic implementation is here: > > http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/rkern/diffev.py SCHWEEEET! So much for *my* code, ha ha ha. I'll link to this from my web page on DE, and thank you. 
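[Editorial note: Robert's remark about replacing loops with array operations can be illustrated with a toy sketch of one DE mutation/crossover step. The population, F, and CR values here are made up for illustration; see the diffev.py link above for a real implementation.]

```python
import numpy as np

rng = np.random.RandomState(0)
NP, D = 20, 3      # population size and problem dimension (made up)
F, CR = 0.8, 0.9   # differential weight and crossover rate (made up)
pop = rng.rand(NP, D)

# three partner indices per member (a real DE also forces them to
# differ from the member itself; skipped here for brevity)
idx = np.array([rng.permutation(NP)[:3] for _ in range(NP)])
r1, r2, r3 = pop[idx[:, 0]], pop[idx[:, 1]], pop[idx[:, 2]]

mutant = r1 + F * (r2 - r3)           # mutation: one array expression
cross = rng.rand(NP, D) < CR          # per-component crossover mask
trial = np.where(cross, mutant, pop)  # no Python loop over the population
```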
James From stefan at sun.ac.za Thu May 24 02:20:11 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 24 May 2007 08:20:11 +0200 Subject: [SciPy-user] masking large arrays In-Reply-To: References: Message-ID: <20070524062011.GF6192@mentat.za.net> On Thu, May 24, 2007 at 12:26:04AM +0200, Emanuele Zattin wrote: > Hello, > i have a function that gives me regional maxima of an image (1024x1024 pixels). > the regional maxima matrix contains 1 where regional maxima are found and 0 > elsewhere. > i would like to use this array to mask the initial image and then histogram it > in order to see how the values of the maxima are distributed. > > newimg = img[maxima] Is this roughly what you are trying to do? In [44]: x = N.random.random((3,3)) In [45]: y = N.array([[0,1.,0],[1.,0,1],[0,0,1]]) In [46]: x[y.astype(bool)] Out[46]: array([ 0.42440316, 0.29431396, 0.22774399, 0.78288364]) You can apply the histogram function to the resulting values. Cheers St?fan From emanuelez at gmail.com Thu May 24 02:51:10 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 24 May 2007 08:51:10 +0200 Subject: [SciPy-user] masking large arrays In-Reply-To: <20070524062011.GF6192@mentat.za.net> References: <20070524062011.GF6192@mentat.za.net> Message-ID: On 5/24/07, Stefan van der Walt wrote: > > On Thu, May 24, 2007 at 12:26:04AM +0200, Emanuele Zattin wrote: > > Hello, > > i have a function that gives me regional maxima of an image (1024x1024 > pixels). > > the regional maxima matrix contains 1 where regional maxima are found > and 0 > > elsewhere. > > i would like to use this array to mask the initial image and then > histogram it > > in order to see how the values of the maxima are distributed. > > > > newimg = img[maxima] > > Is this roughly what you are trying to do? 
> > In [44]: x = N.random.random((3,3)) > > In [45]: y = N.array([[0,1.,0],[1.,0,1],[0,0,1]]) > > In [46]: x[y.astype(bool)] > Out[46]: array([ 0.42440316, 0.29431396, 0.22774399, 0.78288364]) > > You can apply the histogram function to the resulting values. Actually what i was trying was to get a matrix with zeros where zeros appear in the mask (y in your example) and the value of the original array where the mask is set to 1. Right now i solved in some other way, but still i would be curious to know a nice way to do that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Thu May 24 04:12:23 2007 From: fredmfp at gmail.com (fred) Date: Thu, 24 May 2007 10:12:23 +0200 Subject: [SciPy-user] f2py on x86_64 arch and intel C compiler... Message-ID: <465548E7.1070209@gmail.com> Hi, As I use on my x86_64 arch linux box, I have to use -fPIC arg to the intel C compiler. Anyway, I don't see any option to pass to f2py to set this. How can I do that ? PS1: for intel fortran compiler on x86_64, you have to use --fcompiler=intelem but intelem is not a known parameter for --compiler option. What's the problem ? PS2: should I post my issue on f2py-users ml to have better chance to get an answer ? (no answer for my previous post on intel.py) Cheers, -- http://scipy.org/FredericPetit From stefan at sun.ac.za Thu May 24 04:22:42 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 24 May 2007 10:22:42 +0200 Subject: [SciPy-user] masking large arrays In-Reply-To: References: <20070524062011.GF6192@mentat.za.net> Message-ID: <20070524082242.GI6192@mentat.za.net> On Thu, May 24, 2007 at 08:51:10AM +0200, Emanuele Zattin wrote: > On 5/24/07, Stefan van der Walt wrote: > Is this roughly what you are trying to do? 
> > In [44]: x = N.random.random((3,3)) > > In [45]: y = N.array([[0,1.,0],[1.,0,1],[0,0,1]]) > > In [46]: x[y.astype(bool)] > Out[46]: array([ 0.42440316, 0.29431396, 0.22774399, 0.78288364]) > > You can apply the histogram function to the resulting values. > > Actually what i was trying was to get a matrix with zeros where zeros appear > in the mask (y in your example) and the value of the original array where the > mask is set to 1. x = N.random.random((3,3)) y = N.array([[0,1,0],[1,0,1],[0,0,1]],bool) x[~y] = 0 should do the trick. Cheers St?fan From pearu at cens.ioc.ee Thu May 24 04:58:52 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 24 May 2007 10:58:52 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465446B0.3070207@gmail.com> References: <465446B0.3070207@gmail.com> Message-ID: <465553CC.8090108@cens.ioc.ee> fred wrote: > Hi, > > 1) To compile my fortran code, I use the command line: > ifort call_func.f -limf -Wl,--rpath > -Wl,/usr/local/share/intel/fc/9.1.045/lib funcs.o -o call_func_f > which works fine. > > 2) I want to use the same trick with f2py. So, from f2py manpage, my > Makefile looks like: > IFCVER = 9.1.045 > PATH = /bin:/usr/bin:/usr/local/share/intel/fc/$(IFCVER)/bin > LD_LIBRARY_PATH_INTEL = /usr/local/share/intel/fc/$(IFCVER)/lib > LD_LIBRARY_PATH = /usr/local/lib:${LD_LIBRARY_PATH_INTEL} > LDFLAGS = -limf -Wl,--rpath -Wl,$(LD_LIBRARY_PATH_INTEL) > OPTIMIZATION = -O3 -xT -ipo > > all: funcs.o calc_KM.so calc_KV.so calc_output.so > > funcs.o: funcs.c > gcc -Wall funcs.c -c -o funcs.o > > calc_KM.so: calc_KM.f funcs.o > f2py --fcompiler=intel --opt="$(OPTIMIZATION)" > --f90flags="$(LDFLAGS)" -c calc_KM.f funcs.o -m calc_KM > > .../... > > but this has no effect, ie intel libs are not linked. > > What am I doing wrong ? --f90flags are used in compilation of fortran codes, not when linking. For linking libraries use `-lname` and `-Lpath` in f2py command line. 
But note that numpy/distutils/fcompiler/intel.py should take care of linking intel compiler libraries. If it does not work for your compiler, try to fix intel.py and send us your changes. Pearu From pearu at cens.ioc.ee Thu May 24 05:05:34 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 24 May 2007 11:05:34 +0200 Subject: [SciPy-user] f2py on x86_64 arch and intel C compiler... In-Reply-To: <465548E7.1070209@gmail.com> References: <465548E7.1070209@gmail.com> Message-ID: <4655555E.3000608@cens.ioc.ee> fred wrote: > Hi, > > As I use on my x86_64 arch linux box, I have to use -fPIC arg to the > intel C compiler. This should be fixed in numpy/distutils/fcompiler/intel.py file. > Anyway, I don't see any option to pass to f2py to set this. This is because usually distutils should take care of this. > How can I do that ? > > PS1: for intel fortran compiler on x86_64, you have to use > --fcompiler=intelem > but intelem is not a known parameter for --compiler option. What's the > problem ? --compiler is for specifying C compiler, use --fcompiler instead. --help-fcompiler shows which fortran compilers are found in the system. > PS2: should I post my issue on f2py-users ml to have better chance to > get an answer ? > (no answer for my previous post on intel.py) scipy-user list is ok. I don't have intel compiler in my system and therefore I cannot test the problems you experience. If you think that you have found a bug or it is an issue that needs to be fixed in numpy, please file a ticket. Sooner or later someone will try to resolve your tickets. Pearu From nwagner at iam.uni-stuttgart.de Thu May 24 05:46:50 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 May 2007 11:46:50 +0200 Subject: [SciPy-user] Spurious entries in arrays Message-ID: <46555F0A.10906@iam.uni-stuttgart.de> Hi, I have some spurious entries (due to numerical integration) in a skew-symmetric array, e.g. 
>>> Co_bc1[-5:,-5:] array([[ 2.03465784e-19, 1.57000000e-02, -2.61666667e-04, 0.00000000e+00, 0.00000000e+00], [ -1.57000000e-02, 3.33066907e-16, 3.14000000e-02, 7.85000000e-01, -1.57000000e-02], [ 2.61666667e-04, -3.14000000e-02, 2.03465784e-19, 1.57000000e-02, -2.61666667e-04], [ 0.00000000e+00, -7.85000000e-01, -1.57000000e-02, 3.33066907e-16, 3.14000000e-02], [ 0.00000000e+00, 1.57000000e-02, 2.61666667e-04, -3.14000000e-02, 2.03465784e-19]]) How can I generally replace entries of the order less than 1e-16 with zeros ? Nils From jss at ast.cam.ac.uk Thu May 24 05:48:45 2007 From: jss at ast.cam.ac.uk (Jeremy Sanders) Date: Thu, 24 May 2007 10:48:45 +0100 Subject: [SciPy-user] ANN: Veusz-0.99.0 release Message-ID: I am pleased to announce a new beta of a largely rewritten Veusz plotting package. This now uses Qt4 and numpy, adding support for Windows. Windows and Linux binaries are provided. For details see below: Veusz 0.99.0 (new Qt4/numpy beta) ------------ Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2007 Jeremy Sanders Licenced under the GPL (version 2 or greater). Veusz is a scientific plotting package written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. 
Changes from 0.10: This is the first release of a much rewritten version of Veusz It has been updated to run under Qt4 and numpy, and now supports Windows The user interface is also signficantly easier to use Other useful features include: * Colorbars for images (better color scaling for images too) * Grids of graphs with different sized subgraphs * Much better import dialog * Antialiased screen output * Native PNG and PDF export * Separate formatting/properties dialog * Handling of INF/NaN in input data * Transparency of graphs (not for EPS output) Plus many more useful changes (see ChangeLog) Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * LaTeX-like formatting for text * EPS/PDF/PNG export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV and FITS importing Requirements: Python (2.3 or greater required) http://www.python.org/ Qt >= 4.1 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.1 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/ PyFITS >= 1.1rc3 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits For documentation on using Veusz, see the "Documents" directory. The manual is in pdf, html and text format (generated from docbook). Issues: * This is a new beta, so there are likely to be a number of bugs, even though it has been used by a couple of people for some time. * Can be very slow to plot large datasets if antialiasing is enabled. Right click on graph and disable antialias to speed up output. 
* Some older versions of Qt (<4.2.2) can produce very large postscript output and random crashes. This may not be completely resolved (especially on windows). * The embedding interface appears to crash on exiting. If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository. Jeremy Sanders From stefan at sun.ac.za Thu May 24 06:23:36 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 24 May 2007 12:23:36 +0200 Subject: [SciPy-user] Spurious entries in arrays In-Reply-To: <46555F0A.10906@iam.uni-stuttgart.de> References: <46555F0A.10906@iam.uni-stuttgart.de> Message-ID: <20070524102336.GO6192@mentat.za.net> Hi Nils On Thu, May 24, 2007 at 11:46:50AM +0200, Nils Wagner wrote: > I have some spurious entries (due to numerical integration) in a > skew-symmetric array, e.g. > >>> Co_bc1[-5:,-5:] > array([[ 2.03465784e-19, 1.57000000e-02, -2.61666667e-04, > 0.00000000e+00, 0.00000000e+00], > [ -1.57000000e-02, 3.33066907e-16, 3.14000000e-02, > 7.85000000e-01, -1.57000000e-02], > [ 2.61666667e-04, -3.14000000e-02, 2.03465784e-19, > 1.57000000e-02, -2.61666667e-04], > [ 0.00000000e+00, -7.85000000e-01, -1.57000000e-02, > 3.33066907e-16, 3.14000000e-02], > [ 0.00000000e+00, 1.57000000e-02, 2.61666667e-04, > -3.14000000e-02, 2.03465784e-19]]) > > How can I generally replace entries of the order less than 1e-16 with > zeros ? For printing purposes, you can use N.set_printoptions(suppress=True) If you physically want to change the values, something like x[abs(x) < 1e-15] = 0 should do. 
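[Editorial note: the two suggestions above behave like this on a toy array standing in for Co_bc1 (values made up; N in the reply is numpy imported under that name).]

```python
import numpy as np

a = np.array([[2.0e-19,  1.57e-2],
              [-1.57e-2, 3.33e-16]])

# non-destructive: build a cleaned copy, leaving `a` untouched for now
cleaned = np.where(np.abs(a) < 1e-15, 0.0, a)

# in-place, as suggested above; note that set_printoptions(suppress=True)
# only changes how values print, not the values themselves
a[np.abs(a) < 1e-15] = 0
```

After both steps, `a` and `cleaned` agree: the 1e-19 and 1e-16 entries are zeroed and the 1.57e-2 entries are untouched.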
Cheers St?fan From jerome-r.colin at laposte.net Thu May 24 07:32:34 2007 From: jerome-r.colin at laposte.net (Jerome Colin) Date: Thu, 24 May 2007 11:32:34 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?Semi-working_Scipy_exe_for_windows_for_SSE?= =?utf-8?q?_AMD_chips=09posted?= References: Message-ID: Ryan Krauss gmail.com> writes: > > I have 20 megs of space available to me through Charter (my ISP) and > several people have requested this, so I have posted my AMD exe's > here: > > http://webpages.charter.net/ryankrauss/numpy-1.0.3.dev3795.win32-py2.5_sse_P3_AMD_Athlon.exe > http://webpages.charter.net/ryankrauss/scipy-0.5.3.dev3023.win32-py2.5_sse_P3_AMD_Athlon.exe > > Let me know if these are helpful. > > Ryan > I have installed these binaries on my AMD Duron winXP, and I do not have anymore problems with scipy.interpolate modules. Thanks a lot ! Jerome From pgmdevlist at gmail.com Thu May 24 09:08:38 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 May 2007 09:08:38 -0400 Subject: [SciPy-user] masking large arrays In-Reply-To: References: <20070524062011.GF6192@mentat.za.net> Message-ID: <200705240908.38933.pgmdevlist@gmail.com> Emanuele, > Actually what i was trying was to get a matrix with zeros where zeros > appear in the mask (y in your example) and the value of the original array > where the mask is set to 1. It seems you think backwards: with a masked array, the values where the mask is True (1) are discarded, and only the values where the mask is False(0) are used. You could first try to invert your mask (by logical_not, for example), and then fill the masked_array with zero. 
newimg = masked_array(img, mask=logical_not(initialmask)).filled(0) From gvenezian at yahoo.com Thu May 24 10:19:17 2007 From: gvenezian at yahoo.com (Giulio Venezian) Date: Thu, 24 May 2007 07:19:17 -0700 (PDT) Subject: [SciPy-user] Semi-working Scipy exe for windows for SSE AMD chips posted In-Reply-To: Message-ID: <964565.39593.qm@web51009.mail.re2.yahoo.com> Thank you Ryan! I installed these files and my problem with the zeros of the Bessel function disappeared. I run Windows XP on an Athlon XP chip. Giulio --- Jerome Colin wrote: > Ryan Krauss gmail.com> writes: > > > > > I have 20 megs of space available to me through > Charter (my ISP) and > > several people have requested this, so I have > posted my AMD exe's > > here: > > > > > http://webpages.charter.net/ryankrauss/numpy-1.0.3.dev3795.win32-py2.5_sse_P3_AMD_Athlon.exe > > > http://webpages.charter.net/ryankrauss/scipy-0.5.3.dev3023.win32-py2.5_sse_P3_AMD_Athlon.exe > > > > Let me know if these are helpful. > > > > Ryan > > > > I have installed these binaries on my AMD Duron > winXP, and I do not have anymore > problems with scipy.interpolate modules. > Thanks a lot ! > > Jerome > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ____________________________________________________________________________________Luggage? GPS? Comic books? Check out fitting gifts for grads at Yahoo! Search http://search.yahoo.com/search?fr=oni_on_mail&p=graduation+gifts&cs=bz From fredmfp at gmail.com Thu May 24 18:09:44 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 00:09:44 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465553CC.8090108@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> Message-ID: <46560D28.1090305@gmail.com> Pearu Peterson a ?crit : Hi Pearu, > --f90flags are used in compilation of fortran codes, not when > linking. 
Ok for this point. > For linking libraries use `-lname` and `-Lpath` > in f2py command line. But note that numpy/distutils/fcompiler/intel.py > should take care of linking intel compiler libraries. If it does not > work for your compiler, try to fix intel.py and send us your changes. > Ok, so how intel.py could guess that I want to add some args such as -limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib ? I don't know. In others words, how can I set these args that f2py can understand and take in account ? For the fix, I got one. But I guess you won't like it: I have hardcoded it, so it fits only my own needs. This look like this, line 61 of intel.py: opt.append('-limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib') I think a real fix should be to set some f2py option like --f90ldflags='...' but this is only my 2 cents... Others few notes: - intel.py does not take in account Core 2 Duo CPU, for whose you need to set the '-xT' option. - -xM option does not exist anymore in recent ifort release; thus it conflicts with other options, such as -xP, -xT, etc... I think I can write a "real" fix, if you have no time and give me some information about how things work in distutils/... Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu May 24 18:16:02 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 00:16:02 +0200 Subject: [SciPy-user] f2py on x86_64 arch and intel C compiler... In-Reply-To: <4655555E.3000608@cens.ioc.ee> References: <465548E7.1070209@gmail.com> <4655555E.3000608@cens.ioc.ee> Message-ID: <46560EA2.9070209@gmail.com> Pearu Peterson a ?crit : > fred wrote: > >> Hi, >> >> As I use on my x86_64 arch linux box, I have to use -fPIC arg to the >> intel C compiler. >> > > This should be fixed in numpy/distutils/fcompiler/intel.py file. > > For which release ? I get numpy 1.0.2 and this is not the case. >> Anyway, I don't see any option to pass to f2py to set this. 
>> > > This is because usually distutils should take care of this. > So can I say it fails ? Or not ? >> How can I do that ? >> >> PS1: for intel fortran compiler on x86_64, you have to use >> --fcompiler=intelem >> but intelem is not a known parameter for --compiler option. What's the >> problem ? >> > > --compiler is for specifying C compiler, use --fcompiler instead. > No ;-) I do want to use --compiler f2py option because I do want to use intel C compiler with f2py. > --help-fcompiler shows which fortran compilers are found in the system. > I was talking about C compiler, not Fortran compiler. >> PS2: should I post my issue on f2py-users ml to have better chance to >> get an answer ? >> (no answer for my previous post on intel.py) >> > > scipy-user list is ok. I don't have intel compiler in my system and > therefore I cannot test the problems you experience. > Ok. > If you think that you have found a bug or it is an issue that needs > to be fixed in numpy, please file a ticket. Sooner or later someone > will try to resolve your tickets. > Ok. Thanks. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu May 24 19:23:17 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 01:23:17 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465553CC.8090108@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> Message-ID: <46561E65.8030507@gmail.com> >Others few notes: > >- intel.py does not take in account Core 2 Duo CPU, for whose you need >to >set the '-xT' option. > >- -xM option does not exist anymore in recent ifort release; thus it >conflicts >with other options, such as -xP, -xT, etc... I forgot to mention, as third note: -KPIC option is deprecated and should be replaced by -fpic. -- http://scipy.org/FredericPetit From pearu at cens.ioc.ee Fri May 25 03:43:53 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 09:43:53 +0200 Subject: [SciPy-user] f2py and ifort flags... 
In-Reply-To: <46560D28.1090305@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> Message-ID: <465693B9.2000103@cens.ioc.ee> fred wrote: > Pearu Peterson a ?crit : > > Hi Pearu, >> --f90flags are used in compilation of fortran codes, not when >> linking. > Ok for this point. >> For linking libraries use `-lname` and `-Lpath` >> in f2py command line. But note that numpy/distutils/fcompiler/intel.py >> should take care of linking intel compiler libraries. If it does not >> work for your compiler, try to fix intel.py and send us your changes. >> > Ok, so how intel.py could guess that I want to add some args such as > -limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib ? > I don't know. > In others words, how can I set these args that f2py can understand and > take in account ? At the moment this is not possible via f2py command line. > For the fix, I got one. > But I guess you won't like it: I have hardcoded it, so it fits only my > own needs. > This look like this, line 61 of intel.py: > > opt.append('-limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib') > > I think a real fix should be to set some f2py option like > --f90ldflags='...' > but this is only my 2 cents... Yes, I agree. Could you file a numpy issue ticket for this feature and sign it to me? > Others few notes: > > - intel.py does not take in account Core 2 Duo CPU, for whose you need to > set the '-xT' option. This flag should be added via cpu.is_Core2() check to .get_arch_flags() methods. > - -xM option does not exist anymore in recent ifort release; thus it > conflicts > with other options, such as -xP, -xT, etc... Do you know which version of the compiler dropped -xM option? Then we can disable it by checking the value of self.get_version(). > I think I can write a "real" fix, if you have no time and give me some > information about how things work in distutils/... 
When you make changes to intel.py, for instance, then you can run python intel.py to see the effect of your changes. From pearu at cens.ioc.ee Fri May 25 03:53:25 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 09:53:25 +0200 Subject: [SciPy-user] f2py on x86_64 arch and intel C compiler... In-Reply-To: <46560EA2.9070209@gmail.com> References: <465548E7.1070209@gmail.com> <4655555E.3000608@cens.ioc.ee> <46560EA2.9070209@gmail.com> Message-ID: <465695F5.5050706@cens.ioc.ee> fred wrote: > Pearu Peterson wrote: >> fred wrote: >> >>> Hi, >>> >>> As I use on my x86_64 arch linux box, I have to use -fPIC arg to the >>> intel C compiler. >>> >> This should be fixed in numpy/distutils/fcompiler/intel.py file. >> >> > For which release ? I get numpy 1.0.2 and this is not the case. Sorry, I was not clear due to my bad English. I meant that the problem must be fixed in the given file. >>> Anyway, I don't see any option to pass to f2py to set this. >>> >> This is because usually distutils should take care of this. >> > So can I say it fails ? Or not ? I think it might be related to how Python is compiled, see also the note below. >>> How can I do that ? >>> >>> PS1: for intel fortran compiler on x86_64, you have to use >>> --fcompiler=intelem >>> but intelem is not a known parameter for --compiler option. What's the >>> problem ? >>> >> --compiler is for specifying C compiler, use --fcompiler instead. >> > No ;-) > I do want to use --compiler f2py option because I do want to use intel C > compiler with f2py. Ah, ok. See the output of `setup.py build_ext --help-compiler` for the valid compiler types for --compiler=. Note that when you are trying to use Intel compiler as C compiler then your Python should be compiled with Intel compiler as well. If not then you are pretty much alone with all the issues that can emerge from trying to mix C object codes generated with different C compilers. 
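[Editorial note: as a quick sanity check for the point above about mixing compilers, you can ask Python which C compiler the interpreter itself was built with. A sketch, not specific to the Intel compiler; on the Python of that era this lived in distutils.sysconfig, sysconfig being the modern spelling.]

```python
import sysconfig

# CC is the C compiler the interpreter was configured with; building
# extensions (e.g. via f2py --compiler=...) with a different C compiler
# is where the object-code mixing problems come from
cc = sysconfig.get_config_var('CC') or 'unknown'
print(cc)
```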
Pearu From fredmfp at gmail.com Fri May 25 03:58:13 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 09:58:13 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465693B9.2000103@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> Message-ID: <46569715.2080404@gmail.com> Pearu Peterson a ?crit : > fred wrote: > >> Ok, so how intel.py could guess that I want to add some args such as >> -limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib ? >> I don't know. >> In others words, how can I set these args that f2py can understand and >> take in account ? >> > > At the moment this is not possible via f2py command line. > > Ok, so we agree ;-) Thanks to have clarified this point. >> For the fix, I got one. >> But I guess you won't like it: I have hardcoded it, so it fits only my >> own needs. >> This look like this, line 61 of intel.py: >> >> opt.append('-limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib') >> >> I think a real fix should be to set some f2py option like >> --f90ldflags='...' >> but this is only my 2 cents... >> > > Yes, I agree. Could you file a numpy issue ticket for this feature and > sign it to me? > No problem. > >> Others few notes: >> >> - intel.py does not take in account Core 2 Duo CPU, for whose you need to >> set the '-xT' option. >> > > This flag should be added via cpu.is_Core2() check to .get_arch_flags() > methods. > > >> - -xM option does not exist anymore in recent ifort release; thus it >> conflicts >> with other options, such as -xP, -xT, etc... >> > > Do you know which version of the compiler dropped -xM option? Then > we can disable it by checking the value of self.get_version(). > No, but I can ask some people at intel, if there is none here... 
Cheers,

--
http://scipy.org/FredericPetit

From robert.vergnes at yahoo.fr  Fri May 25 05:32:10 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Fri, 25 May 2007 11:32:10 +0200 (CEST)
Subject: [SciPy-user] matplotlib and py2exe
Message-ID: <20070525093210.19745.qmail@web27413.mail.ukl.yahoo.com>

Hello,

I am trying to make a small standalone app using scipy and matplotlib, but I
encounter some issues. Is there a particular forum about matplotlib and
py2exe? (I use python 2.4.3, matplotlib 0.9.0 and scipy 0.5.2)

Whatever I do and whichever setup script I try, I get the same error.

Any idea?

Best Regards,

Robert

my setup is:

from distutils.core import setup
import py2exe
import matplotlib

setup(version = "0.20",
      description = "QPendulum",
      name = "QPendulum",
      windows = [ { "script": "QPendulum.py", "icon_resources": [(1, "QPendulum.ico")] } ],
      options={ 'py2exe': { 'packages' : ['matplotlib', 'pytz'], } },
      #console = ["QPendulum_csl.py"],
      zipfile = "lib/shared.zip",
      data_files=[matplotlib.get_py2exe_datafiles()] )

But I get anyway an error like this (well, I believe the warning is not a
problem):

F:\RFV-WorkFiles\DT_Danil_EQM\SoftwarePendulumEQM\MQE_Pendulum_2006\Python_rfv\May2007_QPendulum\WinDist\dist\lib\shared.zip\numpy\testing\numpytest.py:634: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
Traceback (most recent call last):
  File "QPendulum.py", line 36, in ?
  File "pylab.pyc", line 1, in ?
  File "matplotlib\__init__.pyc", line 720, in ?
  File "matplotlib\__init__.pyc", line 273, in wrapper
  File "matplotlib\__init__.pyc", line 360, in _get_data_path
RuntimeError: Could not find the matplotlib data files

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From a.g.basden at durham.ac.uk  Fri May 25 08:03:36 2007
From: a.g.basden at durham.ac.uk (Alastair Basden)
Date: Fri, 25 May 2007 13:03:36 +0100 (BST)
Subject: [SciPy-user] scipy on ps3/ydl
Message-ID: 

Hi,
has anyone had any luck installing scipy on a ps3 with yellow dog linux
(ydl)? I've managed to get it installing okay, but when I try:
import scipy.lib.blas.fblas
I get:
  File "/usr/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", line 9, in ?
    import fblas
ImportError: /usr/lib/python2.4/site-packages/scipy/lib/blas/fblas.so: R_PPC_REL24 relocation at 0x0db4fb18 for symbol `cabsf' out of range

Any ideas?
Thanks...

From fredmfp at gmail.com  Fri May 25 08:34:12 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 25 May 2007 14:34:12 +0200
Subject: [SciPy-user] f2py and ifort flags...
In-Reply-To: <465693B9.2000103@cens.ioc.ee>
References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee>
Message-ID: <4656D7C4.1040402@gmail.com>

Pearu Peterson a écrit :
> This flag should be added via cpu.is_Core2() check to .get_arch_flags()
> methods.
>
>
Do you agree with something like this?:

--- intel.py	2007-05-25 11:00:11.000000000 +0200
+++ intel.py.orig	2007-05-25 10:59:34.000000000 +0200
@@ -56,8 +56,6 @@
             opt.append('-tpp5')
         elif cpu.is_PentiumIV() or cpu.is_Xeon():
             opt.extend(['-tpp7','-xW'])
-        elif cpu.is_Core2():
-            opt.extend(['-tpp7','-xT'])
         if cpu.has_mmx() and not cpu.is_Xeon():
             opt.append('-xM')
         if cpu.has_sse2():

>> - -xM option does not exist anymore in recent ifort release; thus it
>> conflicts
>> with other options, such as -xP, -xT, etc...
>>
>
> Do you know which version of the compiler dropped -xM option? Then
> we can disable it by checking the value of self.get_version().
>
I'll reply when I get the answer from the intel team.
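The flag-selection logic being patched has roughly the following shape. This is a simplified sketch with a hypothetical stand-in for numpy.distutils.cpuinfo.cpu, not the real intel.py; it also shows why adding a Core2 branch while keeping the unconditional -xM append produces the conflicting flags fred mentions (-xT together with -xM) on a Core 2 machine:

```python
class FakeCPU:
    """Hypothetical stand-in for numpy.distutils.cpuinfo.cpu (a Core 2 box)."""
    def is_PentiumIV(self): return False
    def is_Xeon(self): return False
    def is_Core2(self): return True
    def has_mmx(self): return True

def get_arch_flags(cpu):
    opt = []
    if cpu.is_PentiumIV() or cpu.is_Xeon():
        opt.extend(['-tpp7', '-xW'])
    elif cpu.is_Core2():              # the branch the patch adds
        opt.extend(['-tpp7', '-xT'])
    if cpu.has_mmx() and not cpu.is_Xeon():
        opt.append('-xM')             # dropped by recent ifort releases
    return opt

print(get_arch_flags(FakeCPU()))  # ['-tpp7', '-xT', '-xM'] -- -xT and -xM conflict
```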
Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Fri May 25 08:38:21 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 14:38:21 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <465693B9.2000103@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> Message-ID: <4656D8BD.3000003@gmail.com> Pearu Peterson a ?crit : > Yes, I agree. Could you file a numpy issue ticket for this feature and > sign it to me? > I have filed a ticket to numpy-tickets@ this morning, but no news since. Did something wrong ? -- http://scipy.org/FredericPetit From pearu at cens.ioc.ee Fri May 25 08:38:36 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 14:38:36 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656D7C4.1040402@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D7C4.1040402@gmail.com> Message-ID: <4656D8CC.5000408@cens.ioc.ee> fred wrote: > Pearu Peterson a ?crit : >> This flag should be added via cpu.is_Core2() check to .get_arch_flags() >> methods. >> >> > Do you agree with something like this ?: > --- intel.py 2007-05-25 11:00:11.000000000 +0200 > +++ intel.py.orig 2007-05-25 10:59:34.000000000 +0200 > @@ -56,8 +56,6 @@ > opt.append('-tpp5') > elif cpu.is_PentiumIV() or cpu.is_Xeon(): > opt.extend(['-tpp7','-xW']) > - elif cpu.is_Core2(): > - opt.extend(['-tpp7','-xT']) > if cpu.has_mmx() and not cpu.is_Xeon(): > opt.append('-xM') > if cpu.has_sse2(): Yes. >>> - -xM option does not exist anymore in recent ifort release; thus it >>> conflicts >>> with other options, such as -xP, -xT, etc... >>> >> Do you know which version of the compiler dropped -xM option? Then >> we can disable it by checking the value of self.get_version(). >> > I'll reply when I get the answer from intel team. 
That would be great, thanks! Pearu From pearu at cens.ioc.ee Fri May 25 08:50:43 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 14:50:43 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656D8BD.3000003@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> Message-ID: <4656DBA3.3020804@cens.ioc.ee> fred wrote: > Pearu Peterson a ?crit : >> Yes, I agree. Could you file a numpy issue ticket for this feature and >> sign it to me? >> > I have filed a ticket to numpy-tickets@ this morning, but no news since. > Did something wrong ? Tickets can be submitted in http://projects.scipy.org/scipy/numpy/ (you may need to log in) Pearu From fredmfp at gmail.com Fri May 25 09:17:57 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 15:17:57 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656DBA3.3020804@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> Message-ID: <4656E205.8090206@gmail.com> Pearu Peterson a ?crit : > Tickets can be submitted in > > http://projects.scipy.org/scipy/numpy/ > > (you may need to log in) > Ok, that's what I was looking for. Anyway, I did not see any "Assign to" field, as for enthought trac, for example... Cheers, -- http://scipy.org/FredericPetit From pearu at cens.ioc.ee Fri May 25 09:35:03 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 15:35:03 +0200 Subject: [SciPy-user] f2py and ifort flags... 
In-Reply-To: <4656E205.8090206@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> Message-ID: <4656E607.7050105@cens.ioc.ee> fred wrote: > Pearu Peterson a ?crit : >> Tickets can be submitted in >> >> http://projects.scipy.org/scipy/numpy/ >> >> (you may need to log in) >> > Ok, that's what I was looking for. > Anyway, I did not see any "Assign to" field, > as for enthought trac, for example... enthought trac?, you should use http://projects.scipy.org/scipy/numpy/newticket to send numpy tickets. Pearu From nwagner at iam.uni-stuttgart.de Fri May 25 09:37:55 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 25 May 2007 15:37:55 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656E607.7050105@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> Message-ID: <4656E6B3.1070905@iam.uni-stuttgart.de> Pearu Peterson wrote: > fred wrote: > >> Pearu Peterson a ?crit : >> >>> Tickets can be submitted in >>> >>> http://projects.scipy.org/scipy/numpy/ >>> >>> (you may need to log in) >>> >>> >> Ok, that's what I was looking for. >> Anyway, I did not see any "Assign to" field, >> as for enthought trac, for example... >> > > enthought trac?, you should use > > http://projects.scipy.org/scipy/numpy/newticket > > to send numpy tickets. > > Pearu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I guess he was looking for a field where he can assign the ticket directly to you. I cannot find it, too. 
Nils From fredmfp at gmail.com Fri May 25 09:46:56 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 15:46:56 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656E6B3.1070905@iam.uni-stuttgart.de> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> <4656E6B3.1070905@iam.uni-stuttgart.de> Message-ID: <4656E8D0.7070804@gmail.com> Nils Wagner a ?crit : > I guess he was looking for a field where he can assign the ticket > directly to you. > I cannot find it, too. > You got the trick, Nils ;-) Thanks to explain better than I did :-) Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Fri May 25 09:53:39 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 15:53:39 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656E607.7050105@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> Message-ID: <4656EA63.6080105@gmail.com> >Pearu Peterson a ?crit : >> enthought trac?, you should use >> >> http://projects.scipy.org/scipy/numpy/newticket >> >> to send numpy tickets. >> >No, no, I did not mean this ;-) >I only wanted to say that I do not see "Assign to" field in the >numpy trac as I can see it in enthought trac, so I could not assign to >you... I wanted to post this message with snapshot attached to show the issue with numpy trac,but scipy.org refused it. -- http://scipy.org/FredericPetit From cookedm at physics.mcmaster.ca Fri May 25 09:59:46 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 May 2007 09:59:46 -0400 Subject: [SciPy-user] f2py and ifort flags... 
In-Reply-To: <465693B9.2000103@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> Message-ID: <20070525135946.GA24232@arbutus.physics.mcmaster.ca> On Fri, May 25, 2007 at 09:43:53AM +0200, Pearu Peterson wrote: > > > fred wrote: > > Pearu Peterson a ?crit : > > > > Hi Pearu, > >> --f90flags are used in compilation of fortran codes, not when > >> linking. > > Ok for this point. > >> For linking libraries use `-lname` and `-Lpath` > >> in f2py command line. But note that numpy/distutils/fcompiler/intel.py > >> should take care of linking intel compiler libraries. If it does not > >> work for your compiler, try to fix intel.py and send us your changes. > >> > > Ok, so how intel.py could guess that I want to add some args such as > > -limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib ? > > I don't know. > > In others words, how can I set these args that f2py can understand and > > take in account ? > > At the moment this is not possible via f2py command line. > > > For the fix, I got one. > > But I guess you won't like it: I have hardcoded it, so it fits only my > > own needs. > > This look like this, line 61 of intel.py: > > > > opt.append('-limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib') I've merged in my distutils-rework branch into numpy, which should let you specify this either with the LDFLAGS environment variable, or put this in your setup.cfg or ~/.pydistutils.cfg: [config_fc] ldflags=-limf -Wl,--rpath -Wl,/usr/local/share/intel/fc/9.1.045/lib -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From pearu at cens.ioc.ee Fri May 25 10:02:04 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 25 May 2007 16:02:04 +0200 Subject: [SciPy-user] f2py and ifort flags... 
In-Reply-To: <4656E8D0.7070804@gmail.com> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> <4656E6B3.1070905@iam.uni-stuttgart.de> <4656E8D0.7070804@gmail.com> Message-ID: <4656EC5C.6050705@cens.ioc.ee> fred wrote: > Nils Wagner a ?crit : >> I guess he was looking for a field where he can assign the ticket >> directly to you. >> I cannot find it, too. >> > You got the trick, Nils ;-) > Thanks to explain better than I did :-) Hmm, I can see my name in the assign to list: http://cens.ioc.ee/~pearu/sh.jpg Anyway, just file the ticket and I'll reassign it to myself. Pearu From fredmfp at gmail.com Fri May 25 10:04:58 2007 From: fredmfp at gmail.com (fred) Date: Fri, 25 May 2007 16:04:58 +0200 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656EC5C.6050705@cens.ioc.ee> References: <465446B0.3070207@gmail.com> <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> <4656E6B3.1070905@iam.uni-stuttgart.de> <4656E8D0.7070804@gmail.com> <4656EC5C.6050705@cens.ioc.ee> Message-ID: <4656ED0A.7090804@gmail.com> Pearu Peterson a ?crit : > fred wrote: > >> Nils Wagner a ?crit : >> >>> I guess he was looking for a field where he can assign the ticket >>> directly to you. >>> I cannot find it, too. >>> >>> >> You got the trick, Nils ;-) >> Thanks to explain better than I did :-) >> > > Hmm, I can see my name in the assign to list: > http://cens.ioc.ee/~pearu/sh.jpg > > Yes, you got the "Assign to" field, unlike I and Nils. http://fredantispam.free.fr/numpy.png > Anyway, just file the ticket and I'll reassign it to myself. > Already done. 
Cheers, -- http://scipy.org/FredericPetit From cookedm at physics.mcmaster.ca Fri May 25 10:53:53 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 25 May 2007 10:53:53 -0400 Subject: [SciPy-user] f2py and ifort flags... In-Reply-To: <4656EC5C.6050705@cens.ioc.ee> References: <465553CC.8090108@cens.ioc.ee> <46560D28.1090305@gmail.com> <465693B9.2000103@cens.ioc.ee> <4656D8BD.3000003@gmail.com> <4656DBA3.3020804@cens.ioc.ee> <4656E205.8090206@gmail.com> <4656E607.7050105@cens.ioc.ee> <4656E6B3.1070905@iam.uni-stuttgart.de> <4656E8D0.7070804@gmail.com> <4656EC5C.6050705@cens.ioc.ee> Message-ID: <20070525145353.GA24414@arbutus.physics.mcmaster.ca> On Fri, May 25, 2007 at 04:02:04PM +0200, Pearu Peterson wrote: > > > fred wrote: > > Nils Wagner a ?crit : > >> I guess he was looking for a field where he can assign the ticket > >> directly to you. > >> I cannot find it, too. > >> > > You got the trick, Nils ;-) > > Thanks to explain better than I did :-) > > Hmm, I can see my name in the assign to list: > http://cens.ioc.ee/~pearu/sh.jpg > > Anyway, just file the ticket and I'll reassign it to myself. I'm guessing only those on the assignee list can assign. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cimrman3 at ntc.zcu.cz Fri May 25 12:15:06 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 25 May 2007 18:15:06 +0200 Subject: [SciPy-user] ANN: SFE 00.22.02 towel day release Message-ID: <46570B8A.9040308@ntc.zcu.cz> Version 00.22.02 is released as today is the towel day (as all hitchhikers in the Galaxy know)! In total, I had 42 reasons for the release; to name a few: working subdomain handling, improved region handling and bug fixes, see http://ui505p06-mbs.ntc.zcu.cz/sfe cheers, r. 
From t_crane at mrl.uiuc.edu  Fri May 25 14:44:53 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Fri, 25 May 2007 13:44:53 -0500
Subject: [SciPy-user] list of arrays
Message-ID: <9EADC1E53F9C70479BF6559370369114134470@mrlnt6.mrl.uiuc.edu>

Hi,

If I have a list of arrays, what's the best way of getting the last
element of each array without using a for-loop?

thanks,
trevis

________________________________________________
Trevis Crane
Postdoctoral Research Assoc.
Department of Physics
University of Illinois
1110 W. Green St.
Urbana, IL 61801

p: 217-244-8652
f: 217-244-2278
e: tcrane at uiuc.edu
________________________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rmay at ou.edu  Fri May 25 14:56:35 2007
From: rmay at ou.edu (Ryan May)
Date: Fri, 25 May 2007 13:56:35 -0500
Subject: [SciPy-user] list of arrays
In-Reply-To: <9EADC1E53F9C70479BF6559370369114134470@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114134470@mrlnt6.mrl.uiuc.edu>
Message-ID: <46573163.2050101@ou.edu>

Trevis Crane wrote:
> Hi,
>
>
>
> If I have a list of arrays, what's the best way of getting the last
> element of each array without using a for-loop?
>

[array[-1] for array in list]

(assuming the arrays are 1-D, otherwise use array.flatten())

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From t_crane at mrl.uiuc.edu  Fri May 25 15:13:54 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Fri, 25 May 2007 14:13:54 -0500
Subject: [SciPy-user] list of arrays
Message-ID: <9EADC1E53F9C70479BF6559370369114142ED7@mrlnt6.mrl.uiuc.edu>

> -----Original Message-----
> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On
> Behalf Of Ryan May
> Sent: Friday, May 25, 2007 1:57 PM
> To: SciPy Users List
> Subject: Re: [SciPy-user] list of arrays
>
> Trevis Crane wrote:
> > Hi,
> >
> >
> >
> > If I have a list of arrays, what's the best way of getting the last
> > element of each array without using a for-loop?
> >
>
> [array[-1] for array in list]

[Trevis Crane]

I know I'm often dense, but I'm unsure how to implement this suggestion.
Here's some code to clarify what I mean:

from numpy import *
a = array([1,2,3])
b = array([4,5,6])
c = array([7,8,9])

d = []
d.append(a)
d.append(b)
d.append(c)

Now, d is a list containing three arrays, a, b and c. I assume you
suggest something like this:

d[array[-1] for array in d]

But this returns a syntax error. I'm sure I'm missing something but not
sure what.

What I want is the output of the following for-loop:

e = zeros(len(d))
for j in range(len(d)):
    e[j] = d[j][-1]

This gives me e, which obviously is an array containing the last
elements of each of the arrays contained in the list.
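The desired behaviour above can be pinned down with a quick self-contained check; plain lists stand in for the 1-D numpy arrays here, since negative indexing works the same way on both (sketch only):

```python
# Plain lists as stand-ins for the numpy arrays in the thread.
a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
d = [a, b, c]

# Explicit for-loop version:
e_loop = []
for j in range(len(d)):
    e_loop.append(d[j][-1])

# Equivalent list-comprehension version:
e_comp = [arr[-1] for arr in d]

print(e_comp)  # [3, 6, 9]
assert e_loop == e_comp
```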
thanks, trevis > > (assuming the arrays are 1-D, otherwise use array.flatten() > > Ryan > > -- > Ryan May > Graduate Research Assistant > School of Meteorology > University of Oklahoma > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Fri May 25 15:32:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 14:32:24 -0500 Subject: [SciPy-user] list of arrays In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED7@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED7@mrlnt6.mrl.uiuc.edu> Message-ID: <465739C8.4070903@gmail.com> Trevis Crane wrote: >> -----Original Message----- >> From: scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] On >> Behalf Of Ryan May >> Sent: Friday, May 25, 2007 1:57 PM >> To: SciPy Users List >> Subject: Re: [SciPy-user] list of arrays >> >> Trevis Crane wrote: >>> Hi, >>> >>> If I have a list of arrays, what's the best way of getting the last >>> element of each array without using a for-loop? >>> >> [array[-1] for array in list] > > [Trevis Crane] > > I know I'm often dense, but I'm unsure how to implement this suggestion. > Here's some code to clarify what I mean: > > from numpy import * > a = array([1,2,3]) > b = array([4,5,6]) > c = array([7,8,9]) > > d = [] > d.append(a) > d.append(b) > d.append(c) > > Now, d is a list containing three arrays, a,b and c. I assume you > suggest something like this: > > d[array[-1] for array in d] > > But this returns a syntax error. I'm sure I'm missing something but not > sure what. Exactly this: e = [array[-1] for array in d] No more. No less. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From t_crane at mrl.uiuc.edu Fri May 25 15:39:57 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 25 May 2007 14:39:57 -0500 Subject: [SciPy-user] list of arrays Message-ID: <9EADC1E53F9C70479BF6559370369114134472@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Robert Kern > Sent: Friday, May 25, 2007 2:32 PM > To: SciPy Users List > Subject: Re: [SciPy-user] list of arrays > > Trevis Crane wrote: > >> -----Original Message----- > >> From: scipy-user-bounces at scipy.org > > [mailto:scipy-user-bounces at scipy.org] On > >> Behalf Of Ryan May > >> Sent: Friday, May 25, 2007 1:57 PM > >> To: SciPy Users List > >> Subject: Re: [SciPy-user] list of arrays > >> > >> Trevis Crane wrote: > >>> Hi, > >>> > >>> If I have a list of arrays, what's the best way of getting the last > >>> element of each array without using a for-loop? > >>> > >> [array[-1] for array in list] > > > > [Trevis Crane] > > > > I know I'm often dense, but I'm unsure how to implement this suggestion. > > Here's some code to clarify what I mean: > > > > from numpy import * > > a = array([1,2,3]) > > b = array([4,5,6]) > > c = array([7,8,9]) > > > > d = [] > > d.append(a) > > d.append(b) > > d.append(c) > > > > Now, d is a list containing three arrays, a,b and c. I assume you > > suggest something like this: > > > > d[array[-1] for array in d] > > > > But this returns a syntax error. I'm sure I'm missing something but not > > sure what. > > Exactly this: > > e = [array[-1] for array in d] [Trevis Crane] Gotcha. Thanks. > > No more. No less. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From rmay at ou.edu Fri May 25 15:45:16 2007 From: rmay at ou.edu (Ryan May) Date: Fri, 25 May 2007 14:45:16 -0500 Subject: [SciPy-user] list of arrays In-Reply-To: <9EADC1E53F9C70479BF6559370369114142ED7@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142ED7@mrlnt6.mrl.uiuc.edu> Message-ID: <46573CCC.1050603@ou.edu> Trevis Crane wrote: >> -----Original Message----- >> From: scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] On >> Behalf Of Ryan May >> Sent: Friday, May 25, 2007 1:57 PM >> To: SciPy Users List >> Subject: Re: [SciPy-user] list of arrays >> >> Trevis Crane wrote: >>> Hi, >>> >>> >>> >>> If I have a list of arrays, what's the best way of getting the last >>> element of each array without using a for-loop? >>> >> [array[-1] for array in list] > > [Trevis Crane] > > I know I'm often dense, but I'm unsure how to implement this suggestion. > Here's some code to clarify what I mean: > > from numpy import * > a = array([1,2,3]) > b = array([4,5,6]) > c = array([7,8,9]) > > d = [] > d.append(a) > d.append(b) > d.append(c) > > Now, d is a list containing three arrays, a,b and c. I assume you > suggest something like this: > > d[array[-1] for array in d] > > But this returns a syntax error. I'm sure I'm missing something but not > sure what. > > What I want is the output of the following for-loop > > e = zeros(len(d)) > for j in range(len(d)) > e[j] = d[j][-1] > > This gives me e, which obviously is an array containing the last > elements of each of the arrays contained in the list. 
>
>

Well, using the list comprehension from above, this would be:

e = numpy.array([a[-1] for a in d])

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From anand at cse.ucsc.edu  Sun May 27 17:42:03 2007
From: anand at cse.ucsc.edu (Anand Patil)
Date: Sun, 27 May 2007 14:42:03 -0700
Subject: [SciPy-user] Fwd: Numpy.linalg not linking dgesdd: threaded ATLAS, AMD 64 bit, Fedora-5
In-Reply-To: <2bc7a5a50705271440x4456d82p36bd827e0b8fbda8@mail.gmail.com>
References: <2bc7a5a50705271440x4456d82p36bd827e0b8fbda8@mail.gmail.com>
Message-ID: <2bc7a5a50705271442s28d878b0gdc3bc64532913964@mail.gmail.com>

Hi all,

I'm having trouble understanding the following error; it looks like
numpy's setup.py is finding my ATLAS installation all right, but when
linalg tries to load the non-ATLAS function dgesdd it has problems. I
tried adding extra_compile_args and extra_link_args = -fPIC to the
add_extension('lapack_lite'...) portion of linalg's setup.py to no
avail.

threaded ATLAS, AMD 64 bit, Fedora-5; numpy source checked out just now.
Thanks in advance, Anand Problem: In [2]: from numpy.linalg import * --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /projects/mangellab/anand/ /projects/mangellab/PythonPkgs/lib64/python/numpy/__init__.py 41 import lib 42 from lib import * ---> 43 import linalg 44 import fft 45 import random /projects/mangellab/PythonPkgs/lib64/python/numpy/linalg/__init__.py 2 from info import __doc__ 3 ----> 4 from linalg import * 5 6 def test(level=1, verbosity=1): /projects/mangellab/PythonPkgs/lib64/python/numpy/linalg/linalg.py 23 isfinite 24 from numpy.lib import triu ---> 25 from numpy.linalg import lapack_lite 26 27 fortran_int = intc ImportError: /projects/mangellab/PythonPkgs/lib64/python/numpy/linalg/lapack_lite.so: undefined symbol: dgesdd_ My site.cfg file: [blas_opt] libraries = ptf77blas, ptcblas, atlas library_dirs = /projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 include_dirs = /projects/mangellab/anand/ATLAS/include [lapack_opt] libraries = lapack, ptf77blas, ptcblas, atlas library_dirs = /projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 include_dirs = /projects/mangellab/anand/ATLAS/ setup.py output: [anand at stardance numpy]$ python setup.py install --home=/projects/mangellab/PythonPkgs Running from numpy source directory. 
F2PY Version 2_3830 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4'] language = c customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.6.0 built by anand on Sun May 27 11:33:38 PDT 2007: UNAME : Linux stardance.cse.ucsc.edu 2.6.20-1.2300.fc5 #1 SMP Sun Mar 11 19:29:01 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 524288 F77 : /usr/bin/g77, version GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-56.fc5)) 3.2.3 20030502 (Red Hat Linux 3.2.3-13) F77FLAGS : -fPIC -fomit-frame-pointer -O -m64 CC : /usr/bin/gcc, version gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-51) CC FLAGS : -fPIC -fomit-frame-pointer -O -mfpmath=387 -m64 MCC : /usr/bin/gcc, version gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-51) MCCFLAGS : -fPIC -fomit-frame-pointer -O -mfpmath=387 -m64 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS /projects/mangellab/anand/numpy/numpy/distutils/system_info.py:943: UserWarning: ********************************************************************* Lapack library (from ATLAS) is probably incomplete: size of /projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4/liblapack.a is 485k (expected >4000k) Follow the instructions in the KNOWN PROBLEMS section of the file numpy/INSTALL.txt. 
********************************************************************* warnings.warn(message) Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4'] language = c customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 -llapack -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.6.0 built by anand on Sun May 27 11:33:38 PDT 2007: UNAME : Linux stardance.cse.ucsc.edu 2.6.20-1.2300.fc5 #1 SMP Sun Mar 11 19:29:01 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 524288 F77 : /usr/bin/g77, version GNU Fortran (GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-56.fc5)) 3.2.3 20030502 (Red Hat Linux 3.2.3-13) F77FLAGS : -fPIC -fomit-frame-pointer -O -m64 CC : /usr/bin/gcc, version gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-51) CC FLAGS : -fPIC -fomit-frame-pointer -O -mfpmath=387 -m64 MCC : /usr/bin/gcc, version gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-51) MCCFLAGS : -fPIC -fomit-frame-pointer -O -mfpmath=387 -m64 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources creating build creating build/src.linux-x86_64-2.4 creating build/src.linux-x86_64-2.4/numpy creating build/src.linux-x86_64-2.4/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.linux-x86_64-2.4/numpy/core Generating build/src.linux-x86_64-2.4/numpy/core/config.h customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler using config C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-I/usr/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:50: warning: format '%d' expects type 'int', but argument 4 has type 'long unsigned int' _configtest.c:57: warning: format '%d' expects type 'int', but argument 4 has type 'long unsigned int' _configtest.c:72: warning: format '%d' expects type 'int', but argument 4 has type 'long unsigned int' gcc -pthread _configtest.o -L/usr/local/lib -L/usr/lib -o _configtest /usr/bin/ld: skipping incompatible /usr/lib/libpthread.so when searching for -lpthread /usr/bin/ld: skipping incompatible /usr/lib/libpthread.a when searching for -lpthread /usr/bin/ld: skipping incompatible 
/usr/lib/libc.so when searching for -lc /usr/bin/ld: skipping incompatible /usr/lib/libc.a when searching for -lc _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -o _configtest _configtest.o: In function `main': /projects/mangellab/anand/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status _configtest.o: In function `main': /projects/mangellab/anand/numpy/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c gcc -pthread _configtest.o -lm -o _configtest _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
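The failure/success pair above is numpy.distutils probing whether the math library must be linked explicitly: a test program calling exp() fails to link bare ("undefined reference to `exp'"), then succeeds once -lm is added. The same fact can be observed from Python via ctypes (a sketch assuming a glibc-based Linux, where the math library's soname is libm.so.6):

```python
import ctypes

# On glibc Linux, exp() lives in the separate math library libm --
# which is exactly why the bare link of _configtest.o failed above
# and the retry with -lm succeeded.
libm = ctypes.CDLL("libm.so.6")
libm.exp.restype = ctypes.c_double
libm.exp.argtypes = [ctypes.c_double]

print(libm.exp(1.0))  # ~2.718281828 (e)
```

On platforms where libc and libm are merged (e.g. macOS, musl), the bare link would have succeeded and this probe is moot.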
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: warning: statement with no effect gcc -pthread _configtest.o -lm -o _configtest success! removing: _configtest.c _configtest.o _configtest adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. creating build/src.linux-x86_64-2.4/numpy/core/src conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/umathmodule.c adding 'build/src.linux-x86_64-2.4/numpy/core/src' to include_dirs. 
numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/src/scalartypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/src/arraytypes.inc', 'build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/_sortmodule.c numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.linux-x86_64-2.4/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h' to sources. conv_template:> build/src.linux-x86_64-2.4/numpy/core/src/scalarmathmodule.c numpy.core - nothing done with h_files= ['build/src.linux-x86_64-2.4/numpy/core/config.h', 'build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h', 'build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h'] building extension "numpy.core._dotblas" sources adding 'numpy/core/blasdot/_dotblas.c' to sources. building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources creating build/src.linux-x86_64-2.4/numpy/linalg adding 'numpy/linalg/lapack_litemodule.c' to sources. 
building extension "numpy.random.mtrand" sources creating build/src.linux-x86_64-2.4/numpy/random C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: _configtest.c _configtest.c:7:2: error: #error No _WIN32 _configtest.c:7:2: error: #error No _WIN32 failure. removing: _configtest.c _configtest.o building data_files sources running build_py creating build/lib.linux-x86_64-2.4 creating build/lib.linux-x86_64-2.4/numpy copying numpy/_import_tools.py -> build/lib.linux-x86_64- 2.4/numpy copying numpy/ctypeslib.py -> build/lib.linux-x86_64-2.4/numpy copying numpy/__init__.py -> build/lib.linux-x86_64-2.4/numpy copying numpy/setup.py -> build/lib.linux-x86_64-2.4/numpy copying numpy/matlib.py -> build/lib.linux-x86_64- 2.4/numpy copying numpy/add_newdocs.py -> build/lib.linux-x86_64-2.4/numpy copying numpy/dual.py -> build/lib.linux-x86_64-2.4/numpy copying numpy/version.py -> build/lib.linux-x86_64-2.4/numpy copying build/src.linux-x86_64- 2.4/numpy/__config__.py -> build/lib.linux-x86_64-2.4/numpy creating build/lib.linux-x86_64-2.4/numpy/distutils copying numpy/distutils/misc_util.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/core.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/__init__.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/info.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/mingw32ccompiler.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/line_endings.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/from_template.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/system_info.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying 
numpy/distutils/conv_template.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/setup.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/cpuinfo.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/lib2def.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/intelccompiler.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/extension.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/interactive.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/ccompiler.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/__version__.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/log.py -> build/lib.linux-x86_64-2.4/numpy/distutils copying numpy/distutils/unixccompiler.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying numpy/distutils/exec_command.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils copying numpy/distutils/environment.py -> build/lib.linux-x86_64-2.4 /numpy/distutils copying build/src.linux-x86_64-2.4/numpy/distutils/__config__.py -> build/lib.linux-x86_64-2.4/numpy/distutils creating build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/config_compiler.py -> build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/build_clib.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils/command copying numpy/distutils/command/egg_info.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command copying numpy/distutils/command/__init__.py -> build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/build.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command copying numpy/distutils/command/build_ext.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command copying numpy/distutils/command/install_data.py -> build/lib.linux-x86_64- 2.4/numpy/distutils/command copying 
numpy/distutils/command/install_headers.py -> build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/bdist_rpm.py -> build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/config.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command copying numpy/distutils/command/build_scripts.py -> build/lib.linux-x86_64- 2.4/numpy/distutils/command copying numpy/distutils/command/build_src.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils/command copying numpy/distutils/command/install.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command copying numpy/distutils/command/sdist.py -> build/lib.linux-x86_64-2.4/numpy/distutils/command copying numpy/distutils/command/build_py.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/command creating build/lib.linux-x86_64-2.4/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/__init__.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/mips.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/gnu.py -> build/lib.linux-x86_64-2.4/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/intel.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/vast.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/absoft.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/none.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/compaq.py -> build/lib.linux-x86_64-2.4/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/lahey.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/g95.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/hpux.py -> build/lib.linux-x86_64- 2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/nag.py -> 
build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/sun.py -> build/lib.linux-x86_64-2.4/numpy/distutils/fcompiler copying numpy/distutils/fcompiler/pg.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler copying numpy/distutils/fcompiler/ibm.py -> build/lib.linux-x86_64-2.4 /numpy/distutils/fcompiler creating build/lib.linux-x86_64- 2.4/numpy/testing copying numpy/testing/numpytest.py -> build/lib.linux-x86_64-2.4 /numpy/testing copying numpy/testing/info.py -> build/lib.linux-x86_64-2.4/numpy/testing copying numpy/testing/__init__.py -> build/lib.linux-x86_64- 2.4 /numpy/testing copying numpy/testing/setup.py -> build/lib.linux-x86_64-2.4/numpy/testing copying numpy/testing/utils.py -> build/lib.linux-x86_64-2.4/numpy/testing creating build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/rules.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/common_rules.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/diagnose.py -> build/lib.linux-x86_64- 2.4/numpy/f2py copying numpy/f2py/info.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/capi_maps.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/auxfuncs.py -> build/lib.linux-x86_64- 2.4/numpy/f2py copying numpy/f2py/cb_rules.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/__init__.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/setup.py -> build/lib.linux-x86_64- 2.4/numpy/f2py copying numpy/f2py/use_rules.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/func2subr.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/f2py2e.py -> build/lib.linux-x86_64- 2.4/numpy/f2py copying numpy/f2py/f90mod_rules.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/f2py_testing.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/crackfortran.py -> build/lib.linux-x86_64- 2.4/numpy/f2py copying numpy/f2py/cfuncs.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying 
numpy/f2py/__version__.py -> build/lib.linux-x86_64-2.4/numpy/f2py copying numpy/f2py/__svn_version__.py -> build/lib.linux-x86_64- 2.4 /numpy/f2py creating build/lib.linux-x86_64-2.4/numpy/f2py/lib copying numpy/f2py/lib/api.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib copying numpy/f2py/lib/nary.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib copying numpy/f2py/lib/main.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib copying numpy/f2py/lib/wrapper_base.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib copying numpy/f2py/lib/__init__.py -> build/lib.linux-x86_64- 2.4 /numpy/f2py/lib copying numpy/f2py/lib/setup.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib copying numpy/f2py/lib/py_wrap.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib copying numpy/f2py/lib/py_wrap_subprogram.py -> build/lib.linux-x86_64- 2.4 /numpy/f2py/lib copying numpy/f2py/lib/py_wrap_type.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib creating build/lib.linux-x86_64-2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/test_parser.py -> build/lib.linux-x86_64- 2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/block_statements.py -> build/lib.linux-x86_64- 2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/pattern_tools.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/__init__.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/utils.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/parsefortran.py -> build/lib.linux-x86_64- 2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/splitline.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/base_classes.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/readfortran.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/api.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/sourceinfo.py 
-> build/lib.linux-x86_64- 2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/Fortran2003.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser copying numpy/f2py/lib/parser/test_Fortran2003.py -> build/lib.linux-x86_64- 2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/typedecl_statements.py -> build/lib.linux-x86_64-2.4/numpy/f2py/lib/parser copying numpy/f2py/lib/parser/statements.py -> build/lib.linux-x86_64-2.4 /numpy/f2py/lib/parser creating build/lib.linux-x86_64- 2.4/numpy/core copying numpy/core/arrayprint.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/info.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/defchararray.py -> build/lib.linux-x86_64- 2.4/numpy/core copying numpy/core/ma.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/__init__.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/setup.py -> build/lib.linux-x86_64- 2.4/numpy/core copying numpy/core/records.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/numeric.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/_internal.py -> build/lib.linux-x86_64- 2.4/numpy/core copying numpy/core/memmap.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/defmatrix.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/fromnumeric.py -> build/lib.linux-x86_64- 2.4/numpy/core copying numpy/core/numerictypes.py -> build/lib.linux-x86_64-2.4/numpy/core copying numpy/core/__svn_version__.py -> build/lib.linux-x86_64-2.4 /numpy/core copying numpy/core/code_generators/generate_array_api.py -> build/lib.linux-x86_64- 2.4/numpy/core creating build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/shape_base.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/scimath.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/twodim_base.py -> build/lib.linux-x86_64- 2.4/numpy/lib copying numpy/lib/machar.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/info.py -> 
build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/function_base.py -> build/lib.linux-x86_64- 2.4/numpy/lib copying numpy/lib/__init__.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/setup.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/utils.py -> build/lib.linux-x86_64- 2.4/numpy/lib copying numpy/lib/getlimits.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/convdtype.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/arraysetops.py -> build/lib.linux-x86_64- 2.4/numpy/lib copying numpy/lib/user_array.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/type_check.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/index_tricks.py -> build/lib.linux-x86_64- 2.4/numpy/lib copying numpy/lib/polynomial.py -> build/lib.linux-x86_64-2.4/numpy/lib copying numpy/lib/ufunclike.py -> build/lib.linux-x86_64-2.4/numpy/lib creating build/lib.linux-x86_64-2.4/numpy/oldnumeric copying numpy/oldnumeric/__init__.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/precision.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/misc.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/ma.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/rng_stats.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/setup.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/ufuncs.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/matrix.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/random_array.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/typeconv.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/functions.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/mlab.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/user_array.py 
-> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/compat.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/fft.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/fix_default_axis.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/array_printer.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/alter_code1.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/alter_code2.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/arrayfns.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric copying numpy/oldnumeric/rng.py -> build/lib.linux-x86_64- 2.4 /numpy/oldnumeric copying numpy/oldnumeric/linear_algebra.py -> build/lib.linux-x86_64-2.4 /numpy/oldnumeric creating build/lib.linux-x86_64-2.4/numpy/numarray copying numpy/numarray/setup.py -> build/lib.linux-x86_64- 2.4 /numpy/numarray copying numpy/numarray/convolve.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/ma.py -> build/lib.linux-x86_64-2.4/numpy/numarray copying numpy/numarray/random_array.py -> build/lib.linux-x86_64- 2.4 /numpy/numarray copying numpy/numarray/__init__.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/nd_image.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/ufuncs.py -> build/lib.linux-x86_64- 2.4 /numpy/numarray copying numpy/numarray/matrix.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/functions.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/mlab.py -> build/lib.linux-x86_64- 2.4/numpy/numarray copying numpy/numarray/util.py -> build/lib.linux-x86_64-2.4/numpy/numarray copying numpy/numarray/image.py -> build/lib.linux-x86_64-2.4/numpy/numarray copying numpy/numarray/fft.py -> build/lib.linux-x86_64- 2.4/numpy/numarray copying numpy/numarray/numerictypes.py -> build/lib.linux-x86_64-2.4 /numpy/numarray 
copying numpy/numarray/alter_code1.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/compat.py -> build/lib.linux-x86_64- 2.4 /numpy/numarray copying numpy/numarray/alter_code2.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/session.py -> build/lib.linux-x86_64-2.4 /numpy/numarray copying numpy/numarray/linear_algebra.py -> build/lib.linux-x86_64- 2.4 /numpy/numarray creating build/lib.linux-x86_64-2.4/numpy/fft copying numpy/fft/info.py -> build/lib.linux-x86_64-2.4/numpy/fft copying numpy/fft/fftpack.py -> build/lib.linux-x86_64-2.4/numpy/fft copying numpy/fft/__init__.py -> build/lib.linux-x86_64- 2.4/numpy/fft copying numpy/fft/helper.py -> build/lib.linux-x86_64-2.4/numpy/fft copying numpy/fft/setup.py -> build/lib.linux-x86_64-2.4/numpy/fft creating build/lib.linux-x86_64-2.4/numpy/linalg copying numpy/linalg/info.py -> build/lib.linux-x86_64- 2.4/numpy/linalg copying numpy/linalg/__init__.py -> build/lib.linux-x86_64-2.4/numpy/linalg copying numpy/linalg/setup.py -> build/lib.linux-x86_64-2.4/numpy/linalg copying numpy/linalg/linalg.py -> build/lib.linux-x86_64- 2.4/numpy/linalg creating build/lib.linux-x86_64-2.4/numpy/random copying numpy/random/info.py -> build/lib.linux-x86_64-2.4/numpy/random copying numpy/random/__init__.py -> build/lib.linux-x86_64-2.4/numpy/random copying numpy/random/setup.py -> build/lib.linux-x86_64-2.4/numpy/random running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'numpy.core.multiarray' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4 creating build/temp.linux-x86_64-2.4/numpy creating build/temp.linux-x86_64-2.4/numpy/core creating build/temp.linux-x86_64-2.4/numpy/core/src compile options: '-Ibuild/src.linux-x86_64- 
2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/core/src/multiarraymodule.c gcc -pthread -shared build/temp.linux-x86_64- 2.4/numpy/core/src/multiarraymodule.o -lm -lm -o build/lib.linux-x86_64-2.4/numpy/core/multiarray.so building 'numpy.core.umath' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/build creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4 creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy creating build/temp.linux-x86_64- 2.4/build/src.linux-x86_64-2.4/numpy/core creating build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4 /numpy/core/src compile options: '-Ibuild/src.linux-x86_64-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-x86_64- 2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: build/src.linux-x86_64-2.4/numpy/core/src/umathmodule.c gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src.linux-x86_64- 2.4/numpy/core/src/umathmodule.o -lm -o build/lib.linux-x86_64-2.4/numpy/core/umath.so building 'numpy.core._sort' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: build/src.linux-x86_64-2.4/numpy/core/src/_sortmodule.c gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/numpy/core/src/_sortmodule.o -lm -o build/lib.linux-x86_64-2.4/numpy/core/_sort.so building 'numpy.core.scalarmath' 
extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64- 2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: build/src.linux-x86_64-2.4/numpy/core/src/scalarmathmodule.c gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src.linux-x86_64- 2.4/numpy/core/src/scalarmathmodule.o -lm -o build/lib.linux-x86_64-2.4/numpy/core/scalarmath.so building 'numpy.core._dotblas' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/core/blasdot compile options: '-DATLAS_INFO="\"3.6.0\"" -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/core/blasdot/_dotblas.c gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 -lptf77blas -lptcblas -latlas -o build/lib.linux-x86_64- 2.4/numpy/core/_dotblas.so building 'numpy.lib._compiled_base' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/lib creating build/temp.linux-x86_64-2.4/numpy/lib/src compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/lib/src/_compiled_base.c gcc -pthread -shared 
build/temp.linux-x86_64-2.4/numpy/lib/src/_compiled_base.o -o build/lib.linux-x86_64-2.4/numpy/lib/_compiled_base.so building 'numpy.numarray._capi' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/numarray compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/numarray/_capi.c gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/numarray/_capi.o -o build/lib.linux-x86_64-2.4/numpy/numarray/_capi.so building 'numpy.fft.fftpack_lite' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/fft compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/fft/fftpack_litemodule.c gcc: numpy/fft/fftpack.c gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/fft/fftpack_litemodule.o build/temp.linux-x86_64-2.4/numpy/fft/fftpack.o -o build/lib.linux-x86_64- 2.4/numpy/fft/fftpack_lite.so building ' numpy.linalg.lapack_lite' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/linalg compile options: '-DATLAS_INFO="\"3.6.0\"" -Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' extra options: '-fPIC' gcc: numpy/linalg/lapack_litemodule.c gcc -pthread 
-shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/projects/mangellab/anand/ATLAS/lib/Linux_HAMMER64SSE2_4 -llapack -lptf77blas -lptcblas -latlas -o build/lib.linux-x86_64- 2.4/numpy/linalg/lapack_lite.so -fPIC building 'numpy.random.mtrand' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC creating build/temp.linux-x86_64-2.4/numpy/random creating build/temp.linux-x86_64-2.4/numpy/random/mtrand compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/random/mtrand/initarray.c gcc: numpy/random/mtrand/distributions.c gcc: numpy/random/mtrand/randomkit.c gcc: numpy/random/mtrand/mtrand.c gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/random/mtrand/mtrand.o build/temp.linux-x86_64- 2.4/numpy/random/mtrand/randomkit.o build/temp.linux-x86_64-2.4/numpy/random/mtrand/initarray.o build/temp.linux-x86_64-2.4/numpy/random/mtrand/distributions.o -lm -o build/lib.linux-x86_64-2.4/numpy/random/mtrand.so running build_scripts creating build/scripts.linux-x86_64-2.4 Creating build/scripts.linux-x86_64-2.4/f2py adding 'build/scripts.linux-x86_64-2.4/f2py' to scripts changing mode of build/scripts.linux-x86_64-2.4/f2py from 644 to 755 running install_lib copying build/lib.linux-x86_64-2.4/numpy/__config__.py -> /projects/mangellab/PythonPkgs/lib64/python/numpy copying build/lib.linux-x86_64-2.4/numpy/distutils/__config__.py -> /projects/mangellab/PythonPkgs/lib64/python/numpy/distutils copying build/lib.linux-x86_64-2.4/numpy/f2py/__svn_version__.py -> /projects/mangellab/PythonPkgs/lib64/python/numpy/f2py copying build/lib.linux-x86_64-2.4/numpy/core/__svn_version__.py -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying 
build/lib.linux-x86_64-2.4/numpy/core/multiarray.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying build/lib.linux-x86_64-2.4/numpy/core/umath.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying build/lib.linux-x86_64-2.4/numpy/core/_sort.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying build/lib.linux-x86_64-2.4/numpy/core/scalarmath.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core copying build/lib.linux-x86_64-2.4/numpy/lib/_compiled_base.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/lib copying build/lib.linux-x86_64-2.4/numpy/numarray/_capi.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/numarray copying build/lib.linux-x86_64-2.4/numpy/fft/fftpack_lite.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/fft copying build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/linalg copying build/lib.linux-x86_64-2.4/numpy/random/mtrand.so -> /projects/mangellab/PythonPkgs/lib64/python/numpy/random byte-compiling /projects/mangellab/PythonPkgs/lib64/python/numpy/__config__.py to __config__.pyc byte-compiling /projects/mangellab/PythonPkgs/lib64/python/numpy/distutils/__config__.py to __config__.pyc byte-compiling /projects/mangellab/PythonPkgs/lib64/python/numpy/f2py/__svn_version__.py to __svn_version__.pyc byte-compiling /projects/mangellab/PythonPkgs/lib64/python/numpy/core/__svn_version__.py to __svn_version__.pyc running install_scripts copying build/scripts.linux-x86_64-2.4/f2py -> /projects/mangellab/PythonPkgs/bin changing mode of /projects/mangellab/PythonPkgs/bin/f2py to 755 running install_data copying build/src.linux-x86_64-2.4/numpy/core/config.h -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core/include/numpy copying build/src.linux-x86_64-2.4/numpy/core/__multiarray_api.h -> 
/projects/mangellab/PythonPkgs/lib64/python/numpy/core/include/numpy copying build/src.linux-x86_64-2.4/numpy/core/multiarray_api.txt -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core/include/numpy copying build/src.linux-x86_64-2.4/numpy/core/__ufunc_api.h -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core/include/numpy copying build/src.linux-x86_64-2.4/numpy/core/ufunc_api.txt -> /projects/mangellab/PythonPkgs/lib64/python/numpy/core/include/numpy [anand at stardance numpy]$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Sun May 27 19:05:59 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 27 May 2007 17:05:59 -0600 Subject: [SciPy-user] APL2007: Arrays and Objects Message-ID: <465A0ED7.1090506@shrogers.com> I'd like to see some NumPy/SciPy participation in this. I'm thinking of submitting a "Python as a Tool of Thought" paper, playing off of Ken Iverson's Turing Award Lecture "Notation as a Tool of Thought". A NumPy/SciPy tutorial would be valuable as well. # Steve =================================== Announcing APL2007, Montreal, October 21-23 The APL 2007 conference, sponsored by ACM SIGAPL, has as its principal theme "Arrays and Objects" and, appropriately, is co-located with OOPSLA 2007, in Montreal this October. APL 2007 starts with a tutorial day on Sunday, October 21, followed by a two-day program on Monday and Tuesday, October 22 and 23. APLers are welcome to attend OOPSLA program events on Monday and Tuesday (and OOPSLA attendees are welcome to come to APL program events). Registrants at APL 2007 can add full OOPSLA attendance at a favorable price. 
Dates:
  Sunday Oct 21              Tutorials
  Monday, Tuesday Oct 22-23  APL 2007 program
  Monday-Friday Oct 22-26    OOPSLA program

APL 2007 keynote speaker: Guy Steele, Sun Microsystems Laboratories

Tutorials:
  Using objects within APL
  Array language practicum
  Intro to [language] for other-language users
  (We expect that there will be at least one introductory tutorial on
  "classic" APL, and hope to have introductions to a variety of array
  languages)

We solicit papers and proposals for tutorials, panels and workshops on all
aspects of array-oriented programming and languages; this year we have
particular interest in the themes of:
  - integrating the use of arrays and objects
  - languages that support the use of arrays as a central and thematic
    technique
  - marketplace and education: making practitioners aware of array thinking
    and array languages

Our interest is in the essential use of arrays in programming in any
language (though our historical concern has been the APL family of
languages: classic APL, J, K, NIAL, ....).

Dates: Tutorial, panel, and workshop proposals, and notice of intent to
submit papers, are due by Friday, June 15, to the Program Chair.
Contributed papers, not more than 10 pages in length, are due by Monday,
July 23, to the Program Chair. Details of the form of submission can be
obtained from the Program Chair. Deadline for detailed
tutorial/panel/workshop information TBA.

Cost (to SIGAPL and ACM members, approximate $US, final cost TBA):
  APL2007 registration    $375
  Tutorial day            $250
  Single conference days  $200

Social events: Opening reception Monday; others TBA

Conference venue: Palais de Congres, Montreal, Quebec, CANADA
Conference hotel: Hyatt Regency Montreal

Committee:
  General Chair  Guy Laroque       Guy.Laroque at nrcan.gc.ca
  Program Chair  Lynne C. Shaw     Shaw at ACM.org
  Treasurer      Steven H. Rogers  Steve at SHRogers.com
  Publicity      Mike Kent         MKent at ACM.org

Links:
  APL2007                 http://www.sigapl.org/apl2007
  OOPSLA 2007             http://www.oopsla.org/oopsla2007
  Palais de Congres       http://www.congresmtl.com/
  Hyatt Regency Montreal  http://montreal.hyatt.com
  Guy Steele              http://research.sun.com/people/mybio.php?uid=25706
  ACM SIGAPL              http://www.sigapl.org

From david at ar.media.kyoto-u.ac.jp Mon May 28 02:27:06 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 May 2007 15:27:06 +0900 Subject: [SciPy-user] scipy on ps3/ydl In-Reply-To: References: Message-ID: <465A763A.6090705@ar.media.kyoto-u.ac.jp> Alastair Basden wrote:
> Hi,
> has anyone had any luck installing scipy on a ps3 with yellow dog linux
> (ydl)?
>
> I've managed to get it installing okay, but when I try:
> import scipy.lib.blas.fblas
> I get:
> File "/usr/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", line
> 9, in ?
> import fblas
> ImportError: /usr/lib/python2.4/site-packages/scipy/lib/blas/fblas.so:
> R_PPC_REL24 relocation at 0x0db4fb18 for symbol `cabsf' out of range
>
> Any ideas?
I cannot tell much because I don't have any access to a PS3, and I would need a complete log of the build from you. But when I compile code for ppc, I sometimes see this error for object files that end up in shared libraries but were compiled without the -fPIC option (the other case I know of is a broken cross-compiling toolchain, but that would be too sad to consider for now in your case). cheers, David

From peridot.faceted at gmail.com Mon May 28 04:18:51 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 28 May 2007 04:18:51 -0400 Subject: [SciPy-user] hyp2f1 Message-ID: Hi, I'm writing some code to do something rather complicated, one part of which involves evaluating the hypergeometric 2F1 function. I was very pleased to discover that scipy implements it (as scipy.special.hyp2f1).
I was not so pleased to discover that scipy's implementation appears to contain a bug, which DM Cooke has kindly fixed. (http://projects.scipy.org/scipy/scipy/changeset/3043) However, I am encountering some bad numerical behaviour and I'm wondering if this function can be blamed.

More specifically, I worked around the bug, as did DM Cooke, by averaging points on either side of the problem point. Since the function is fairly smooth, this should not be too problematic. However, I am in the unfortunate situation of feeding the output of this function into a calculation that does some very delicate cancellations (calculating the covariance matrix of high-order detrending filters), and the matrix that results is not positive definite, though it should be.

Is it likely that transformation of the function near these values is causing problems? Are there other options for transforming the hypergeometric function that would improve accuracy? For what values of the arguments is scipy's current implementation most accurate?

Thanks, Anne

From issa at aims.ac.za Mon May 28 07:38:26 2007 From: issa at aims.ac.za (Issa Karambal) Date: Mon, 28 May 2007 13:38:26 +0200 Subject: [SciPy-user] laurent polynomial Message-ID: <465ABF32.9020209@aims.ac.za> hello all, is there anyone who knows how to compute the Laurent polynomial? I have tried to use scipy, but it does not seem possible to do it. (polydiv(1,p) = 'error', where p is a polynomial) issa

From stefan at sun.ac.za Mon May 28 10:23:31 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 28 May 2007 16:23:31 +0200 Subject: [SciPy-user] laurent polynomial In-Reply-To: <465ABF32.9020209@aims.ac.za> References: <465ABF32.9020209@aims.ac.za> Message-ID: <20070528142331.GD14004@mentat.za.net> Hi Issa

On Mon, May 28, 2007 at 01:38:26PM +0200, Issa Karambal wrote:
> is there anyone who knows how to compute the Laurent polynomial?
> I have tried to use scipy, but it does not seem possible to do it.
> (polydiv(1,p) = 'error', where p is a polynomial)

You could always just evaluate it yourself, for example:

# =======================================================
#   1
# ----- = sum_0^inf (-1)^n z^n
# 1 + z
def f(z, order):
    ans = 0.
    for n in range(order):
        ans += (-1)**n * z**n
    return ans

z = 0.07
print "Precise value: ", 1./(1+z)
for order in range(1,10):
    print "%d order estimation: %s" % (order, f(z,order))
# =======================================================

Cheers
Stéfan

From nicolas.chopin at bristol.ac.uk Mon May 28 13:50:08 2007 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Mon, 28 May 2007 19:50:08 +0200 Subject: [SciPy-user] how to crash scipy in 3s! (memory leak?) Message-ID: <465B1650.4050400@bris.ac.uk> Hi, the program below eats all the memory of my computer in approx. 3 seconds, making it pretty unstable.

from scipy import *
def f(a):
    return rand(1)+a

n = 1e8
for i in range(n):
    x += f(0.1)

It looks like a problem with rand()? e.g. rand() forgets to free memory? Or am I doing something "forbidden"? like returning an array, which creates a reference which is never deleted? In this case, what is the correct way to do something like

def f(x):
    return some_function_of(x, rand(1) )

I use Ubuntu Feisty, Scipy 0.5.2, Numpy 1.01, Python 2.5.1 (all 3 from Ubuntu repositories). The program above is in fact a simplified version of a small bit of one of my programs, which was leaking memory too, albeit at a slower pace. This problem has bugged me for weeks now, before I managed to track it down to this. So any help is greatly appreciated!

Cheers

From muchomuse at yahoo.com Mon May 28 13:50:51 2007 From: muchomuse at yahoo.com (Luke Bradley) Date: Mon, 28 May 2007 10:50:51 -0700 (PDT) Subject: [SciPy-user] md5 digests for mac intel superpack Message-ID: <235472.36225.qm@web57813.mail.re3.yahoo.com> Hi. I downloaded the scipy superpack for python 2.5 on intel mac from the Download page.
The md5 digest given on the page is

d6802467cc9734269c1d61dca88c20a8

but the one I got when I typed md5 [filename] from my command line is

554b3e0b3327ff7f9ef8f62bc7311333

I tried downloading it to a separate (linux) machine on a different network and got the same thing. Am I doing something wrong, or is this file corrupted? Thanks
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From matthieu.brucher at gmail.com Mon May 28 14:12:11 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 28 May 2007 20:12:11 +0200 Subject: [SciPy-user] how to crash scipy in 3s! (memory leak?) In-Reply-To: <465B1650.4050400@bris.ac.uk> References: <465B1650.4050400@bris.ac.uk> Message-ID: Hi,

Did you try xrange instead of range? range(1e8) eats several hundred MB, which might lead to a crash.

Matthieu

2007/5/28, Nicolas Chopin :
>
> Hi,
> the program below eats all the memory of my computer
> in approx. 3 seconds, making it pretty unstable.
>
> from scipy import *
> def f(a):
>     return rand(1)+a
>
> n = 1e8
> for i in range(n):
>     x += f(0.1)
>
> It looks like a problem with rand()? e.g. rand() forgets
> to free memory? Or am I doing something "forbidden"?
> like returning an array, which creates a reference
> which is never deleted?
> In this case, what is the correct way to do something like
> def f(x):
>     return some_function_of(x, rand(1) )
>
> I use Ubuntu Feisty, Scipy 0.5.2, Numpy 1.01, Python 2.5.1
> (all 3 from Ubuntu repositories).
> The program above is in fact a simplified version of a small bit
> of one of my programs, which was leaking memory too,
> albeit at a slower pace. This problem has bugged me for weeks
> now, before I managed to track it down to this. So any help
> is greatly appreciated!
>
> Cheers
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From c-b at asu.edu Mon May 28 16:47:00 2007 From: c-b at asu.edu (Christopher Brown) Date: Mon, 28 May 2007 13:47:00 -0700 Subject: [SciPy-user] MultiChannel Audio Message-ID: <465B3FC4.1000601@asu.edu> Hi List,

I am brand new to Python and SciPy. I am considering a switch after years with Matlab. SciPy seems to do everything I need. My question is: what is the simplest way to implement playback of multichannel audio (>2)?

In Matlab and on Windows, pa_wavplay works great with ASIO for multichannel audio. I pass a matrix, and the number of columns specifies the number of channels. Is there something that can approximate that with Python? A linux/windows solution would be ideal.

I realize that SciPy doesn't do audio io, but I thought that its users might have some insight given its current scope. Or is there a better venue for this question?

From aisaac at american.edu Mon May 28 17:16:37 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 28 May 2007 17:16:37 -0400 Subject: [SciPy-user] MultiChannel Audio In-Reply-To: <465B3FC4.1000601@asu.edu> References: <465B3FC4.1000601@asu.edu> Message-ID: Two suggestions (based on rumor, as I don't touch audio):

http://www.ar.media.kyoto-u.ac.jp/members/david/pyaudiolab.tar.gz
PyAudioLab reads and writes audio files as numpy arrays.
Requires libsndfile: http://www.mega-nerd.com/libsndfile/
There is documentation at
http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudiolab/index.html

For more extensive audio capabilities, try PyAudio:
http://people.csail.mit.edu/hubert/pyaudio/

hth,
Alan Isaac

From david at ar.media.kyoto-u.ac.jp Mon May 28 20:41:04 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 May 2007 09:41:04 +0900 Subject: [SciPy-user] MultiChannel Audio In-Reply-To: References: <465B3FC4.1000601@asu.edu> Message-ID: <465B76A0.2090407@ar.media.kyoto-u.ac.jp> Alan G Isaac wrote:
> Two suggestions (based on rumor, as I don't touch audio):
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/pyaudiolab.tar.gz
> PyAudioLab reads and writes audio files as numpy arrays.
> Requires http://www.mega-nerd.com/libsndfile/ libsndfile
Note that it does not have audio IO capabilities. So it is fine using pyaudiolab for importing data from/to audio files, but not yet to play them on the soundcard (that's why it is still 0.* :) ).

David

From david at ar.media.kyoto-u.ac.jp Mon May 28 21:19:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 May 2007 10:19:07 +0900 Subject: [SciPy-user] how to crash scipy in 3s! (memory leak?) In-Reply-To: <465B1650.4050400@bris.ac.uk> References: <465B1650.4050400@bris.ac.uk> Message-ID: <465B7F8B.1080500@ar.media.kyoto-u.ac.jp> Nicolas Chopin wrote:
> Hi,
> the program below eats all the memory of my computer
> in approx. 3 seconds, making it pretty unstable.
>
> from scipy import *
> def f(a):
>     return rand(1)+a
>
> n = 1e8
> for i in range(n):
>     x += f(0.1)
>
> It looks like a problem with rand()? e.g. rand() forgets
> to free memory? Or am I doing something "forbidden"?
> like returning an array, which creates a reference
> which is never deleted?
> In this case, what is the correct way to do something like
> def f(x):
>     return some_function_of(x, rand(1) )
>
> I use Ubuntu Feisty, Scipy 0.5.2, Numpy 1.01, Python 2.5.1
> (all 3 from Ubuntu repositories).

If it is crashing, it is a bug. If it is eating all your memory, then it is working fine :) First, looping the way you do in an interpreted language (like python) is inherently slow for several reasons: don't do it. You could say the whole point of numpy is to give you abstractions to avoid looping like you're doing.

If I understand correctly, you want to create n random scalars and sum them up, right? First, instead of calling rand(1) n times, you can call rand(n), which will return n random values. Then, instead of accumulating in a loop, you should first create the array of all intermediate values, and then sum the whole array (eg you create a temporary array tmp where tmp[i] contains the i-th call of f(0.1)):

"""
import numpy as N
from scipy import rand

def f(a):
    return rand(len(a)) + a

tmp = f(0.1 * N.ones(n))
x = N.sum(tmp)
"""

(I didn't check whether this code is running). Basically, in scipy/numpy, you should avoid using loops as much as possible, and try to "vectorize" as much as possible. Most numpy/scipy functions can be used like

numfunc(sequence)

instead of

for i in sequence:
    numfunc(i)

Concrete example:

import numpy
from scipy import rand

def numfunc(n):
    a = rand(n)
    return numpy.sum(a)

def pyfunc(n):
    x = 0
    for i in xrange(n):
        x += rand(1)
    return x

numfunc is already 200 times faster for n = 1e4 on my computer, and the difference will grow for bigger sizes. It requires some thinking if you are not used to it. But if you really need loops over around 1e8 elements, you won't be able to do it efficiently otherwise (that is, if you stay in python).
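To make the loop-versus-vectorized gap concrete, here is a small self-contained timing sketch (the function names and exact numbers are illustrative only; the measured speedup depends heavily on your machine and numpy version):

```python
import timeit
import numpy as np

def pyfunc(n):
    # accumulate n random draws one at a time: the loop and the
    # per-call overhead all happen in the Python interpreter
    x = 0.0
    for i in range(n):
        x += np.random.rand()
    return x

def numfunc(n):
    # draw all n values in one call and sum them in compiled code
    return np.random.rand(n).sum()

n = 10000
t_loop = timeit.timeit(lambda: pyfunc(n), number=10)
t_vec = timeit.timeit(lambda: numfunc(n), number=10)
print("loop: %.4fs, vectorized: %.4fs" % (t_loop, t_vec))
```

Both functions compute the same kind of quantity; only where the loop runs differs.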
Technically, the difference is that in the first case, the loop is done in numpy, which is optimized for this kind of thing, and in the second case it is done entirely in python, and loops with function calls are extremely slow compared to a compiled language (several orders of magnitude in most cases; this is actually not always true, there are some techniques to optimize this kind of thing, but that would take us way beyond the point of this thread).

Using xrange as Matthieu suggested would solve the memory part: range(n) allocates a list of n integers (1e8 * 4 bytes minimum, if python is using 32-bit integers; I don't remember), whereas xrange(n) creates each iteration value on the fly.

David

From strawman at astraw.com Mon May 28 21:48:10 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 18:48:10 -0700 Subject: [SciPy-user] MultiChannel Audio In-Reply-To: <465B3FC4.1000601@asu.edu> References: <465B3FC4.1000601@asu.edu> Message-ID: <465B865A.7090401@astraw.com> Dear Christopher,

I haven't used these libraries myself, but you may be interested in fastaudio and pyPortAudio ( http://www.freenet.org.nz/python/pyPortAudio/ ), which apparently wrap the same library, PortAudio, that pa_wavplay wraps.

Christopher Brown wrote:
> Hi List,
>
> I am brand new to Python and SciPy. I am considering a switch after
> years with Matlab.
>
> SciPy seems to do everything I need. My question is, what is the
> simplest way to implement playback of multichannel audio (>2)?
>
> In Matlab and on Windows, pa_wavplay works great with ASIO for
> multichannel audio. I pass a matrix, and the number of columns specifies
> the number of channels. Is there something that can approximate that
> with Python? A linux/windows solution would be ideal.
>
> I realize that SciPy doesn't do audio io, but I thought that its users
> might have some insight given its current scope. Or is there a better
> venue for this question?
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From gerard.vermeulen at grenoble.cnrs.fr Tue May 29 01:39:21 2007 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Tue, 29 May 2007 07:39:21 +0200 Subject: [SciPy-user] SciPy SVN installation problem with Python-2.4 Message-ID: <20070529073921.5d5d23e8@zombie.grenoble.cnrs.fr> Hi,

My builds (part of a self-hacked Gentoo ebuild which was working before) of SciPy SVN with Python-2.4 die with:

running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
customize Gnu95FCompiler
Found executable /usr/bin/gfortran
Found executable /usr/bin/gfortran
Traceback (most recent call last):
  File "setup.py", line 55, in ?
    setup_package()
  File "setup.py", line 47, in setup_package
    configuration=configuration )
  File "/usr/lib64/python2.4/site-packages/numpy/distutils/core.py", line 159, in setup
    return old_setup(**new_attr)
  File "/usr/lib/python2.4/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run
    self.run_command(cmd_name)
  File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
    cmd_obj.run()
  File "/usr/lib64/python2.4/site-packages/numpy/distutils/command/build_clib.py", line 81, in run
    self.fcompiler.customize(self.distribution)
  File "/usr/lib64/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 445, in customize
    get_flags('opt', oflags)
  File "/usr/lib64/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 442, in get_flags
    flagvar.extend(to_list(getattr(self.flag_vars, t)))
AttributeError: 'str' object has no attribute 'extend'

No problems with Python-2.5.

Gerard

From nicolas.chopin at bristol.ac.uk Tue May 29 02:21:29 2007 From: nicolas.chopin at bristol.ac.uk (Nicolas CHOPIN) Date: Tue, 29 May 2007 06:21:29 +0000 (UTC) Subject: [SciPy-user] how to crash scipy in 3s! (memory leak?) References: <465B1650.4050400@bris.ac.uk> <465B7F8B.1080500@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau <david at ar.media.kyoto-u.ac.jp> writes:
>
> Nicolas Chopin wrote:
> > Hi,
> > the program below eats all the memory of my computer
> > in approx. 3 seconds, making it pretty unstable.
> >
> > from scipy import *
> > def f(a):
> >     return rand(1)+a
> >
> > n = 1e8
> > for i in range(n):
> >     x += f(0.1)
> >
> > It looks like a problem with rand()? e.g. rand() forgets
> > to free memory? Or am I doing something "forbidden"?
> > like returning an array, which creates a reference
> > which is never deleted?
> > In this case, what is the correct way to do something like
> > def f(x):
> >     return some_function_of(x, rand(1) )
> >
> > I use Ubuntu Feisty, Scipy 0.5.2, Numpy 1.01, Python 2.5.1
> > (all 3 from Ubuntu repositories).
> If it is crashing, it is a bug. If it is eating all your memory, then it
> is working fine :) First, doing loop as you do in an interpreted
> language (like python) is inherently slow for several reasons: don't do
> it. You could say the whole point of numpy is to give you abstractions
> to avoid looping like your doing.
>
> If I understand correctly, you want to create n random scalars, and sum
> them up, right ? First, instead of calling n times rand(1), you can call
> rand(n), which will returns n random values. Then, instead of
> accumulating in a loop, you should first create the array of all
> intermediate values, and then summing the whole array (eg you create a
> temporary array tmp where tmp[i] contains the i-th call of f(0.1)):
>
> """
> import numpy as N
> from scipy import rand
>
> def f(a):
>     return rand(len(a)) + a
>
> tmp = f(0.1 * N.ones(n))
> x = N.sum(a)
> """
>
> (I didn't check whether this code is running). Basically, in
> scipy/numpy, you should avoid using loop as much as possible, and try to
> "vectorize" as much as possible. Most numpy/scipy functions can be used like
>
> numfunc(sequence)
>
> instead of
>
> for i in sequence:
>     numfunc(i)
>
> Concrete example:
>
> from scipy import rand
>
> def numfunc(n):
>     a = rand(n)
>     return numpy.sum(a)
>
> def pyfunc(n):
>     x = 0
>     for i in xrange(n):
>         x += rand(1)
>     return x
>
> numfunc is already 200 times faster for n = 1e4 on my computer, and the
> difference will grow for bigger size. It requires some thinking if you
> are not used to it. But if you really need looping with around 1e8
> elements, you won't be able to do it efficiently otherwise (that is if
> you stay in python).
>
> Technically, the difference is that in the first case, the loop is done
> in numpy, which is optimized for this kind of things, and the second one
> is done entirely in python, and loops with functions calls are extremely
> slow compared to compiled language (several order of magnitudes most of
> the cases; this is actually not always true, there are some techniques
> to optimize this kind of thing, but this would take use way beyond the
> point of this thread).
>
> Using xrange as Matthieu suggested would solve the memory part: range(n)
> allocates a list of n integers -> 1e8 * 4 bytes minimum if python is
> using 32 bits integers, I don't remember, and whereas xrange(n) creates
> the new iteration value at each iteration.
>
> David
>

Hi everybody,
ok, maybe I just simplified too much my problem.

1.
Yes, I agree that you should avoid loops as much as possible (I come from Matlab, so I'm used to that), but there are situations (my big program) where it is nearly impossible to do. In particular, MCMC simulation, where you need to simulate recursively from a Markov chain.

2. range()? ok, that's an interesting direction, but in my big program, I don't use range(n) with very large values of n. Instead I perform many loops with small n.

Maybe it's better if I just post my big program: As I said in my first post, it just eats all my memory in a few hours, so I have to kill it. It's a bit complicated (it implements the nested sampling algorithm, in case anybody is interested), but the bulk of the calculations is performed by the truncgauss function, which simulates a Gaussian rv truncated to the interval [a,b] (the simulation method is not optimal). This function is called inside a loop which is nested in another loop, which is inside nested(). Last night, I tried to replace the many calls of rand(1) by less frequent calls to rand(k), but the program was also leaking memory. What is weird is that even if I call the main routine nested() several times in a row, the memory does not seem to get freed at the end of the execution of nested(), so I think there is something really weird going on. Thanks for your help.

Nicolas

ps: when I tried to debug my program with pdb, I looked at locals(), and could not find any variable that is taking too much space. What is the best way to spot memory-expensive variables when debugging?

""" Nested sampling: Non-centred Gaussian Example """
from scipy import *

class context:
    """ Parameters for the algorithm"""
    N = 100      # how many simultaneous points
    NMCMC = 10   # how many MCMC steps
    v = 0.1**2   # Likelihood variance
    nd = 10      # how many runs for each dim.
    ndim = 20    # how many different dimensions (5,10,15...)
    filename = "resdc"  # Name of save files
    newfile = False     # starts from scratch/continues from current file

def logplus(x,y):
    """ Computes log(exp(x)+exp(y))"""
    if xlogest-8*log(10)) # Results
    return (j-1+c.N,logest+lognc-log_true_value)

# Main program
c = context()
if c.newfile:
    sto = zeros((10,c.nd,3))
    curind = 0
else:
    exec 'from '+c.filename+' import *' # import sto and curind

for index in range(curind,c.ndim*c.nd): # Loop over dim.
    k,l = index/c.nd, index%c.nd
    d = 5*(1+k)
    print "dim: ",d," iter: ",l
    y = repeat(3.,d)
    sto[k,l,:] = d
    sto[k,l,1:3] = nested(d,y,c)
    print d, sto[k,l,:]
    io.save(c.filename,{'sto':sto,'curind':index+1})
    io.write_array(c.filename+".txt",sto.reshape((-1,3)))
# plot(mean(sto[:,:,1],axis=1)*mean(sto[:,:,2],axis=1)**2)

From cookedm at physics.mcmaster.ca Tue May 29 06:38:36 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 May 2007 06:38:36 -0400 Subject: [SciPy-user] SciPy SVN installation problem with Python-2.4 In-Reply-To: <20070529073921.5d5d23e8@zombie.grenoble.cnrs.fr> References: <20070529073921.5d5d23e8@zombie.grenoble.cnrs.fr> Message-ID: <20070529103836.GA11272@arbutus.physics.mcmaster.ca> On Tue, May 29, 2007 at 07:39:21AM +0200, Gerard Vermeulen wrote:
> Hi,
>
> My builds (part of a self-hacked Gentoo ebuild which was working
> before) of SciPy SVN with Python-2.4 die with:
>
> running build_clib
> customize UnixCCompiler
> customize UnixCCompiler using build_clib
> customize Gnu95FCompiler
> Found executable /usr/bin/gfortran
> Found executable /usr/bin/gfortran
> Traceback (most recent call last):
> File "setup.py", line 55, in ?
> setup_package()
> File "setup.py", line 47, in setup_package
>     configuration=configuration )
[snip]
> File "/usr/lib64/python2.4/site-packages/numpy/distutils/command/build_clib.py", line 81, in run
>     self.fcompiler.customize(self.distribution)
> File "/usr/lib64/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 445, in customize
>     get_flags('opt', oflags)
> File "/usr/lib64/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 442, in get_flags
>     flagvar.extend(to_list(getattr(self.flag_vars, t)))
> AttributeError: 'str' object has no attribute 'extend'
>
> No problems with Python-2.5.

This should be fixed in the numpy svn now. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Tue May 29 12:55:36 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 29 May 2007 12:55:36 -0400 Subject: [SciPy-user] hyp2f1 In-Reply-To: References: Message-ID: <9F4F39A5-FF58-45BD-A71A-88A4FC8D31E7@physics.mcmaster.ca> On Mon, May 28, 2007 at 04:18:51AM -0400, Anne Archibald wrote: > Hi, > > I'm writing some code to do something rather complicated, one part of > which involves evaluating the hypergeometric 2F1 function. I was very > pleased to discover that scipy implements it (as > scipy.special.hyp2f1). I was not so pleased to discover that scipy's > implementation appears to contain a bug, which DM Cooke has kindly > fixed. (http://projects.scipy.org/scipy/scipy/changeset/3043) However, > I am encountering some bad numerical behaviour and I'm wondering if > this function can be blamed. > > More specifically, I worked around the bug, as did DM Cooke, by > averaging points either side of the problem point. Since the function > is fairly smooth, this should not be too problematic.
However I am in > the unfortunate situation of feeding the output of this function into > a calculation that does some very delicate cancellations (calculating > the covariance matrix of high-order detrending filters), and the > matrix that results is not positive definite, though it should be. > > Is it likely that transformation of the function near these values is > causing problems? Are there other options for transforming the > hypergeometric function that would improve accuracy? For what values > of the arguments is scipy's current implementation most accurate? Who knows ;) For those following along at home, the transformation is 15.3.7 in Abramowitz & Stegun, and is

F(a,b;c;z) = G(c)G(b-a)/(G(b)*G(c-a)) * (-z)^(-a) * F(a,1-c+a;1-b+a;1/z)
           + G(c)G(a-b)/(G(a)*G(c-b)) * (-z)^(-b) * F(b,1-c+b;1-a+b;1/z)

where G(x) is the gamma function, and F(a,b;c;z) == 2F1(a,b;c;z). This is valid for |arg(-z)| < pi. It fails when b-a is an integer (which, since F is symmetric in a and b, we can take to be >= 0), as it then requires the gamma function of a negative integer. It's a removable singularity, though. Of course, when doing numerics, removable singularities signal that the region around it is likely not well-approximated either. So you either need more precision or do an expansion around the singularity. One possibility is to look at Stephen Moshier's other implementations of cephes with different precisions: http://www.moshier.net/#Cephes . He's got an extended-precision version of hyp2f1, although you'd have to wrap it yourself (and the transformation in question isn't included; that's just in scipy's version). hyp2f1 isn't in the long double version unfortunately, so you've got some work there. Something like clnum (http://calcrpnpy.sourceforge.net/clnum.html) is useful for arbitrary-precision floating-point. Alternatively, PyDX (http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html) does arbitrary-precision, along with interval arithmetic.
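For readers experimenting along at home: while |z| < 1 the function can also be evaluated straight from its defining Gauss series. A minimal pure-Python sketch (the function name is mine, and this is only an illustration of the definition, not the transformed cephes/scipy algorithm under discussion, and it is useless in the problematic large-|z| region):

```python
import math

def hyp2f1_series(a, b, c, z, tol=1e-15, maxterms=1000):
    """Sum the defining Gauss series for 2F1(a,b;c;z).

    Converges only for |z| < 1.  Illustrative sketch, not the
    transformed algorithm discussed in this thread."""
    term = 1.0
    total = 1.0
    for n in range(maxterms):
        # Ratio of successive terms: (a+n)(b+n) / ((c+n)(n+1)) * z
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

# Sanity check against the closed form 2F1(1,1;2;z) = -log(1-z)/z:
print(hyp2f1_series(1.0, 1.0, 2.0, 0.5))  # ~1.3862943611... (= 2 log 2)
```

Comparing such a direct summation against the transformed code in a region where both converge is a quick way to see where the accuracy is being lost.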
I have one idea I'm working on: including the singularity explicitly. For instance, G(-m+e), where m>=0 is an integer and e is small, can be written using the reflection formula as G(-m+e) = a*p(e) + a/e, where a = (-1)^m/G(1+m-e) and p(e) = pi/sin(pi*e) - 1/e (which is well-defined, being about pi^2/6*e). You can do the same thing for F(a,b;-m+e;z): the series can be split into two, one of which has a 1/e factor. Ideally, you can collect all the 1/e terms separately, and they'll mostly cancel out there instead of blowing up the individual terms and forcing disastrous loss of precision. This formalism (ideally) would also handle the limit of e->0. Writing this in Python is (relatively) easy; converting that to fast C code is something else.... Some sort of function approximation kit would be handy: give it a high-level description, it does the expansion around special cases for you and writes your code. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From faltet at carabos.com Tue May 29 13:15:16 2007 From: faltet at carabos.com (Francesc Altet) Date: Tue, 29 May 2007 19:15:16 +0200 Subject: [SciPy-user] ANN: PyTables 2.0rc2 released Message-ID: <1180458916.2593.2.camel@localhost.localdomain> ============================ Announcing PyTables 2.0rc2 ============================ PyTables is a library for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data, with support for full 64-bit file addressing. PyTables runs on top of the HDF5 library and the NumPy package to achieve maximum throughput and convenient use. This is the second (and probably last) release candidate for PyTables 2.0. In it, together with the traditional bunch of bug fixes, you will find a handful of optimizations for dealing with very large tables.
Also, the "Optimization tips" chapter of the User's Guide has been updated and the manual is almost ready (bar some errors or typos we may have introduced) for the long-awaited 2.0 final release. In particular, the "Indexed searches" section shows pretty definitive plots on the performance of the completely new and innovative indexing engine that will be available in the Pro version (to be released very soon now). You can download a source package of version 2.0rc2 with generated PDF and HTML docs and binaries for Windows from http://www.pytables.org/download/preliminary/ For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.0rc2 If you want to know in more detail what has changed in this version, have a look at ``RELEASE_NOTES.txt``. Find the HTML version of this document at: http://www.pytables.org/moin/ReleaseNotes/Release_2.0rc2 If you are a user of PyTables 1.x, it is probably worth your while to look at the ``MIGRATING_TO_2.x.txt`` file, where you will find directions on how to migrate your existing PyTables 1.x apps to the 2.0 version. You can find an HTML version of this document at http://www.pytables.org/moin/ReleaseNotes/Migrating_To_2.x Keep reading for an overview of the most prominent improvements in the PyTables 2.0 series. New features of PyTables 2.0 ============================ - A complete refactoring of many, many modules in PyTables. With this, the different parts of the code are much better integrated and code redundancy is kept to a minimum. A lot of new optimizations have been included as well, making working with it a smoother experience than ever before. - NumPy is finally at the core! That means that PyTables no longer needs numarray in order to operate, although it continues to be supported (as well as Numeric).
This also means that you should be able to run PyTables in scenarios combining Python 2.5 and 64-bit platforms (these are a source of problems with numarray/Numeric because they don't support this combination as of this writing). - Most of the operations in PyTables have experienced noticeable speed-ups (sometimes up to 2x, like in regular Python table selections). This is a consequence of both using NumPy internally and a considerable effort in terms of refactoring and optimization of the new code. - Combined conditions are finally supported for in-kernel selections. So, now it is possible to perform complex selections like::

    result = [ row['var3'] for row in
               table.where('(var2 < 20) | (var1 == "sas")') ]

or::

    complex_cond = '((%s <= col5) & (col2 <= %s)) ' \
                   '| (sqrt(col1 + 3.1*col2 + col3*col4) > 3)'
    result = [ row['var3'] for row in
               table.where(complex_cond % (inf, sup)) ]

and run them at full C-speed (or perhaps more, due to the cache-tuned computing kernel of Numexpr, which has been integrated into PyTables). - Now, it is possible to get fields of the ``Row`` iterator by specifying their position, or even ranges of positions (extended slicing is supported). For example, you can do::

    result = [ row[4] for row in table   # fetch field #4
               if row[1] < 20 ]
    result = [ row[:] for row in table   # fetch all fields
               if row['var2'] < 20 ]
    result = [ row[1::2] for row in      # fetch odd fields
               table.iterrows(2, 3000, 3) ]

in addition to the classical::

    result = [row['var3'] for row in table.where('var2 < 20')]

- ``Row`` has received a new method called ``fetch_all_fields()`` in order to easily retrieve all the fields of a row in situations like::

    [row.fetch_all_fields() for row in table.where('column1 < 0.3')]

The difference between ``row[:]`` and ``row.fetch_all_fields()`` is that the former will return all the fields as a tuple, while the latter will return the fields in a NumPy void type and should be faster. Choose whichever fits your needs better.
- Now, all data that is read from disk is converted, if necessary, to the native byteorder of the hosting machine (before, this only happened with ``Table`` objects). This should help to accelerate applications that have to do computations with data generated on platforms with a byteorder different from the user's machine. - The modification of values in ``*Array`` objects (through __setitem__) now doesn't make a copy of the value in the case that the shape of the value passed is the same as the slice to be overwritten. This results in considerable memory savings when you are modifying disk objects with big array values. - All leaf constructors (except for ``Array``) have received a new ``chunkshape`` argument that lets the user explicitly select the chunksizes for the underlying HDF5 datasets (only for advanced users). - All leaf constructors have received a new parameter called ``byteorder`` that lets the user specify the byteorder of their data *on disk*. This effectively allows creating datasets with byteorders other than the native one. - Native HDF5 datasets with ``H5T_ARRAY`` datatypes are fully supported for reading now. - The test suites for the different packages are installed now, so you don't need a copy of the PyTables sources to run the tests. Besides, you can run the test suite from the Python console by using:: >>> tables.tests() Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdfgroup.org/HDF5/ About NumPy: http://numpy.scipy.org/ To know more about the company behind the development of PyTables, see: http://www.carabos.com/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package!
And last, but not least, thanks a lot to the HDF5 and NumPy (and numarray!) makers. Without them PyTables simply would not exist. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From peridot.faceted at gmail.com Tue May 29 14:42:21 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 29 May 2007 14:42:21 -0400 Subject: [SciPy-user] hyp2f1 In-Reply-To: <9F4F39A5-FF58-45BD-A71A-88A4FC8D31E7@physics.mcmaster.ca> References: <9F4F39A5-FF58-45BD-A71A-88A4FC8D31E7@physics.mcmaster.ca> Message-ID: On 29/05/07, David M. Cooke wrote: > On Mon, May 28, 2007 at 04:18:51AM -0400, Anne Archibald wrote: > > Is it likely that transformation of the function near these values is > > causing problems? Are there other options for transforming the > > hypergeometric function that would improve accuracy? For what values > > of the arguments is scipy's current implementation most accurate? > > Who knows ;) For those following along at home, the transformation is > 15.3.7 in Abramowitz & Stegun, and is > > F(a,b;c;z) = > G(c)G(b-a)/(G(b)*G(c-a)) * (-z)^(-a)*F(a,1-c+a;1-b+a;1/z) > + G(c)G(a-b)/(G(a)*G(c-b)) * (-z)^(-b) * F(b,1-c+b;1-a+b;1/z) > > where G(x) is the gamma function, and F(a,b;c;z) == 2F1(a,b;c;z). > This is valid for |arg(-z)| < pi. It fails when b-a is an integer > (which, since F is symmetric in a and b, we can take to be >= 0), as > it then requires the gamma function of a negative integer. It's a > removable singularity, though. Of course, when doing numerics, > removable singularities signal that the region around it is likely > not well-approximated either. So you either need more precision or do > an expansion around the singularity.
In fact I'm not evaluating this for arbitrary values of the arguments, so I may be able to do better. I'm computing: hyp2f1(a,1-a,1+a,x) for 0<=a<=4 and -1000 > One possibility is to look at Stephen Moshier's other implementations > of cephes with different precisions: http://www.moshier.net/#Cephes . > He's got an extended-precision version of hyp2f1, although you'd have > to wrap it yourself (and the transformation in question isn't > included; that's just in scipy's version). hyp2f1 isn't in the long > double version unfortunately, so you've got some work there. That's an interesting possibility. > Something like clnum (http://calcrpnpy.sourceforge.net/clnum.html) is > useful for arbitrary-precision floating-point. Alternatively, PyDX > (http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html) > does arbitrary-precision, along with interval arithmetic. Unfortunately I am also in a hurry. In particular, I'm fitting a noise model to some data and for every optimization trial I need to compute a covariance matrix involving tens of thousands (and possibly millions) of these. So I don't think arbitrary precision is the way to go (though if I must I can farm this out to a cluster). > Some sort of function approximation kit would be handy: give it a > high-level description, it does the expansion around special cases > for you and writes your code. This seems like the kind of thing that could be built into SAGE. It should of course generate C code, for portability... Thanks, Anne From chanley at stsci.edu Tue May 29 15:38:45 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 29 May 2007 15:38:45 -0400 Subject: [SciPy-user] PyFITS 1.1 "candidate" RELEASE 4 Message-ID: <465C8145.6050506@stsci.edu> ------------------ | PYFITS Release | ------------------ Space Telescope Science Institute is pleased to announce the fourth and final candidate release of PyFITS 1.1 (really). This release includes support for both the NUMPY and NUMARRAY array packages.
This software can be downloaded at: http://www.stsci.edu/resources/software_hardware/pyfits/Download If you encounter bugs, please send bug reports to "help at stsci.edu". We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 6 months. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. We plan to release the "official" PyFITS 1.1 version in a few weeks. ----------- | Version | ----------- Version 1.1rc4; May 29, 2007 ------------------------------- | Major Changes since v1.1rc3 | ------------------------------- * Fixes a bug in the creation of binary FITS tables introduced in release candidate 3. ------------------------- | Software Requirements | ------------------------- PyFITS Version 1.1rc4 REQUIRES: * Python 2.3 or later * NUMPY 1.0.1 (or later) or NUMARRAY --------------------- | Installing PyFITS | --------------------- PyFITS 1.1rc4 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing % python setup.py install to install it in Python's site-packages directory. Alternatively, the command % python setup.py install --local="/destination/directory/" will install PyFITS in an arbitrary directory, which should be placed on PYTHONPATH. Once numarray or numpy has been installed, PyFITS should be available for use under Python. ----------------- | Download Site | ----------------- http://www.stsci.edu/resources/software_hardware/pyfits/Download ---------- | Usage | ---------- Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. Set NUMERIX to 'numpy' for the NUMPY version of PyFITS.
If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed, the NUMERIX value is used to decide which version to import. If no NUMERIX value is set, then the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import. --------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like the "bleeding" edge of PyFITS can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From muchomuse at yahoo.com Tue May 29 15:51:43 2007 From: muchomuse at yahoo.com (Luke Bradley) Date: Tue, 29 May 2007 12:51:43 -0700 (PDT) Subject: [SciPy-user] MultiChannel Audio (Alan G Isaac) In-Reply-To: Message-ID: <965571.7379.qm@web57812.mail.re3.yahoo.com> Alan, I am using pyaudio right now for multichannel work on a MacBook with a 1.83 GHz processor. (I can tell you that it's not fast enough for real-time processing with multiple tracks at pro sampling rates.) But it's fine for other work. If you want code, email me. I'm happy to share. MuchoMuse at yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From anand at cse.ucsc.edu Tue May 29 16:53:01 2007 From: anand at cse.ucsc.edu (Anand Patil) Date: Tue, 29 May 2007 13:53:01 -0700 Subject: [SciPy-user] f2py'ed expokit?
In-Reply-To: <2bc7a5a50705291350y1dba15dcqdfe14b4f30ae97c2@mail.gmail.com> References: <2bc7a5a50705291350y1dba15dcqdfe14b4f30ae97c2@mail.gmail.com> Message-ID: <2bc7a5a50705291353pbc2aa9ep95e0919a0bf6c1cd@mail.gmail.com> Hi all, I haven't been able to find a publicly available version of expokit wrapped for scipy. Would anyone happen to have or know where I can find one? I'm particularly interested in the large sparse routines. Thanks in advance, Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Wed May 30 01:06:30 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 30 May 2007 01:06:30 -0400 Subject: [SciPy-user] hyp2f1 In-Reply-To: <9F4F39A5-FF58-45BD-A71A-88A4FC8D31E7@physics.mcmaster.ca> References: <9F4F39A5-FF58-45BD-A71A-88A4FC8D31E7@physics.mcmaster.ca> Message-ID: On 29/05/07, David M. Cooke wrote: > On Mon, May 28, 2007 at 04:18:51AM -0400, Anne Archibald wrote: > > I'm writing some code to do something rather complicated, one part of > > which involves evaluating the hypergeometric 2F1 function. I was very > > pleased to discover that scipy implements it (as > > scipy.special.hyp2f1). I was not so pleased to discover that scipy's > > implementation appears to contain a bug, which DM Cooke has kindly > > fixed. (http://projects.scipy.org/scipy/scipy/changeset/3043) However, > > I am encountering some bad numerical behaviour and I'm wondering if > > this function can be blamed. So this is not generally useful, but I have cured my problem through an application of one of the "quadratic transformations" (equation 15.3.32 in Abramowitz & Stegun - which is online!). It gives me accuracies on the order of one part in 10^13, not as good as I was hoping but better than the 1 in 10^8 I was getting from the averaging shortcut. Good enough to get positive definite matrices out of it anyway. 
Thanks, Anne From matthieu.brucher at gmail.com Wed May 30 05:31:40 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 30 May 2007 11:31:40 +0200 Subject: [SciPy-user] MultiChannel Audio In-Reply-To: <465B865A.7090401@astraw.com> References: <465B3FC4.1000601@asu.edu> <465B865A.7090401@astraw.com> Message-ID: Did anyone try PySndObj? It seems to have wrappers around a lot of audio libs. Matthieu 2007/5/29, Andrew Straw : > > Dear Christopher, > > I haven't used these libraries myself, but you may be interested in > fastaudio and pyPortAudio ( > http://www.freenet.org.nz/python/pyPortAudio/ ) which apparently wrap > the same library, PortAudio, that pa_wavplay wraps. > > > Christopher Brown wrote: > > Hi List, > > > > I am brand new to Python and SciPy. I am considering a switch after > > years with Matlab. > > > > SciPy seems to do everything I need. My question is, what is the > > simplest way to implement playback of multichannel audio (>2)? > > > > In Matlab and on Windows, pa_wavplay works great with ASIO for > > multichannel audio. I pass a matrix, and the number of columns specifies > > the number of channels. Is there something that can approximate that > > with Python? A linux/windows solution would be ideal. > > > > I realize that SciPy doesn't do audio io, but I thought that its users > > might have some insight given its current scope. Or is there a better > > venue for this question? > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From JMoreland at ansell.com Wed May 30 10:07:40 2007 From: JMoreland at ansell.com (Jeff Moreland) Date: Wed, 30 May 2007 10:07:40 -0400 Subject: [SciPy-user] Incorrect stats t-values Message-ID: An HTML attachment was scrubbed... URL: From jmalicki at computer.org Wed May 30 10:21:27 2007 From: jmalicki at computer.org (Joseph Malicki) Date: Wed, 30 May 2007 10:21:27 -0400 Subject: [SciPy-user] Incorrect stats t-values In-Reply-To: References: Message-ID: <7860170705300721n668a62e6j20d4d421ed04a3c4@mail.gmail.com> Yes you are. The T-Table (as most seem to) is for a *two-tailed* confidence interval that you'd be using a t distribution for... i.e. the middle 95%. scipy.stats is calculating a one-tailed interval, or the leftmost 95%. If you want a 95% interval, that's alpha of .05.... you want to call t.ppf with 1-alpha/2, which is 0.975 In [4]: scipy.stats.t.ppf(0.975,2) Out[4]: array(4.3026527299112747) So your confidence interval is (-n,n) where n=scipy.stats.t.ppf(1.0-alpha/2,2) On 5/30/07, Jeff Moreland wrote: > > > The stats.py module seems to incorrectly calculate values for the > T-Distribution: > > >> stats.t.ppf(0.95, 2) > outputs > >> 2.91998558036 > > However, the correct value from a T-table is 4.3027. > > Am I using the function correctly? I am attempting to determine the > critical t-values based on a probability and the degrees of freedom. > > Jeff Moreland > > ____________________________________________________________________________________ > This e-mail (including any attachments) is intended only for the exclusive > use of the individual to whom it is addressed. The information contained > hereinafter may be proprietary, confidential, privileged and exempt from > disclosure under applicable law. 
If the reader of this e-mail is not the > intended recipient or agent responsible for delivering the message to the > intended recipient, the reader is hereby put on notice that any use, > dissemination, distribution or copying of this communication is strictly > prohibited. If the reader has received this communication in error, please > immediately notify the sender by telephone or e-mail and delete all copies > of this e-mail and any attachments. Thank you. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From fred.jen at web.de Wed May 30 14:58:43 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 30 May 2007 20:58:43 +0200 Subject: [SciPy-user] Expending Python with SWIG or Boost.Python? Message-ID: <1180551523.6648.5.camel@muli> Hello, I have to wrap a C++ library for use from Python. These modules will then be used for a user interface. So I played around with SWIG and looked at Boost.Python. I had the feeling that SWIG is nice for C but not really for C++, while Boost.Python is a little strange at first, but I like the Boost concept more. Now I wanted to ask who has experience with extending Python with such a library, and what the best decision might be. Kind Regards, Fred From fullung at gmail.com Wed May 30 16:07:15 2007 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 30 May 2007 22:07:15 +0200 Subject: [SciPy-user] Expending Python with SWIG or Boost.Python? In-Reply-To: <1180551523.6648.5.camel@muli> References: <1180551523.6648.5.camel@muli> Message-ID: <20070530200715.GA13158@dogbert.sdsl.sun.ac.za> Hello all On Wed, 30 May 2007, Fred Jendrzejewski wrote: > Hello, > i have to implement a c++ library into python. > These moduls should be used for an user interface then. So i played > around with SWIG and look at Boost.Python.
> A had the feeling, that SWIG is nice for C but not really for C++, while > Boost.Python is a little bit strange at first but i like more the > concept of boost. I have spent quite a bit of time looking at ctypes and Boost.Python and also some time looking at SWIG. There's also Pyrex and various other possibilities, but I don't have any experience with those. If you're wrapping C code in a shared library/DLL, ctypes is almost definitely what you want to do. If you're wrapping C++ for more than one target language, e.g. Python and Java, you probably want SWIG. SWIG's typemap feature is also rather nice, and there's already been some very good work done on SWIG typemaps for NumPy: http://svn.scipy.org/svn/numpy/trunk/numpy/doc/swig/ Then there's Boost.Python. I really like Boost.Python's call policies and I think you can do most of what you can do with SWIG's typemaps using call policies. Recently, I've also been playing with the idea of automatically converting between C++ and Python types using Boost.Python. You can take a look at some code here: http://pyspkrec.googlecode.com/svn/numpycpp/ The idea here is that you use Python types in Python as far as possible, and the wrapper converts them to C++ types when you actually call some function or member function in C++ code (without copying any data as far as possible). You can also do the same thing to convert from C++ types back into Python types when your C++ functions return. If you're interested in this at all, I'll write some more about this code in another post. > Now i wanted to ask who has some experiences with expanding python with > such a lib and what could be the best decision. The best advice I can give is to wrap a bit of your library with SWIG and Boost.Python, and see which one you prefer. What works for you is going to depend to an extent on the API you are wrapping.
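To make the ctypes case concrete: wrapping a plain C function from a shared library takes only a few lines and no generated glue code at all. A minimal sketch calling cos() from the system C math library (library lookup is platform-dependent, and the literal fallback name is a glibc-specific assumption):

```python
import ctypes
import ctypes.util

# Find and load the C math library; the fallback name is glibc-specific.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature so ctypes converts the argument and result
# correctly instead of defaulting everything to int.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

The declared argtypes/restype step is the part people most often forget; without it, double-valued C functions silently return garbage.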
Cheers, Albert From fred.jen at web.de Wed May 30 16:30:08 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Wed, 30 May 2007 22:30:08 +0200 Subject: [SciPy-user] Expending Python with SWIG or Boost.Python? In-Reply-To: <20070530200715.GA13158@dogbert.sdsl.sun.ac.za> References: <1180551523.6648.5.camel@muli> <20070530200715.GA13158@dogbert.sdsl.sun.ac.za> Message-ID: <1180557008.7529.8.camel@muli> On Wednesday, 30.05.2007, at 22:07 +0200, Albert Strasheim wrote: > Hello all > > On Wed, 30 May 2007, Fred Jendrzejewski wrote: > > > Hello, > > i have to implement a c++ library into python. > > These moduls should be used for an user interface then. So i played > > around with SWIG and look at Boost.Python. > > A had the feeling, that SWIG is nice for C but not really for C++, while > > Boost.Python is a little bit strange at first but i like more the > > concept of boost. > > I have spent quite a bit of time looking at ctypes and Boost.Python and > also some time looking at SWIG. There's also Pyrex and various other > possibilities, but I don't have any experience with those. > > If you're wrapping C code in a shared library/DLL, ctypes is almost > definitely what you want to do. > > If you're wrapping C++ for more than one platform, e.g. Python and > Java, you probably want SWIG. SWIG's typemap feature is also rather > nice, and there's already been some very good work done on SWIG > typemaps for NumPy: > > http://svn.scipy.org/svn/numpy/trunk/numpy/doc/swig/ > > Then there's Boost.Python. I really like Boost.Python's call policies > and I think you can do most of what you can do with SWIG's typemaps > using call policies. > > Recently, I've also been playing with the idea of automatically > converting between C++ and Python types using Boost.Python.
You can > take a look at some code here: > > http://pyspkrec.googlecode.com/svn/numpycpp/ > > The idea here is that you use Python types in Python as far as > possible, and the wrapper converts them to C++ types when you actually > call some function or member function in C++ code (without copying any > data as far as possible). You can also do the same thing to convert > from C++ types back into Python types when your C++ functions return. > > If you're interested in this at all, I'll write some more about this > code in another post. > > > Now i wanted to ask who has some experiences with expanding python with > > such a lib and what could be the best decision. > > The best advice I can give is to to wrap a bit of your library with > SWIG and Boost.Python, and see which one you prefer. What works for you > is going to depend to an extent on the API you are wrapping. > > Cheers, > > Albert > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Thank you for the advice; I will read a lot of the docs tomorrow. The aim of the project is to save time by using Python for the interfaces. So the main preference is to find something that makes it possible to wrap the code in a short time, because if there is a lot of overhead every time I change the library to bring the changes over to Python, all this work is senseless. So I would mainly be interested in experiences with the effort needed to maintain the code after the switch. Kind Regards, Fred From fullung at gmail.com Wed May 30 18:15:38 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 31 May 2007 00:15:38 +0200 Subject: [SciPy-user] Expending Python with SWIG or Boost.Python?
In-Reply-To: <1180557008.7529.8.camel@muli> References: <1180551523.6648.5.camel@muli> <20070530200715.GA13158@dogbert.sdsl.sun.ac.za> <1180557008.7529.8.camel@muli> Message-ID: <20070530221538.GA15896@dogbert.sdsl.sun.ac.za> On Wed, 30 May 2007, Fred Jendrzejewski wrote: > On Wednesday, 30.05.2007, at 22:07 +0200, Albert Strasheim wrote: > Thank you for this advice; I will read a lot of the docs > tomorrow. The aim of the project was to save time by using Python for > interfaces, so the main preference is to find something that makes it > possible to wrap the code quickly. If there is a lot of > overhead every time I change the library, implementing this > in Python loses its point. So I would mainly be interested > in experiences with the effort of maintaining the code after the switch. You might also be interested in Py++. It allows you to write Python code that generates the Boost.Python wrapper code. As I understand it, it is a replacement for Pyste. http://www.language-binding.net/ I haven't used Py++ much (for now I prefer to write the wrappers by hand so that I know what's going on), but if you have to deal with code that changes frequently, Py++ might be what you want. The Py++ author is also very active on the cpp-sig (Python C++ SIG) mailing list, so you can get more help there. Cheers, Albert From oliphant.travis at ieee.org Thu May 31 01:30:00 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 May 2007 23:30:00 -0600 Subject: [SciPy-user] SciPy Journal Message-ID: <465E5D58.9030107@ieee.org> Hi everybody, I'm sorry for the cross posting, but I wanted to reach a wide audience and I know not everybody subscribes to all the lists. I've been thinking more about the "SciPy Journal" that we discussed before and I have some thoughts.
1) I'd like to get it going so that we can push out an electronic issue after the SciPy conference (in September) 2) I think its scope should be limited to papers that describe algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we could also accept papers that describe code that depends on NumPy / SciPy that is also easily available. 3) I'd like to make a requirement for inclusion of new code in SciPy that it have an associated journal article describing the algorithms, design approach, etc. I don't see this journal article as being user-interface documentation for the code. I see this as a place to describe why the code is organized as it is and to detail any algorithms that are used. 4) The purpose of the journal as I see it is to a) provide someplace to document what is actually done in SciPy and related software. b) provide a teaching tool of numerical methods with actual "people use-it" code that would be useful to researchers, students, and professionals. c) hopefully clever new algorithms will be developed for SciPy by people using Python that could be show-cased here d) provide a peer-review publication opportunity for people who contribute to open-source software 5) We obviously need associate editors and people willing to review submitted articles as well as people willing to submit articles. I have two articles that can be submitted within the next two months. What do other people have? As an example of the kind of thing a SciPy Journal would be useful for: I have recently overhauled the interpolation.py file for SciPy by incorporating the B-spline stuff that is partly in fitpack. In the process I noticed two things: 1) I have (what seems to me) a different recursive algorithm for calculating derivatives of B-splines than I could find in fitpack. 2) I have developed a different way to determine the K-1 extra degrees of freedom for Kth-order spline fitting than I have seen before.
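For reference, the textbook recursions involved here (the Cox-de Boor relation for the B-spline basis functions and the standard recursion for their derivatives) can be sketched in plain Python. This is the classical form, not necessarily the fitpack variant or the new algorithm mentioned above:

```python
# Classical B-spline recursions: the Cox-de Boor recursion for the
# basis functions B_{i,k}(x) and the standard recursion for dB_{i,k}/dx.
# (Textbook form, for comparison; the fitpack variant may differ.)

def bspline_basis(i, k, x, t):
    """Evaluate the degree-k basis function B_{i,k}(x) on knot vector t."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:  # skip terms over zero-width knot spans
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, x, t))
    return left + right

def bspline_deriv(i, k, x, t):
    """Evaluate dB_{i,k}/dx via the standard derivative recursion."""
    d = 0.0
    if t[i + k] > t[i]:
        d += k / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    if t[i + k + 1] > t[i + 1]:
        d -= k / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, x, t)
    return d

# Sanity check: on a uniform knot vector the degree-k basis functions form
# a partition of unity on [t[k], t[-k-1]], so their derivatives sum to zero.
t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
k, x = 3, 3.5
n = len(t) - k - 1
assert abs(sum(bspline_basis(i, k, x, t) for i in range(n)) - 1.0) < 1e-12
assert abs(sum(bspline_deriv(i, k, x, t) for i in range(n))) < 1e-12
```

The derivative recursion is the relation B'_{i,k} = k * (B_{i,k-1}/(t_{i+k}-t_i) - B_{i+1,k-1}/(t_{i+k+1}-t_{i+1})), which reduces differentiation to evaluating lower-degree basis functions.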
The SciPy Journal would be a great place to document both of these things while describing the spline interpolation design of scipy.interpolate. It is true that I could submit this stuff to other journals, but it seems like that doing that makes the information harder to find in the future and not easier. I'm also dissatisfied with how information exclusionary academic journals seem to be. They are catching up, but they are still not as accessible as other things available on the internet. Given the open nature of most scientific research, it is remarkable that getting access to the information is not as easy as it should be with modern search engines (if your internet domain does not subscribe to the e-journal). Comments and feedback are welcome. -Travis From nwagner at iam.uni-stuttgart.de Thu May 31 05:05:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 31 May 2007 11:05:00 +0200 Subject: [SciPy-user] IndexError: tuple index out of range Message-ID: <465E8FBC.1070707@iam.uni-stuttgart.de> Hi all, I have installed scipy on different machines running SuSE Linux 9.3, 10.0, 10.2. What is the reason for the IndexError: tuple index out of range which I can observe when I run scipy.test(1) under 10.2 ?
====================================================================== ERROR: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 46, in test_explicit out = explicit_odr.run() File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run self.output = Output(apply(odr, args, kwds)) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__ self.stopreason = report_error(self.info) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error 'Iteration limit reached')[info % 10] IndexError: tuple index out of range For version 9.3 I get failures like ====================================================================== FAIL: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/odr/tests/test_odr.py", line 49, in test_explicit np.array([ 1.2646548050648876e+03, -5.4018409956678255e+01, File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.26462971e+03, -5.42545890e+01, -8.64250389e-02]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) Nils From nicolas.pettiaux at ael.be Thu May 31 06:09:09 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Thu, 31 May 2007 12:09:09 +0200 Subject: [SciPy-user] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> References: <465E5D58.9030107@ieee.org> Message-ID: 2007/5/31, Travis Oliphant : > 1) I'd like to get it 
going so that we can push out an electronic issue > after the SciPy conference (in September) Such a journal is a very good idea indeed. This would also support the credibility of python/scipy/numpy for an academic audience that legitimates scientific productions mostly by articles in journals. > 2) I think it's scope should be limited to papers that describe > algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we > could also accept papers that describe code that depends on NumPy / > SciPy that is also easily available. More generally, examples of uses of scipy / numpy ... would be interesting in such a journal, as well as simply the proceedings of the scipy conferences. > It is true that I could submit this stuff to other journals, but it > seems like that doing that makes the information harder to find in the > future and not easier. I'm also dissatisfied with how information > exclusionary academic journals seem to be. They are catching up, but > they are still not as accessible as other things available on the internet. having *one* main place where much information and documentation, with peer reviewed validation, could be found is IMHO very interesting. Regards, Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be Utiliser des formats ouverts et des logiciels libres - http://www.passeralinux.org From lou_boog2000 at yahoo.com Thu May 31 10:04:54 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 31 May 2007 07:04:54 -0700 (PDT) Subject: [SciPy-user] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> Message-ID: <769426.51104.qm@web34415.mail.mud.yahoo.com> I agree with this idea. Very good. Although I also agree with Anne Archibald that the requirement of an article in the journal to submit code is not a good idea. I would be willing to contribute an article on writing C extensions that use numpy arrays. I already have something on this on the SciPy cookbook, but I bet it would reach more people in a journal. I also suggest that articles on using packages like matplotlib/pylab for scientific purposes also be included. -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds.
-Albert Einstein ____________________________________________________________________________________ Got a little couch potato? Check out fun summer activities for kids. http://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz From massimo.sandal at unibo.it Thu May 31 10:47:50 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 31 May 2007 16:47:50 +0200 Subject: [SciPy-user] SciPy Journal In-Reply-To: <465E5D58.9030107@ieee.org> References: <465E5D58.9030107@ieee.org> Message-ID: <465EE016.3070900@unibo.it> Hi, As a humble but happy SciPy user, I have some reservations about this project. I didn't notice the discussion before, so maybe I'm talking about points already discussed; however... > Perhaps we > could also accept papers that describe code that depends on NumPy / > SciPy that is also easily available.
I'd second that for sure. People should see what it is possible to do with scipy, not only see what is in scipy. > 3) I'd like to make a requirement for inclusion of new code in SciPy > that it have an associated journal article describing the algorithms, > design approach, etc. I don't see this journal article as being > user-interface documentation for the code. I see this as a place to > describe why the code is organized as it is and to detail any algorithms > that are used. This looks hellish to me. People often have a hard time contributing to OSS projects, for various reasons, time being one of the scarcest resources. Requiring them *not only* to write documentation (which is good and must be required) but to write an article too seems at risk of putting a lot of people off. Why can't descriptions of algorithms etc. live inside the official documentation/wiki/website? > 4) The purpose of the journal as I see it is to > > a) provide someplace to document what is actually done in SciPy and > related software. Why can't this be done in a wiki/website/changelog? > b) provide a teaching tool of numerical methods with actual "people > use-it" code that would be > useful to researchers, students, and professionals. Why can't this be done in a wiki/website? > c) hopefully clever new algorithms will be developed for SciPy by > people using Python > that could be show-cased here Why having a journal will help developing clever new algorithms? > d) provide a peer-review publication opportunity for people who > contribute to open-source software This is probably the most interesting aspect. Are you willing to do an official, peer-reviewed academic journal that can be, for example, cited and that will be indexed by databases like PubMed, Scirus and ISI Web Of Science, for example?
If yes, I second that (albeit I find a Scipy Journal an extremely small niche -perhaps a Python Scientific Programming Journal, extended to non-scipy projects, would be better). If not, I don't understand what's the point. > The SciPy Journal would be a great place to document both of these > things while describing the spline interpolation design of scipy.interpolate Why a well made wiki page cannot be a great place? > It is true that I could submit this stuff to other journals, but it > seems like that doing that makes the information harder to find in the > future and not easier. I'm also dissatisfied with how information > exclusionary academic journals seem to be. They are catching up, but > they are still not as accessible as other things available on the internet. > > Given the open nature of most scientific research, it is remarkable that > getting access to the information is not as easy as it should be with > modern search engines (if your internet domain does not subscribe to the > e-journal). That's exactly why I dislike the idea of a journal. Scientific research is beginning to move very slowly but steadily away from the classic "academic journal" idea. arXiv, PLoS and company are just the beginning. In the not-so-distant future probably most scientific debate and publishing will go purely web, on collaborative websites, with journals just being places where already done discussions etc. will be somehow officialized. So that's why I do not second the idea of a journal. It just makes sense no more. A well done, collaborative (wiki or wiki-like) web site would be much friendlier and better. > Comments and feedback is welcome. Here is mine, even if not so positive :) m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From jh at physics.ucf.edu Thu May 31 10:49:46 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 31 May 2007 10:49:46 -0400 Subject: [SciPy-user] SciPy Journal In-Reply-To: (numpy-discussion-request@scipy.org) References: Message-ID: <200705311449.l4VEnk4d008389@glup.physics.ucf.edu> Thoughts: 1. Let's at least consolidate discussion on scipy-users, so we don't have 3 threads going. It's about scipy and scikits as well as numpy, so it belongs on one of the scipy lists, and it's more than just for developers. The users are the customers of the articles, and potential editors. In the future, it would be good for a cross-poster to identify a single list to contain followup discussion. 2. A journal is a significant time and people effort. While it's a laudable goal, we have a *serious* deficiency in the areas of release packaging (i.e., installs for various OS releases that just work) and user documentation. Do we really want to divert our efforts to this journal while the majority of people interested in using the software are still sitting on the sidelines waiting for us to get our house in order on these more basic things? As you pointed out, you can submit these articles elsewhere for now. 3. If one-stop-shopping is a concern, we can easily put up a web page listing packages and containing pointers to articles that describe the algorithms, wherever they are presented. We can use arxiv.org to post articles for free, reviewed or not. 4. I agree that we want our articles freely available to all without any subscription. There are some journals that fit the bill, and we could put links to those on the index page of item 3, for those desiring the legitimacy of peer review. 5. I agree about the problem of raising the bar. On the other hand, I think raising the bar on some things is desirable. 
The index page of item 3 could be a place for the community to call for a reviewed article on a major package. 6. Creating our own journal has the benefit of making a place for community to work, but we already have that with these lists. On the other hand, it sequesters our good work from the wider community, which would otherwise see it and perhaps get interested in it if it were published in existing journals. Bottom line: I think we should start with an index page today, and suggest that people send their articles to places that will allow them to be re-posted freely on the net (or negotiate that permission in advance, which I have done for all my research articles). We can then consolidate them on the index page and even republish them in the future, when we have the critical mass for both up-to-date documentation and packaging and a journal. --jh-- From matthieu.brucher at gmail.com Thu May 31 10:58:40 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 31 May 2007 16:58:40 +0200 Subject: [SciPy-user] SciPy Journal In-Reply-To: <465EE016.3070900@unibo.it> References: <465E5D58.9030107@ieee.org> <465EE016.3070900@unibo.it> Message-ID: > > > c) hopefully clever new algorithms will be developed for SciPy by > > people using Python > > that could be show-cased here > > Why having a journal will help developing clever new algorithms? The purpose of research is developping new algorithms, having a journal which, hopefully, will be a "free" reference in a field will involve more people looking at Scipy, using Scipy and actually developping new algorithms. This is probably the most interesting aspect. Are you willing to do an > official, peer-reviewed academic journal that can be, for example, cited > and that will be indexed by databases like PubMed, Scirus and ISI Web Of > Science, for example? 
> > If yes, I second that (albeit I find a Scipy Journal an extremely small > niche -perhaps a Python Scientific Programming Journal, extended to > non-scipy projects, would be better). If not, I don't understand what's > the point. Same here, as stated before. That's exactly why I dislike the idea of a journal. Scientific research > is beginning to move very slowly but steadily away from the classic > "academic journal" idea. arXiv, PLoS and company are just the beginning. > In the not-so-distant future probably most scientific debate and > publishing will go purely web, on collaborative websites, with journals > just being places where already done discussions etc. will be somehow > officialized. So that's why I do not second the idea of a journal. It > just makes sense no more. A well done, collaborative (wiki or wiki-like) > web site would be much friendlier and better. Here in France, publications in journals indicate how much money a lab will have. It is flawed approach, but we have to live with it. Moreover, students and researchers have to make publications in journals, the former so as to have a job in the near future, the latter so as to catch good elements for their lab. Having a journal will help scientists in discovering Python and scipy, and having such a journal will drag good scientists too, it's a virtuous circle, if it can be started. And having a good journal will be a good opportunity for scientists to make something different than a documentation, something they want to spend time for. Hindawi journals use a similar principle - although it is not free for publication, it is free for access - Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From massimo.sandal at unibo.it Thu May 31 11:23:55 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 31 May 2007 17:23:55 +0200 Subject: [SciPy-user] SciPy Journal In-Reply-To: References: <465E5D58.9030107@ieee.org> <465EE016.3070900@unibo.it> Message-ID: <465EE88B.2070409@unibo.it> Matthieu Brucher ha scritto: > > c) hopefully clever new algorithms will be developed for SciPy by > > people using Python > > that could be show-cased here > > Why having a journal will help developing clever new algorithms? > The purpose of research is developping new algorithms, having a journal > which, hopefully, will be a "free" reference in a field will involve > more people looking at Scipy, using Scipy and actually developping new > algorithms. But who would publish good purely algorithmic research on "Scipy J."? If you have a new, clever algorithm, you publish that on a computational science journal, I think, and you can show a scipy implementation in the article itself. Much more mainstream, and same visibility for scipy. Where am I wrong? > Here in France, publications in journals indicate how much money a lab > will have. It is flawed approach, but we have to live with it. Moreover, > students and researchers have to make publications in journals, the > former so as to have a job in the near future, the latter so as to catch > good elements for their lab. Same here. The problem is, will someone publish it? Will it gain academic respectability? An academic journal revolving around a single software library seems very odd to me -is there a, let's say, "GLIBC Journal" somewhere? Maybe that's just me being ignorant. Also: what can be published on Scipy J. that couldn't be fit well in a)bioinformatics/astro-informatics/$DISCIPLINE-informatics journals b)computer science journals? > Having a journal will help scientists in discovering Python and scipy, > and having such a journal will drag good scientists too, it's a virtuous > circle, if it can be started. 
And having a good journal will be a good > opportunity for scientists to make something different than a > documentation, something they want to spend time for. Yes, the problem is: can it ever be a decent journal? -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From erendisaldarion at gmail.com Thu May 31 13:06:06 2007 From: erendisaldarion at gmail.com (Aldarion) Date: Fri, 01 Jun 2007 01:06:06 +0800 Subject: [SciPy-user] [Numpy-discussion] SciPy Journal In-Reply-To: <769426.51104.qm@web34415.mail.mud.yahoo.com> References: <769426.51104.qm@web34415.mail.mud.yahoo.com> Message-ID: <465F007E.8040505@gmail.com> Lou Pecora wrote: > I agree with this idea. Very good. Although I also > agree with Anne Archibald that the requirement of an > article in the journal to submit code is not a good > idea. I would be willing to contribute an article on > writing C extensions that use numpy arrays. I > already have something on this on the SciPy cookbook, > but I bet it would reach more people in a journal. > > I also suggest that articles on using packages like > matplotlib/pylab for scientific purposes also be > included. and Ipython(Ipython1) :). From robert.kern at gmail.com Thu May 31 13:08:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 31 May 2007 12:08:05 -0500 Subject: [SciPy-user] SciPy Journal In-Reply-To: <465EE88B.2070409@unibo.it> References: <465E5D58.9030107@ieee.org> <465EE016.3070900@unibo.it> <465EE88B.2070409@unibo.it> Message-ID: <465F00F5.7080209@gmail.com> massimo sandal wrote: > Same here. The problem is, will someone publish it? Will it gain > academic respectability? An academic journal revolving around a single > software library seems very odd to me -is there a, let's say, "GLIBC > Journal" somewhere? Maybe that's just me being ignorant. Look at the "Journal of Statistical Software". Its name might as well be the "Journal of R". http://www.jstatsoft.org/ Note that we are intending the Scipy Journal to be freely published online, too. There is no question of finding someone to publish it for us. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anand at cse.ucsc.edu Thu May 31 15:30:27 2007 From: anand at cse.ucsc.edu (Anand Patil) Date: Thu, 31 May 2007 12:30:27 -0700 Subject: [SciPy-user] Scipy Journal Message-ID: <2bc7a5a50705311230y2d851a26maf0fe2ad5839224e@mail.gmail.com> I agree broadly with Massimo.
The requirement of writing a journal article in addition to documentation seems onerous, and competing with existing computational journals in scope seems like a difficult way to go. In addition, when a document actually helps me solve a problem, 75% of the time it's either a tech report someone just decided to write and put on the web or a book. Journal articles' constraints make them fairly ineffective as teaching tools for my brain. What journals are particularly good for is getting the word out about new ideas. I'm sure many people on this list have wished at various times that they could just download how magical Python can be into colleagues' brains, and an organized collection of well-written horizon-expanding articles on computation with Python could be just the thing. So, I think a successful journal would: 1) Be limited in scope to Python-specific ideas. General algorithmic stuff should go elsewhere, unless Python makes the algorithm possible. 2) Have articles similar in intent to Science/Nature papers or conference posters. They shouldn't have to get readers to the point where they can start coding, and they definitely shouldn't be taken as replacements or even complements for documentation. They should just try to concisely communicate what's cool about the idea. Perhaps an unreviewed newsletter would be a good place to start? Massimo Sandal wrote: > > Matthieu Brucher ha scritto: > > > c) hopefully clever new algorithms will be developed for > SciPy by > > > people using Python > > > that could be show-cased here > > > > Why having a journal will help developing clever new algorithms? > > > The purpose of research is developping new algorithms, having a journal > > which, hopefully, will be a "free" reference in a field will involve > > more people looking at Scipy, using Scipy and actually developping new > > algorithms. > > But who would publish good purely algorithmic research on "Scipy J."?
If > you have a new, clever algorithm, you publish that on a computational > science journal, I think, and you can show a scipy implementation in the > article itself. Much more mainstream, and same visibility for scipy. > Where am I wrong? > > > > Here in France, publications in journals indicate how much money a lab > > will have. It is flawed approach, but we have to live with it. Moreover, > > students and researchers have to make publications in journals, the > > former so as to have a job in the near future, the latter so as to catch > > good elements for their lab. > > Same here. The problem is, will someone publish it? Will it gain > academic respectability? An academic journal revolving around a single > software library seems very odd to me -is there a, let's say, "GLIBC > Journal" somewhere? Maybe that's just me being ignorant. > > Also: what can be published on Scipy J. that couldn't be fit well in > a)bioinformatics/astro-informatics/$DISCIPLINE-informatics journals > b)computer science journals? > > > Having a journal will help scientists in discovering Python and scipy, > > and having such a journal will drag good scientists too, it's a virtuous > > > circle, if it can be started. And having a good journal will be a good > > opportunity for scientists to make something different than a > > documentation, something they want to spend time for. > > Yes, the problem is: can it ever be a decent journal? > -------------- next part -------------- An HTML attachment was scrubbed... URL: