From robert.vergnes at yahoo.fr Sat Sep 1 01:31:02 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Sat, 1 Sep 2007 07:31:02 +0200 (CEST)
Subject: [SciPy-user] RE : Re: RE : Re: RE : Re: RE : Re: How to free unused memory by Python
In-Reply-To: <20070831115125.GA15718@clipper.ens.fr>
Message-ID: <827225.92921.qm@web27408.mail.ukl.yahoo.com>

All my examples were run under Python 2.5.1, so it seems the problem is NOT solved there... unfortunately. Any workaround?

Gael Varoquaux wrote:

On Fri, Aug 31, 2007 at 01:49:10PM +0200, Robert VERGNES wrote:
> Used memory in Linux or Windows is displayed by the Windows Task
> Manager (Ctrl+Alt+Del) or by the system memory manager (or Task
> Manager), depending on your Linux version, I think. So you can see how
> much of your physical memory is used while running programs.
> So apparently gc cannot return memory to the OS... so it seems there is
> no solution for the moment - apart from moving the memory-heavy task out
> of process and killing it when it has done its work, so that the memory
> is given back to the OS.
> Any other ideas?

Use python 2.5, where this problem is solved?

Gaël

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.vergnes at yahoo.fr Sat Sep 1 01:43:01 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Sat, 1 Sep 2007 07:43:01 +0200 (CEST)
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: 
Message-ID: <712475.31193.qm@web27412.mail.ukl.yahoo.com>

Anne,

Yes, the issue involves many numpy arrays (not especially small: 2 to 7 million items per array), and I usually get a crash (MemoryError) while creating a new array.
To check this out, I made a small test to understand how memory works in Python, and saw that even with 'mylist=arange()' the memory is not freed back to the OS when 'mylist' is deleted... which triggered my original question, 'How to free unused memory...'. But from what I read from you and the other folks, the only way out of this issue - i.e. to avoid the crash, probably due to malloc() - is to free memory beforehand; to do that I need to move my recurring calculation, which is temporarily memory-heavy, out into its own process and kill that process after the work is done so that the memory is released...

I did notice that if I use a huge list - and only a standard Python list - then the OS pages the memory normally; but when lists and numpy arrays are mixed, I do get a crash when I run near the limit of my physical memory - no more paging is possible... and a MemoryError happens. Probably due to the way malloc() requests the memory for the numpy array...

Thanks for the help.

Robert

Anne Archibald wrote:

On 31/08/2007, Robert VERGNES wrote:
> Used memory in Linux or Windows is displayed by the Windows Task Manager
> (Ctrl+Alt+Del) or by the system memory manager (or Task Manager),
> depending on your Linux version, I think. So you can see how much of your
> physical memory is used while running programs.
>
> So apparently gc cannot return memory to the OS... so it seems there is
> no solution for the moment - apart from moving the memory-heavy task out
> of process and killing it when it has done its work, so that the memory
> is given back to the OS.
>
> Any other ideas?

Make sure you have lots of swap space. If python has freed some memory, python will reuse that before requesting more from the OS, so there's no problem of memory use growing without bound. If you don't reuse the memory, it will just sit there unused. If you run into memory pressure from other applications, the OS (well, most OSes) will page it out to disk until you actually use it again.
So a python process that has a gigabyte allocated but is only using a hundred megabytes of that will, if something else wants to use some of the physical RAM in your machine, simply occupy nine hundred megabytes in your swap file. Who cares?

Also worth knowing is that even on old versions of python, on some OSes (probably all), numpy arrays suffer from this problem to a much lesser degree. When you allocate a numpy array, there's a relatively small python object describing it, and a chunk of memory to contain the values. This chunk of memory is allocated with malloc(). The malloc() implementation on Linux (and probably on other systems) provides big chunks by requesting them directly from the operating system, so that they can be returned to the OS when done. Even if you're using many small arrays, you should be aware that the memory needed by numpy array data is allocated by malloc() and not by python's allocators, so whether it is freed back to the system is a separate question from whether the memory needed by python objects goes back to the system.

Anne

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From gael.varoquaux at normalesup.org Sat Sep 1 04:04:39 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 1 Sep 2007 10:04:39 +0200
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: <712475.31193.qm@web27412.mail.ukl.yahoo.com>
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com>
Message-ID: <20070901080438.GA5683@clipper.ens.fr>

Can you send us a simplified version of your code, reflecting the way you use both numpy arrays and lists, that triggers the crash? That way we can have a look at the problem.
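The out-of-process workaround discussed in this thread can be sketched as follows. This is a minimal sketch, not code from the thread: the function name and sizes are invented, and it uses the stdlib multiprocessing module, which appeared in Python 2.6 (on the Python 2.5 of this thread, the same API existed as the third-party 'processing' package).

```python
# Run the memory-hungry step in a child process so that ALL of its
# memory is returned to the OS when the child exits; only the small
# result crosses back to the parent.
import multiprocessing


def heavy_step(n):
    import numpy as np
    a = np.arange(n, dtype=float)  # big temporary allocation, lives in the child
    return float(a.sum())          # small result returned to the parent


if __name__ == "__main__":
    pool = multiprocessing.Pool(1)
    result = pool.apply(heavy_step, (10 ** 6,))
    pool.close()
    pool.join()
    print(result)
```

When the pool worker is terminated, its entire address space (including the numpy buffer) goes back to the operating system, which is exactly the "kill it when it has done its work" strategy described above.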
Gaël

On Sat, Sep 01, 2007 at 07:43:01AM +0200, Robert VERGNES wrote:
> Yes, the issue involves many numpy arrays (not especially small: 2 to
> 7 million items per array), and I usually get a crash while creating a
> new array (MemoryError). To check this out, I made a small test to
> understand how memory works in Python and saw that even with a
> 'mylist=arange()' the memory is not freed back to the OS when 'mylist'
> is deleted... which triggered my original question, 'How to free unused
> memory...'. But from what I read from you and the other folks, the only
> way out of this issue - i.e. to avoid the crash, probably due to
> malloc() - is to free memory beforehand; to do that I need to move my
> recurring calculation, which is temporarily memory-heavy, out into its
> own process and kill that process to release the memory after work...
> I did notice that if I use a huge list - and only a standard Python
> list - then the OS pages the memory normally, but when lists and numpy
> arrays are mixed I do get a crash when I run near the limit of my
> physical memory - no more paging possible... and a MemoryError crash
> happens. Probably due to the way malloc() requests the memory for the
> numpy array...

From lorenzo.isella at gmail.com Sat Sep 1 08:58:10 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Sat, 01 Sep 2007 14:58:10 +0200
Subject: [SciPy-user] Again on Double Precision
Message-ID: <46D961E2.3060604@gmail.com>

Dear All,
I know this is related to a thread which has been going on for a while, but I am about to publish some results of a simulation making use of integrate.odeint and I would like to be sure I have not misunderstood anything fundamental.
I passed all my arrays and functions to integrate.odeint without ever bothering much about the details, i.e. I never explicitly specified the "type" of the arrays I was using.
I assumed that integrate.odeint was a thin layer over some Fortran routine and that it would automatically convert all the relevant quantities to Fortran double precision.
Is this really what is happening? I actually have no reason to think that my results are somehow inaccurate, but you never know. I was getting worried after looking at:
http://www.scipy.org/Cookbook/BuildingArrays

Apologies if this is too basic for the forum, but in Fortran I always used double precision as a standard, and in R all numbers/arrays are stored as double-precision objects and you do not have to worry (practically the only languages I use apart from Python). At the end of the day, double precision is a specific case of floating-point numbers, and I wonder whether, when working with the default floating-point arrays in SciPy, I attain the same accuracy I would get with double-precision Fortran arrays.

Many thanks for any enlightening comment.

Lorenzo

From robert.kern at gmail.com Sat Sep 1 15:16:57 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 01 Sep 2007 14:16:57 -0500
Subject: [SciPy-user] Again on Double Precision
In-Reply-To: <46D961E2.3060604@gmail.com>
References: <46D961E2.3060604@gmail.com>
Message-ID: <46D9BAA9.2080403@gmail.com>

Lorenzo Isella wrote:
> Dear All,
> I know this is related to a thread which has been going on for a while,
> but I am about to publish some results of a simulation making use of
> integrate.odeint and I would like to be sure I have not misunderstood
> anything fundamental.
> I passed all my arrays and functions to integrate.odeint without ever
> bothering much about the details, i.e. I never explicitly specified the
> "type" of the arrays I was using.
> I assumed that integrate.odeint was a thin layer over some Fortran
> routine and that it would automatically convert all the relevant
> quantities to Fortran double precision.
> Is this really what is happening?
> I actually have no reason to think
> that my results are somehow inaccurate, but you never know.
> I was getting worried after looking at:
> http://www.scipy.org/Cookbook/BuildingArrays
>
> Apologies if this is too basic for the forum, but in Fortran I always
> used double precision as a standard, and in R all numbers/arrays
> are stored as double-precision objects and you do not have to worry
> (practically the only languages I use apart from Python). At the end of
> the day, double precision is a specific case of floating-point numbers,
> and I wonder whether, when working with the default floating-point
> arrays in SciPy, I attain the same accuracy I would get with
> double-precision Fortran arrays.

The default floating point type in Python, numpy, and scipy is double-precision. Unless you have explicitly constructed arrays using float32, your calculations will be done in double precision.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco

From peridot.faceted at gmail.com Sat Sep 1 17:25:56 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sat, 1 Sep 2007 17:25:56 -0400
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: <712475.31193.qm@web27412.mail.ukl.yahoo.com>
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com>
Message-ID: 

On 01/09/07, Robert VERGNES wrote:
> Yes, the issue involves many numpy arrays (not especially small: 2 to
> 7 million items per array), and I usually get a crash while creating a
> new array (MemoryError). To check this out, I made a small test to
> understand how memory works in Python and saw that even with a
> 'mylist=arange()' the memory is not freed back to the OS when 'mylist'
> is deleted... which triggered my original question, 'How to free unused
> memory...'.
> But from what I read from you and the other folks, the only way out of
> this issue - i.e. to avoid the crash, probably due to malloc() - is to
> free memory beforehand; to do that I need to move my recurring
> calculation, which is temporarily memory-heavy, out into its own
> process and kill that process to release the memory after work...
> I did notice that if I use a huge list - and only a standard Python
> list - then the OS pages the memory normally, but when lists and numpy
> arrays are mixed I do get a crash when I run near the limit of my
> physical memory - no more paging possible... and a MemoryError crash
> happens. Probably due to the way malloc() requests the memory for the
> numpy array...

There are two problems here:
* python or numpy not shrinking its virtual memory use
* python crashing during allocation
Why do you think they are related?

The first is a known limitation of most dynamic memory allocation schemes. Modern malloc()s are generally pretty good about avoiding memory fragmentation, but the ways in which you can release memory back to the operating system are often extremely limited. This problem, that the virtual memory size of a process may remain large even when most of that memory is unused, arises in raw C programs as well. That said, I know that for arrays that large, when they are freed, the glibc malloc() used under Linux will definitely release that memory back to the OS. Are you sure the arrays are actually being freed? Remember that numpy often creates views of arrays that avoid copying the data but keep the original array alive:

A = ones(1000000)
B = A[2:4]
del A

Here the memory for A cannot be deallocated because B still points to it, even though B only needs a few bytes of the many megabytes in A. To cure this there are various choices, for example:

B = copy(B)

This duplicates the memory and forgets the reference to A (as it is no longer needed).

As for the crashing, what sort of crash is it?
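The view behaviour described above can be verified directly: numpy records the view relationship in the array's .base attribute, so you can see whether a small slice is keeping a large buffer alive. A minimal runnable sketch (not code from the thread):

```python
import numpy as np

a = np.ones(1000000)
b = a[2:4]        # a tiny view sharing a's million-element buffer
del a             # the big buffer is NOT freed: the view still needs it
assert b.base is not None   # b holds a reference to the original array

b = b.copy()      # copy the two values into a fresh, tiny buffer
assert b.base is None       # nothing keeps the big buffer alive any more
```

Only after the copy() can the original million-element buffer actually be returned to the allocator.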
What exception gets raised (or is it a segfault)?

If it is memory exhaustion, all this business about "not freeing memory back to the OS" is a red herring. No matter how old your version of python and how little memory it ever releases back to the OS, new objects will be allocated from the memory the python process already has. If your process keeps growing indefinitely, that's not malloc, that's your code keeping references to more and more data so that it cannot be free()d. Perhaps look into tools for debugging memory leaks in python?

Anne

From David.L.Goldsmith at noaa.gov Sat Sep 1 22:35:50 2007
From: David.L.Goldsmith at noaa.gov (David Goldsmith)
Date: Sat, 01 Sep 2007 19:35:50 -0700
Subject: [SciPy-user] Again on Double Precision
In-Reply-To: <46D9BAA9.2080403@gmail.com>
References: <46D961E2.3060604@gmail.com> <46D9BAA9.2080403@gmail.com>
Message-ID: <46DA2186.5000703@noaa.gov>

Robert Kern wrote:
> Lorenzo Isella wrote:
>
>> Dear All,
>> I know this is related to a thread which has been going on for a while,
>> but I am about to publish some results of a simulation making use of
>> integrate.odeint and I would like to be sure I have not misunderstood
>> anything fundamental.
>> I passed all my arrays and functions to integrate.odeint without ever
>> bothering much about the details, i.e. I never explicitly specified the
>> "type" of the arrays I was using.
>> I assumed that integrate.odeint was a thin layer over some Fortran
>> routine and that it would automatically convert all the relevant
>> quantities to Fortran double precision.
>> Is this really what is happening? I actually have no reason to think
>> that my results are somehow inaccurate, but you never know.
>> I was getting worried after looking at:
>> http://www.scipy.org/Cookbook/BuildingArrays
>>
>> Apologies if this is too basic for the forum, but in Fortran I always
>> used double precision as a standard, and in R all numbers/arrays
>> are stored as double-precision objects and you do not have to worry
>> (practically the only languages I use apart from Python). At the end of
>> the day, double precision is a specific case of floating-point numbers,
>> and I wonder whether, when working with the default floating-point
>> arrays in SciPy, I attain the same accuracy I would get with
>> double-precision Fortran arrays.
>>
>
> The default floating point type in Python, numpy, and scipy is
> double-precision. Unless you have explicitly constructed arrays using
> float32, your calculations will be done in double precision.
>
But if _all_ the array elements are integers (numerically speaking), then he has to specify that the array elements are float in some concrete way (be it with an otherwise superfluous decimal point, a dtype=double, or whatever), correct?

DG

From stefano.borini at ferrara.linux.it Sun Sep 2 11:57:31 2007
From: stefano.borini at ferrara.linux.it (stefano borini)
Date: Sun, 02 Sep 2007 17:57:31 +0200
Subject: [SciPy-user] dstevx not implemented
Message-ID: <46DADD6B.40006@ferrara.linux.it>

Good morning to all,

I just started tinkering with SciPy, and it looks very interesting. However, I have a question, and the searches I performed on the mailing list and the web returned nothing useful.

I want to find some of the eigenvalues and eigenvectors of a quite large tridiagonal matrix (around 20000x20000, maybe more). I used to do this task in Fortran using dstevx, but I noted that this function is not exported to Python, according to a comment in flapack_esv.pyf.src.

I would like to ask:
- is this due to technical or human-resources issues?
- if any of you have had the same need, how did you solve the problem so as to have efficient code?
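While dstevx itself is not wrapped, scipy.linalg.eig_banded sits on LAPACK's symmetric banded solvers and can compute selected eigenpairs of a tridiagonal matrix stored as a 2-row band. A sketch, under the assumption that a banded solver is an acceptable stand-in for dstevx (the matrix here is the classic -1/2/-1 discrete Laplacian, chosen only because its eigenvalues are known):

```python
import numpy as np
from scipy.linalg import eig_banded

n = 100
# Upper banded storage, shape (2, n): row 0 holds the superdiagonal
# (its first entry is unused), row 1 holds the main diagonal.
bands = np.zeros((2, n))
bands[0, 1:] = -1.0   # superdiagonal
bands[1, :] = 2.0     # diagonal

# Ask only for the five smallest eigenpairs (select by index range),
# roughly what dstevx's RANGE='I' mode offers.
w, v = eig_banded(bands, select='i', select_range=(0, 4))
```

Since the band storage is only 2 x n, this stays memory-friendly even for a 20000 x 20000 matrix; whether it matches dstevx's speed for a given workload is worth benchmarking.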
Thanks in advance.

-- 
Kind regards,
Stefano Borini

From robert.kern at gmail.com Sun Sep 2 15:52:10 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 02 Sep 2007 14:52:10 -0500
Subject: [SciPy-user] dstevx not implemented
In-Reply-To: <46DADD6B.40006@ferrara.linux.it>
References: <46DADD6B.40006@ferrara.linux.it>
Message-ID: <46DB146A.3040701@gmail.com>

stefano borini wrote:
> Good morning to all,
>
> I just started tinkering with SciPy, and it looks very interesting.
> However, I have a question, and the searches I performed on the mailing
> list and the web returned nothing useful.
>
> I want to find some of the eigenvalues and eigenvectors of a quite large
> tridiagonal matrix (around 20000x20000, maybe more). I used to do this
> task in Fortran using dstevx, but I noted that this function is not
> exported to Python, according to a comment in flapack_esv.pyf.src.
>
> I would like to ask:
> - is this due to technical or human-resources issues?

Human resources.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
 -- Umberto Eco

From massimo.sandal at unibo.it Mon Sep 3 05:22:36 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Mon, 03 Sep 2007 11:22:36 +0200
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: 
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com>
Message-ID: <46DBD25C.5060107@unibo.it>

Anne Archibald wrote:
> If it is memory exhaustion, all this business about "not freeing
> memory back to the OS" is a red herring. No matter how old your
> version of python and how little memory it ever releases back to the
> OS, new objects will be allocated from the memory the python process
> already has. If your process keeps growing indefinitely, that's not
> malloc, that's your code keeping references to more and more data so
> that it cannot be free()d.
> Perhaps look into tools for debugging
> memory leaks in python?

I'd love to find one. I have memory leaks here and there in my code (no doubt due to dangling references), but it is often damn hard to debug them. I asked on comp.lang.python but found no useful answers. If you know of a memory debugging tool for python, let us know!

m.

-- 
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387

-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:

From william.ratcliff at gmail.com Mon Sep 3 09:35:31 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Mon, 3 Sep 2007 09:35:31 -0400
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: <46DBD25C.5060107@unibo.it>
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com> <46DBD25C.5060107@unibo.it>
Message-ID: <827183970709030635o5eb9d351r6f540d03be9de142@mail.gmail.com>

I never used it, but have you tried using valgrind?

Cheers,
William

On 9/3/07, massimo sandal wrote:
>
> Anne Archibald wrote:
> > If it is memory exhaustion, all this business about "not freeing
> > memory back to the OS" is a red herring. No matter how old your
> > version of python and how little memory it ever releases back to the
> > OS, new objects will be allocated from the memory the python process
> > already has. If your process keeps growing indefinitely, that's not
> > malloc, that's your code keeping references to more and more data so
> > that it cannot be free()d. Perhaps look into tools for debugging
> > memory leaks in python?
>
> I'd love to find one. I have memory leaks here and there in my code (no
> doubt due to dangling references), but it is often damn hard to debug
> them. I asked on comp.lang.python but found no useful answers.
> If you know of a memory debugging tool for python, let us know!
>
> m.
>
> -- 
> Massimo Sandal
> University of Bologna
> Department of Biochemistry "G.Moruzzi"
>
> snail mail:
> Via Irnerio 48, 40126 Bologna, Italy
>
> email:
> massimo.sandal at unibo.it
>
> tel: +39-051-2094388
> fax: +39-051-2094387
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From massimo.sandal at unibo.it Mon Sep 3 10:25:43 2007
From: massimo.sandal at unibo.it (massimo sandal)
Date: Mon, 03 Sep 2007 16:25:43 +0200
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: <827183970709030635o5eb9d351r6f540d03be9de142@mail.gmail.com>
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com> <46DBD25C.5060107@unibo.it> <827183970709030635o5eb9d351r6f540d03be9de142@mail.gmail.com>
Message-ID: <46DC1967.4070900@unibo.it>

william ratcliff wrote:
> I never used it, but have you tried using valgrind?

No. Does it work on Python code? I thought it only helped to debug executables. But I guess the output of valgrind running a Python program might give some hints... what do people think?

m.

-- 
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387

-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:

From ellisonbg.net at gmail.com Mon Sep 3 13:24:28 2007
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Mon, 3 Sep 2007 11:24:28 -0600
Subject: [SciPy-user] How to free unused memory by Python
In-Reply-To: <46DC1967.4070900@unibo.it>
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com> <46DBD25C.5060107@unibo.it> <827183970709030635o5eb9d351r6f540d03be9de142@mail.gmail.com> <46DC1967.4070900@unibo.it>
Message-ID: <6ce0ac130709031024w60a251dfy674a94eb0726f9eb@mail.gmail.com>

I understand that valgrind does work with python. But, because of the way Python manages memory, you need to use a custom valgrind suppressions file that comes with the Python source. You can google around for the file:

valgrind-python.supp

Brian

On 9/3/07, massimo sandal wrote:
> william ratcliff wrote:
> > I never used it, but have you tried using valgrind?
>
> No. Does it work on Python code? I thought it only helped to debug
> executables. But I guess the output of valgrind running a Python
> program might give some hints... what do people think?
>
> m.
>
> -- 
> Massimo Sandal
> University of Bologna
> Department of Biochemistry "G.Moruzzi"
>
> snail mail:
> Via Irnerio 48, 40126 Bologna, Italy
>
> email:
> massimo.sandal at unibo.it
>
> tel: +39-051-2094388
> fax: +39-051-2094387
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From jks at iki.fi Mon Sep 3 13:30:45 2007
From: jks at iki.fi (Jouni K. Seppänen)
Date: Mon, 03 Sep 2007 20:30:45 +0300
Subject: [SciPy-user] How to free unused memory by Python
References: <712475.31193.qm@web27412.mail.ukl.yahoo.com> <46DBD25C.5060107@unibo.it>
Message-ID: 

massimo sandal writes:

> If you know of a memory debugging tool for python, let us know!
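Besides valgrind, the standard-library gc module is a simple first stop for leak hunting: with DEBUG_SAVEALL set, everything the collector finds unreachable is kept in gc.garbage for inspection instead of being freed. A minimal sketch (the Node class is invented purely for the demonstration):

```python
import gc


class Node(object):
    """Toy object used only to manufacture a reference cycle."""
    pass


def make_cycle():
    a, b = Node(), Node()
    a.partner, b.partner = b, a   # a <-> b cycle, unreachable on return


gc.set_debug(gc.DEBUG_SAVEALL)    # keep collected objects in gc.garbage
make_cycle()
unreachable = gc.collect()        # force a collection pass
leaked = [o for o in gc.garbage if isinstance(o, Node)]
print(unreachable, len(leaked))   # the two cycle members show up in leaked
gc.set_debug(0)                   # restore normal collector behaviour
```

Inspecting the types and referrers of the objects in gc.garbage (e.g. with gc.get_referrers) is often enough to locate the dangling references massimo describes.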
See Michael Droettboom's description of how he fixed a bunch of leaks in matplotlib:
http://article.gmane.org/gmane.comp.python.matplotlib.devel/2804

-- 
Jouni K. Seppänen
http://www.iki.fi/jks

From jallikattu at googlemail.com Tue Sep 4 01:46:35 2007
From: jallikattu at googlemail.com (morovia morovia)
Date: Tue, 4 Sep 2007 07:46:35 +0200
Subject: [SciPy-user] vectorizing a function inside a class
Message-ID: <72da94d60709032246k54663858ne8fba65a7c26a29c@mail.gmail.com>

Hello all,

I am trying to vectorize a function which resides in a class. Can anyone help me out? I have appended the test code below.

from scipy import vectorize

************************************************
def square(x):
    return x*x

v_sq = vectorize(square)

result = v_sq(x)

************************************************
I am just new to classes!

I just want to mimic the function above, which works. But inside the class structure, it does not!

from scipy import vectorize

class a:
    def __init__(self, x):
        self.x = x
    def square(self):
        return self.x*self.x
    def v_sq(self):
        vect_sq = vectorize(square)
        return vect_sq(self)

x = [1,2,3]
fn = a(x)
fn.v_sq()

Thanks in advance,
Morovia.

From tjerk.heijboer at unibas.ch Tue Sep 4 01:51:49 2007
From: tjerk.heijboer at unibas.ch (tjerk heijboer)
Date: Tue, 4 Sep 2007 07:51:49 +0200
Subject: [SciPy-user] vectorizing a function inside a class
In-Reply-To: <72da94d60709032246k54663858ne8fba65a7c26a29c@mail.gmail.com>
References: <72da94d60709032246k54663858ne8fba65a7c26a29c@mail.gmail.com>
Message-ID: <125A1959-83A1-4C94-B3F9-AD03CA70D7C6@unibas.ch>

use vectorize(self.square)?

cheers
tjerk

On 04 Sep 2007, at 07:46, morovia morovia wrote:
>
> Hello all,
>
> I am trying to vectorize a function which resides in a class.
> Can anyone help me out? I have appended the test code below.
>
> from scipy import vectorize
>
> ************************************************
> def square(x):
>     return x*x
>
> v_sq = vectorize(square)
>
> result = v_sq(x)
>
> ************************************************
> I am just new to classes!
>
> I just want to mimic the function above, which works.
> But inside the class structure, it does not!
>
> from scipy import vectorize
>
> class a:
>     def __init__(self, x):
>         self.x = x
>     def square(self):
>         return self.x*self.x
>     def v_sq(self):
>         vect_sq = vectorize(square)
>         return vect_sq(self)
>
> x = [1,2,3]
> fn = a(x)
> fn.v_sq()
>
> Thanks in advance,
> Morovia.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From ckkart at hoc.net Tue Sep 4 03:22:40 2007
From: ckkart at hoc.net (Christian K.)
Date: Tue, 4 Sep 2007 07:22:40 +0000 (UTC)
Subject: [SciPy-user] fmin_cobyla hangs
Message-ID: 

Hi,

Sometimes when running fmin_cobyla it never returns but keeps the CPU load high. The last output (with iprint=2) is the following:

   The initial value of RHO is 1.000000E+00 and PARMU is set to zero.

   NFVALS = 1   F = 1.635286E+03   MAXCV = 5.335792E-02
   X = -2.000000E-02  9.600000E-02  2.120000E-01  3.280000E-01  4.440000E-01
        5.600000E-01

   Increase in PARMU to 1.344559E+03

   Increase in PARMU to 4.872087E+04

Does anybody know enough of the internals of fmin_cobyla to imagine what is going on? In case it helps, I will try to build a sample script/data package.

Christian

From openopt at ukr.net Tue Sep 4 03:28:28 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 04 Sep 2007 10:28:28 +0300
Subject: [SciPy-user] fmin_cobyla hangs
In-Reply-To: 
References: 
Message-ID: <46DD091C.1070105@ukr.net>

Without a code sample we can't say anything.
Try the scikits.openopt ALGENCAN or lincher solvers (the latter requires CVXOPT installed, http://www.ee.ucla.edu/~vandenbe/cvxopt/; the former requires ALGENCAN with the Python connection installed, see http://www.ime.usp.br/~egbirgin/tango/py.php).

Get OpenOpt from svn:
svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt
(then see help(scikits.openopt.NLP))

However, afaik the scipy server is down for now.

Regards, D.

Christian K. wrote:
> Hi,
> Sometimes when running fmin_cobyla it never returns but keeps the CPU
> load high. The last output (with iprint=2) is the following:
>
>    The initial value of RHO is 1.000000E+00 and PARMU is set to zero.
>
>    NFVALS = 1   F = 1.635286E+03   MAXCV = 5.335792E-02
>    X = -2.000000E-02  9.600000E-02  2.120000E-01  3.280000E-01  4.440000E-01
>         5.600000E-01
>
>    Increase in PARMU to 1.344559E+03
>
>    Increase in PARMU to 4.872087E+04
>
> Does anybody know enough of the internals of fmin_cobyla to imagine what
> is going on? In case it helps, I will try to build a sample script/data
> package.
>
> Christian
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From matthieu.brucher at gmail.com Tue Sep 4 04:43:36 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Sep 2007 10:43:36 +0200
Subject: [SciPy-user] vectorizing a function inside a class
In-Reply-To: <72da94d60709032246k54663858ne8fba65a7c26a29c@mail.gmail.com>
References: <72da94d60709032246k54663858ne8fba65a7c26a29c@mail.gmail.com>
Message-ID: 

Hi,

vectorize() wraps a function whose arguments are the values to be operated on; the vectorized function then takes the same number of arguments, but as arrays. For what you want to do, vectorize can't help as written: it has no way of knowing that it should take its argument from self.x rather than have it passed in.
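tjerk's suggestion, written out: vectorize the *bound* method (so `self` is already attached) and apply it to the stored array. A sketch; the class mirrors the one in the question but is renamed, and the method is rewritten to take its element as a parameter:

```python
import numpy as np


class Squarer(object):
    def __init__(self, x):
        self.x = np.asarray(x)

    def square_one(self, value):
        # operates on a single element
        return value * value

    def v_sq(self):
        # np.vectorize(self.square_one) wraps the bound method, so the
        # resulting callable maps it over every element of self.x
        return np.vectorize(self.square_one)(self.x)


fn = Squarer([1, 2, 3])
print(fn.v_sq())   # [1 4 9]
```

Note that for plain elementwise arithmetic, `self.x * self.x` does the same thing far faster; vectorize is a convenience loop, not a speedup.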
Matthieu

2007/9/4, morovia morovia:
>
> Hello all,
>
> I am trying to vectorize a function which resides in a class.
> Can anyone help me out? I have appended the test code below.
>
> from scipy import vectorize
>
> ************************************************
> def square(x):
>     return x*x
>
> v_sq = vectorize(square)
>
> result = v_sq(x)
>
> ************************************************
> I am just new to classes!
>
> I just want to mimic the function above, which works.
> But inside the class structure, it does not!
>
> from scipy import vectorize
>
> class a:
>     def __init__(self, x):
>         self.x = x
>     def square(self):
>         return self.x*self.x
>     def v_sq(self):
>         vect_sq = vectorize(square)
>         return vect_sq(self)
>
> x = [1,2,3]
> fn = a(x)
> fn.v_sq()
>
> Thanks in advance,
> Morovia.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From meesters at uni-mainz.de Tue Sep 4 06:59:44 2007
From: meesters at uni-mainz.de (Christian Meesters)
Date: Tue, 4 Sep 2007 12:59:44 +0200
Subject: [SciPy-user] www.scipy.org down?
Message-ID: <1188903584.4990.21.camel@cmeesters>

Hi,

The scipy web page ( www.scipy.org ) seems to have been down for a while.
> Usually this is fixed rather soon, so I'm writing just in case it got > overlooked. yes, it has been down at least ~25-30 hours since you write the letter > (And I couldn't see any message here on the list related to > this.) > I also had mentioned it today > Christian > > So now it's up again. However, it falls down toooo often. Maybe due to great amount of students that were directed by their tutors to scipy. I wonder why can't scipy.org be duplicated (i.e. have mirror server, that will be activated automatically when 1st server is down). Also, I think it would be nice if a script was running (on other than scipy.org server), that would check scipy.org (for example, ping) and if no response - send a letter to server admin and/or scipy dev maillist automatically. D. From massimo.sandal at unibo.it Tue Sep 4 10:21:01 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 04 Sep 2007 16:21:01 +0200 Subject: [SciPy-user] www.scipy.org down? In-Reply-To: <46DD655B.9000808@ukr.net> References: <1188903584.4990.21.camel@cmeesters> <46DD655B.9000808@ukr.net> Message-ID: <46DD69CD.2080307@unibo.it> dmitrey ha scritto: > I wonder why can't scipy.org be duplicated (i.e. have mirror server, > that will be activated automatically when 1st server is down). Because it costs? :) m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From eric at enthought.com Tue Sep 4 10:29:32 2007 From: eric at enthought.com (eric jones) Date: Tue, 04 Sep 2007 09:29:32 -0500 Subject: [SciPy-user] www.scipy.org down? 
In-Reply-To: <46DD655B.9000808@ukr.net> References: <1188903584.4990.21.camel@cmeesters> <46DD655B.9000808@ukr.net> Message-ID: <46DD6BCC.6080807@enthought.com> Huge apologies about this. Having the server down for a short period is undesirable. Having it down for half a day is unacceptable. On the good side, the SciPy servers are apparently getting a whole lot more traffic this year than last year. On the bad side, the server/software hasn't scaled as well as one would hope. Jarrod talked to me about doing some sys admin work on scipy.org while we were at the SciPy conference. We discussed upgrading the OS, getting better monitor/re-start mechanisms in place, and, most importantly, getting a group of people from multiple different time zones in place to help in case of problems. Jeff Strunk is going to work on getting this organized, and he'll post something here soon. I believe he's going to set up a separate distribution to keep sys-admin issues off this list. Again, sorry about the down time, eric dmitrey wrote: > Christian Meesters wrote: > >> Hi, >> >> The scipy web page ( www.scipy.org ) seems to be down since a while. >> Usually this is fixed rather soon, so I'm writing just in case it got >> overlooked. >> > yes, it has been down at least ~25-30 hours since you write the letter > >> (And I couldn't see any message here on the list related to >> this.) >> >> > I also had mentioned it today > >> Christian >> >> >> > So now it's up again. > However, it falls down toooo often. > Maybe due to great amount of students that were directed by their tutors > to scipy. > I wonder why can't scipy.org be duplicated (i.e. have mirror server, > that will be activated automatically when 1st server is down). > Also, I think it would be nice if a script was running (on other than > scipy.org server), that would check scipy.org (for example, ping) and if > no response - send a letter to server admin and/or scipy dev maillist > automatically. > D. 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From ckkart at hoc.net Tue Sep 4 11:17:44 2007 From: ckkart at hoc.net (Christian K) Date: Wed, 05 Sep 2007 00:17:44 +0900 Subject: [SciPy-user] fmin_cobyla hangs In-Reply-To: <46DD091C.1070105@ukr.net> References: <46DD091C.1070105@ukr.net> Message-ID: dmitrey wrote: > W/o code sample we can't say anything. Sure. Some explanations on the code: The goal is to find a smooth baseline for scattered data f(x). Therefore I let fmin_cobyla minimize the difference between the data and a n-point spline subject to the constraint data-spline > 0. Additionally constraints on the first and second derivative are applied. To reduce minimisation time the difference and the derivatives are only calculated at a subset of the data points. The strange thing is, that when running the minimisation separated from the application within which it is usually run, it still hangs, but under different conditions (different number of constraints). In the sample code the minimisation is repeated within a for loop while one parameter is changed (which is the number of points taken to calculate the difference). On my system, when run within the application, fmin_cobyla hangs for i=78, the stand alone script however hangs for i=15. Script and data are attached. Thanks, Christian -------------- next part -------------- A non-text attachment was scrubbed... Name: cobyla_hangs.tar.gz Type: application/x-gzip Size: 3138 bytes Desc: not available URL: From fperez.net at gmail.com Tue Sep 4 13:28:26 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 4 Sep 2007 11:28:26 -0600 Subject: [SciPy-user] www.scipy.org down? 
In-Reply-To: <46DD6BCC.6080807@enthought.com> References: <1188903584.4990.21.camel@cmeesters> <46DD655B.9000808@ukr.net> <46DD6BCC.6080807@enthought.com> Message-ID: On 9/4/07, eric jones wrote: > Having the server down for a short period is undesirable. Having it > down for half a day is unacceptable. On the good side, the SciPy > servers are apparently getting a whole lot more traffic this year than > last year. On the bad side, the server/software hasn't scaled as well > as one would hope. And it's only going to get better/worse :) > Jarrod talked to me about doing some sys admin work on scipy.org while > we were at the SciPy conference. We discussed upgrading the OS, getting > better monitor/re-start mechanisms in place, and, most importantly, > getting a group of people from multiple different time zones in place to > help in case of problems. Jeff Strunk is going to work on getting this > organized, and he'll post something here soon. I believe he's going to > set up a separate distribution to keep sys-admin issues off this list. Mirroring? Scipy is becoming important enough to academic institutions that it might not be too hard to setup a few mirrors for at least the static content, that are geographically distributed. One or two big universities, one or two national labs and a few institutions in Europe (CERN perhaps, I know they use python a lot and they hosted EuroPython)? We have enough people on this list from academia and research labs that if it's technically viable (I'm not sure how to deal with Moin/Trac for that, if at all), it might be possible to get the ball rolling. Just an idea... 
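As an aside, the watchdog script dmitrey proposes earlier in the thread is only a few lines of standard-library Python. A minimal sketch follows (URL, addresses, and SMTP host are placeholders; it is shown with the modern urllib.request, where 2007-era code would have used urllib2, and a real deployment would add retries and rate-limiting):

```python
import smtplib
import urllib.request

SITE = "http://scipy.org"            # placeholder URL
ADMINS = ["admin@example.org"]       # placeholder addresses

def is_up(url, timeout=10):
    """True if the site answers an HTTP request at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

def alert_body(url):
    """Text of the mail sent to the admins when the check fails."""
    return "Automatic monitor: %s did not respond to an HTTP request." % url

def check_and_alert(url=SITE, admins=ADMINS, smtp_host="localhost"):
    # assumes a local MTA is listening on smtp_host
    if not is_up(url):
        msg = "Subject: %s appears to be down\n\n%s" % (url, alert_body(url))
        server = smtplib.SMTP(smtp_host)
        server.sendmail("monitor@example.org", admins, msg)
        server.quit()
```

Run from cron on a machine other than the one being monitored, e.g. every five minutes, so the monitor does not go down with the server it watches.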
Cheers, f From openopt at ukr.net Tue Sep 4 15:39:39 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 04 Sep 2007 22:39:39 +0300 Subject: [SciPy-user] fmin_cobyla hangs In-Reply-To: References: <46DD091C.1070105@ukr.net> Message-ID: <46DDB47B.8090705@ukr.net> Unfortunately, I can't run the script - I always got error why I try to import something from scipy.interpolate module. However, one thing I would pay attention: your function is non-smooth (since you have abs() in objective func, line 40 of your py-file). I had already encountered non-smooth funcs that made solvers intended for smooth funcs hang up. Of course, partially it shows absence of some loop stop criteria checks, but on the other hand, you take inappropriate solver. Of course, you may have other causes as well - for example, your problem becomes unbounded for some cases. Regards, D. Christian K wrote: > dmitrey wrote: > >> W/o code sample we can't say anything. >> > > Sure. Some explanations on the code: > The goal is to find a smooth baseline for scattered data f(x). Therefore I let > fmin_cobyla minimize the difference between the data and a n-point spline > subject to the constraint data-spline > 0. Additionally constraints on the first > and second derivative are applied. To reduce minimisation time the difference > and the derivatives are only calculated at a subset of the data points. > > The strange thing is, that when running the minimisation separated from the > application within which it is usually run, it still hangs, but under different > conditions (different number of constraints). In the sample code the > minimisation is repeated within a for loop while one parameter is changed (which > is the number of points taken to calculate the difference). On my system, when > run within the application, fmin_cobyla hangs for i=78, the stand alone script > however hangs for i=15. > > Script and data are attached. 
> > Thanks, Christian > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jstrunk at enthought.com Tue Sep 4 17:40:43 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Tue, 4 Sep 2007 16:40:43 -0500 Subject: [SciPy-user] www.scipy.org down? In-Reply-To: References: <1188903584.4990.21.camel@cmeesters> <46DD6BCC.6080807@enthought.com> Message-ID: <200709041640.43524.jstrunk@enthought.com> If you are interested in helping to reorganize and maintain the Scipy servers, please review http://scipy.org/EnthoughtHosting/Reorganization and join the mailing list at http://projects.scipy.org/mailman/listinfo/administration . Thank you, Jeff On Tuesday 04 September 2007 12:28 pm, Fernando Perez wrote: > On 9/4/07, eric jones wrote: > > Having the server down for a short period is undesirable. Having it > > down for half a day is unacceptable. On the good side, the SciPy > > servers are apparently getting a whole lot more traffic this year than > > last year. On the bad side, the server/software hasn't scaled as well > > as one would hope. > > And it's only going to get better/worse :) > > > Jarrod talked to me about doing some sys admin work on scipy.org while > > we were at the SciPy conference. We discussed upgrading the OS, getting > > better monitor/re-start mechanisms in place, and, most importantly, > > getting a group of people from multiple different time zones in place to > > help in case of problems. Jeff Strunk is going to work on getting this > > organized, and he'll post something here soon. I believe he's going to > > set up a separate distribution to keep sys-admin issues off this list. > > Mirroring? 
Scipy is becoming important enough to academic > institutions that it might not be too hard to setup a few mirrors for > at least the static content, that are geographically distributed. One > or two big universities, one or two national labs and a few > institutions in Europe (CERN perhaps, I know they use python a lot and > they hosted EuroPython)? > > We have enough people on this list from academia and research labs > that if it's technically viable (I'm not sure how to deal with > Moin/Trac for that, if at all), it might be possible to get the ball > rolling. > > Just an idea... > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From ckkart at hoc.net Wed Sep 5 00:59:37 2007 From: ckkart at hoc.net (Christian K.) Date: Wed, 05 Sep 2007 13:59:37 +0900 Subject: [SciPy-user] fmin_cobyla hangs In-Reply-To: <46DDB47B.8090705@ukr.net> References: <46DD091C.1070105@ukr.net> <46DDB47B.8090705@ukr.net> Message-ID: dmitrey wrote: > Unfortunately, I can't run the script - I always got error why I try to > import something from scipy.interpolate module. > However, one thing I would pay attention: your function is non-smooth > (since you have abs() in objective func, line 40 of your py-file). I I replaced abs by **2 but it still hangs. Any other ideas? I'd prefer not to introduce more dependencies so I would really like to stick with any scipy optimizer. Thanks anyway for your comments, Christian From ckkart at hoc.net Wed Sep 5 01:29:42 2007 From: ckkart at hoc.net (Christian K.) Date: Wed, 05 Sep 2007 14:29:42 +0900 Subject: [SciPy-user] fmin_cobyla hangs -- doesn't on windows! In-Reply-To: References: <46DD091C.1070105@ukr.net> Message-ID: Christian K wrote: > dmitrey wrote: >> W/o code sample we can't say anything. > > Sure. Some explanations on the code: > The goal is to find a smooth baseline for scattered data f(x). 
Therefore I let > fmin_cobyla minimize the difference between the data and a n-point spline > subject to the constraint data-spline > 0. Additionally constraints on the first > and second derivative are applied. To reduce minimisation time the difference > and the derivatives are only calculated at a subset of the data points. > > The strange thing is, that when running the minimisation separated from the > application within which it is usually run, it still hangs, but under different > conditions (different number of constraints). In the sample code the > minimisation is repeated within a for loop while one parameter is changed (which > is the number of points taken to calculate the difference). On my system, when > run within the application, fmin_cobyla hangs for i=78, the stand alone script > however hangs for i=15. > > Script and data are attached. I just noticed that it won't hang on windows (python 2.5, scipy 0.5.2, numpy 1.0.3). Or at least I haven't managed so far. Christian From lucasjb at csse.unimelb.edu.au Wed Sep 5 05:17:09 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Wed, 5 Sep 2007 19:17:09 +1000 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: Hi, Sorry to try to revive a thread from March, but I wonder if David or Raphael got any further with the Solaris build and install. I've been trying to build SciPy for Solaris9/x86 but I've got no prior experience with it, so I've struggled to get anywhere either with the general build instructions or trying to use the Sun Performance Library. Any pointers would be helpful, some kind of step-by-step would great if such a thing exists? Regards, -- Lucas Barbuto From ckkart at hoc.net Wed Sep 5 07:49:00 2007 From: ckkart at hoc.net (Christian K) Date: Wed, 05 Sep 2007 20:49:00 +0900 Subject: [SciPy-user] fmin_cobyla hangs -- doesn't on windows! In-Reply-To: References: <46DD091C.1070105@ukr.net> Message-ID: Christian K. 
wrote: > > I just noticed that it won't hang on windows (python 2.5, scipy 0.5.2, > numpy 1.0.3). Or at least I haven't managed so far. I just upgraded to those versions on linux too and the problem seems to be solved. Sorry for the noise. Christian From ckkart at hoc.net Wed Sep 5 07:50:43 2007 From: ckkart at hoc.net (Christian K) Date: Wed, 05 Sep 2007 20:50:43 +0900 Subject: [SciPy-user] missing info.py in odr, scipy 0.5.2.1 Message-ID: Hi, I think the info.py file is missing in the odr dir of scipy 0.5.2.1: Python 2.5.1 (r251:54863, May 2 2007, 16:56:35) [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.5.2.1' >>> from scipy import odr Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/site-packages/scipy/odr/__init__.py", line 5, in <module> from info import __doc__ ImportError: No module named info Christian From david at ar.media.kyoto-u.ac.jp Wed Sep 5 07:51:32 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 05 Sep 2007 20:51:32 +0900 Subject: [SciPy-user] How to free unused memory by Python In-Reply-To: References: <712475.31193.qm@web27412.mail.ukl.yahoo.com> <46DBD25C.5060107@unibo.it> Message-ID: <46DE9844.1030106@ar.media.kyoto-u.ac.jp> Jouni K. Seppänen wrote: > massimo sandal writes: > > >> If you know of a memory debugging tool for python, let us know! >> > > See Michael Droettboom's description of how he fixed a bunch of leaks in > matplotlib: > > http://article.gmane.org/gmane.comp.python.matplotlib.devel/2804 > > One really useful tool with valgrind for finding memory leaks is massif: http://valgrind.org/docs/manual/ms-manual.html It is really easy to use, and can be used to detect and track down a memory leak if you can easily reproduce the problem. I've used it myself to squash some memory leaks in scipy/numpy (C extensions mostly, but not only).
It is extremely useful for improving the performance of some code, too (to detect whether you create/recreate temporaries). David From ryan2057 at gmx.de Wed Sep 5 10:30:55 2007 From: ryan2057 at gmx.de (J. K.) Date: Wed, 05 Sep 2007 16:30:55 +0200 Subject: [SciPy-user] ODR example code needed Message-ID: Hi, I am trying to use odr. I was able to get my programme to run on my data, and I can see the data using pprint. Now the newbie problem: How do I use the data? I need to calculate stuff with it, and I want to save it to a file. I was trying to figure out the odr.Output, but I failed. Could someone copy and paste a working example, so that I can work with that and with the built-in help? Thanks a lot, Jack K. From pepe_kawumi at yahoo.co.uk Wed Sep 5 11:06:41 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Wed, 5 Sep 2007 15:06:41 +0000 (GMT) Subject: [SciPy-user] Breaking up a vector into its components Message-ID: <615783.98568.qm@web27714.mail.ukl.yahoo.com> Hullo, I have solved a problem in matlab but want to convert it to python. vec_field(ii,jj,:) returns the components of the vector. The matlab operation is shown below; I have just used dummy variables to explain what I want to do. vec_field(ii,jj,:)=[n m], where the right-hand side is a 1*2 matrix. I'm having a problem finding the right Python syntax that will reproduce the pattern shown below.

vec_field(1,1,:)=[4 2]
vec_field(:,:,1) =
     4
vec_field(:,:,2) =
     2
>> vec_field(1,2,:)=[3 4]
vec_field(:,:,1) =
     4     3
vec_field(:,:,2) =
     2     4
>> vec_field(2,1,:)=[5 7]
vec_field(:,:,1) =
     4     3
     5     0
vec_field(:,:,2) =
     2     4
     7     0
>> vec_field(2,2,:)=[8 9]
vec_field(:,:,1) =
     4     3
     5     8
vec_field(:,:,2) =
     2     4
     7     9

Is there a command in python that will enable me to do this? Thanks Perez ___________________________________________________________ Yahoo!
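For the archive: the MATLAB session above maps almost one-to-one onto a NumPy 3-D array. The main differences are that NumPy indexes from 0 and does not grow arrays implicitly on assignment, so the array is allocated at its full shape first (the shape and dtype here are assumptions based on the example):

```python
import numpy as np

# MATLAB grows vec_field on assignment; NumPy needs the full shape up front.
vec_field = np.zeros((2, 2, 2), dtype=int)

vec_field[0, 0, :] = [4, 2]   # MATLAB: vec_field(1,1,:) = [4 2]
vec_field[0, 1, :] = [3, 4]   #         vec_field(1,2,:) = [3 4]
vec_field[1, 0, :] = [5, 7]   #         vec_field(2,1,:) = [5 7]
vec_field[1, 1, :] = [8, 9]   #         vec_field(2,2,:) = [8 9]

print(vec_field[:, :, 0])     # first components, like vec_field(:,:,1)
print(vec_field[:, :, 1])     # second components, like vec_field(:,:,2)
print(vec_field[0, 0, :])     # one vector, like vec_field(1,1,:)
```

The slice `vec_field[ii, jj, :]` then returns the components of the vector at (ii, jj), just as in the MATLAB version.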
Messenger - NEW crystal clear PC to PC calling worldwide with voicemail http://uk.messenger.yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdu.xiaojf at gmail.com Wed Sep 5 11:33:03 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Wed, 05 Sep 2007 23:33:03 +0800 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton Message-ID: <46DECC2F.3090000@gmail.com> Hi all, I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton. However the problem isn't so simple. There are bound constraints for my equation: the equation cannot be evaluated when x is out of [Min, Max], but the root is always in the interval of [Min, Max] When newton() iterates to find a root, it sometimes try to evaluate the equation with a x out of [Min, Max], and then error occurs. How to solve this problem ? I couldn't easily find two points with different signs every time, so methods like brentq don't work here. Thanks a lot! Regards From openopt at ukr.net Wed Sep 5 11:41:53 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 05 Sep 2007 18:41:53 +0300 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <46DECC2F.3090000@gmail.com> References: <46DECC2F.3090000@gmail.com> Message-ID: <46DECE41.7020507@ukr.net> one of possible solutions - try to minimize f(x)^2, subjected to constraints lb, ub, via solver lbfgsb or tnc D fdu.xiaojf at gmail.com wrote: > Hi all, > > I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton. > > However the problem isn't so simple. There are bound constraints for my > equation: the equation cannot be evaluated when x is out of [Min, Max], but > the root is always in the interval of [Min, Max] > > When newton() iterates to find a root, it sometimes try to evaluate the > equation with a x out of [Min, Max], and then error occurs. > > How to solve this problem ? 
> > I couldn't easily find two points with different signs every time, so methods > like brentq don't work here. > > Thanks a lot! > > Regards > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From eric at enthought.com Wed Sep 5 12:27:23 2007 From: eric at enthought.com (eric jones) Date: Wed, 05 Sep 2007 11:27:23 -0500 Subject: [SciPy-user] www.scipy.org down? In-Reply-To: References: <1188903584.4990.21.camel@cmeesters> <46DD655B.9000808@ukr.net> <46DD6BCC.6080807@enthought.com> Message-ID: <46DED8EB.1070509@enthought.com> Hey Fernando, That is certainly a good option, and we've already gotten one offer in this regard which is cool. Once we get the current site in good shape we can look into all the options. see ya, eric Fernando Perez wrote: > On 9/4/07, eric jones wrote: > > >> Having the server down for a short period is undesirable. Having it >> down for half a day is unacceptable. On the good side, the SciPy >> servers are apparently getting a whole lot more traffic this year than >> last year. On the bad side, the server/software hasn't scaled as well >> as one would hope. >> > > And it's only going to get better/worse :) > > >> Jarrod talked to me about doing some sys admin work on scipy.org while >> we were at the SciPy conference. We discussed upgrading the OS, getting >> better monitor/re-start mechanisms in place, and, most importantly, >> getting a group of people from multiple different time zones in place to >> help in case of problems. Jeff Strunk is going to work on getting this >> organized, and he'll post something here soon. I believe he's going to >> set up a separate distribution to keep sys-admin issues off this list. >> > > Mirroring? Scipy is becoming important enough to academic > institutions that it might not be too hard to setup a few mirrors for > at least the static content, that are geographically distributed. 
One > or two big universities, one or two national labs and a few > institutions in Europe (CERN perhaps, I know they use python a lot and > they hosted EuroPython)? > > We have enough people on this list from academia and research labs > that if it's technically viable (I'm not sure how to deal with > Moin/Trac for that, if at all), it might be possible to get the ball > rolling. > > Just an idea... > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From sandricionut at yahoo.com Wed Sep 5 12:42:47 2007 From: sandricionut at yahoo.com (sandric ionut) Date: Wed, 5 Sep 2007 09:42:47 -0700 (PDT) Subject: [SciPy-user] StandardError: 'NoneType' object has no attribute '_o' Message-ID: <771845.97405.qm@web51310.mail.re2.yahoo.com> Hi: I am using Gdal 142 version on Windows XP sp2 with Numeric-24.2, SciPy 0.5.2.1, numpy 1.0.3.1. for Python 2.4. I want to use genericfunctions from scipy.ndimage. I have tried with the example provided in numarray users' manual release 1.5: >>> def fnc(iline, oline): ... oline[...] = iline[:-2] + 2 * iline[1:-1] + 3 * iline[2:] ... 
>>> print generic_filter1d(a, fnc, 3) [[ 3 8 14 17] [27 32 38 41] [51 56 62 65]] Everything works OK and I get the proper result, but when I try to use gdalnumeric to save the array to a image format (tif for example, or any other format supported by gdal) I get the following error: File "C:\Python24\Lib\site-packages\gdalnumeric.py", line 123, in SaveArray return driver.CreateCopy( filename, OpenArray(src_array,prototype) ) File "C:\Python24\Lib\site-packages\gdal.py", line 592, in CreateCopy target_ds = _gdal.GDALCreateCopy( self._o, filename, source_ds._o, StandardError: 'NoneType' object has no attribute '_o' How can I save an array into a image format supported by Gdal I don't know what I do wrong Thank you Ionut ____________________________________________________________________________________ Luggage? GPS? Comic books? Check out fitting gifts for grads at Yahoo! Search http://search.yahoo.com/search?fr=oni_on_mail&p=graduation+gifts&cs=bz -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrywark at gmail.com Wed Sep 5 13:55:30 2007 From: barrywark at gmail.com (Barry Wark) Date: Wed, 5 Sep 2007 10:55:30 -0700 Subject: [SciPy-user] ODR example code needed In-Reply-To: References: Message-ID: Here's a routine to fit an exponential of the form sign*(A*e^{-t/tau} + offset) using scipy.optimize.odr. 
def fitExponentialODR(y, x0, t, sign, fitType, verbose, maxIterations): """ Fit exponential of form y = sign * (A*e^(-t/tau) + offset) using S.optimize.odr y: target array (must be rank 1) x0: initial guess of (A, tau, offset) t: array sign: +/-1 fitType: ODR_LSQ or ODR_EXP @returns (A, tau, offset, fit, sd_amp, sd_tau, sd_offset) """ assert(0<=fitType<=2) R = fitODR(exponentialDecay, t, y, x0, fitType, verbose, maxIterations, sign) if(R==None): return None else: (output, fit) = R A = output.beta[0] tau = output.beta[1] offset = output.beta[2] sd_amp = output.sd_beta[0] sd_tau = output.sd_beta[1] sd_offset = output.sd_beta[2] return (A, tau, offset, fit, sd_amp, sd_tau, sd_offset) def fitODR(fn, x, y, x0, fitType, verbose, maxIterations, *args): """ fn function to fit x function input y target function output x0 initial parameters fitType One of ODR_LSQ (least squares) or ODR_EXP (explicit ODR) verbose If true, print verbose output on error. Else, fail silently (return None) maxIterations Maximum number of iterations without convergence. Re-run will use double @returns (odr.output, fit) """ assert(0 <= fitType <= 2) model = S.odr.Model(fn, extra_args=args) data = S.odr.Data(x, y) odr = S.odr.ODR(data, model, x0, maxit=maxIterations) odr.set_job(fit_type=fitType) output = odr.run() if(output.info >= 4): #no convergence or ODRPACK thinks results are "questionable" if(verbose): print 'ODRPACK unable to find solution:\n', '\n'.join(output.stopreason) print 'Retrying fit...' odr.maxit = 2*maxIterations output = odr.restart() if(output.info >= 4): if(verbose): print 'ODRPACK still unable to find solution: \n', '\n\t\t'.join(output.stopreason) return None return (output, fn(output.beta, x, *args)) On 9/5/07, J. K. wrote: > Hi, > > I am trying to use odr. I was able to get my programme to run on my > data, I can see the data using pprint. > Now the newbie problem: How do I use the data? I want need to calculate > stuff with it, I want to save it to a file. 
I was trying to figure out > the odr.Output, but I failed. > Could someone copy and paste a working example code, so that I can work > with that and the help included in the help? > > Thanks a lot, > Jack K. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Sep 5 14:06:09 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Sep 2007 13:06:09 -0500 Subject: [SciPy-user] ODR example code needed In-Reply-To: References: Message-ID: <46DEF011.9090208@gmail.com> J. K. wrote: > Hi, > > I am trying to use odr. I was able to get my programme to run on my > data, I can see the data using pprint. > Now the newbie problem: How do I use the data? I want need to calculate > stuff with it, I want to save it to a file. I was trying to figure out > the odr.Output, but I failed. > Could someone copy and paste a working example code, so that I can work > with that and the help included in the help? I don't know how to make the Output docstring any clearer than it is. And without knowing what you want to "use the data" for, I can't give you a meaningful example. Can you be more specific about what is confusing? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Wed Sep 5 15:17:37 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 05 Sep 2007 21:17:37 +0200 Subject: [SciPy-user] scipy r3303: test fails: AttributeError: 'test_pilutil' object has no attribute '_exc_info' Message-ID: <46DF00D1.1090307@gmx.net> Hi all I encountered 2 problems with the latest svn version (which I report in 2 posts): The first one: scipy.test() fails. Attached is the output of scipy.test(). 
Before installing, I removed any old build dir and install dir (site-packages/{numpy|scipy}) and everything. I can send build/install logs of numpy/scipy if needed. Versions are: In [2]: scipy.__version__ Out[2]: '0.7.0.dev3303' In [3]: scipy.__numpy_version__ Out[3]: '1.0.4.dev4026' TIA -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy.test.log Type: text/x-log Size: 5228 bytes Desc: not available URL: From elcorto at gmx.net Wed Sep 5 15:18:28 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 05 Sep 2007 21:18:28 +0200 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails Message-ID: <46DF0104.3050508@gmx.net> Hi This is the second problem I found (with 0.7.0.dev3303): $ python -c "from scipy import fftpack" Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 13, in ? import _fftpack as fftpack ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: undefined symbol: zfftnd_fftw -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From robert.kern at gmail.com Wed Sep 5 15:21:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Sep 2007 14:21:49 -0500 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails In-Reply-To: <46DF0104.3050508@gmx.net> References: <46DF0104.3050508@gmx.net> Message-ID: <46DF01CD.2050305@gmail.com> Steve Schmerler wrote: > Hi > > This is the second problem I found (with 0.7.0.dev3303): > > $ python -c "from scipy import fftpack" > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line > 10, in ? 
> from basic import * > File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line > 13, in ? > import _fftpack as fftpack > ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: > undefined symbol: zfftnd_fftw This is a build problem on your end. In order to help you debug it, we'll have to know how you configured your scipy (i.e. your site.cfg and possibly any relevant environment variables you might have set), the locations of your FFTW libraries, and the output of "python setup.py config". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Wed Sep 5 15:34:32 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 05 Sep 2007 21:34:32 +0200 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails In-Reply-To: <46DF01CD.2050305@gmail.com> References: <46DF0104.3050508@gmx.net> <46DF01CD.2050305@gmail.com> Message-ID: <46DF04C8.8000602@gmx.net> Robert Kern wrote: > Steve Schmerler wrote: >> Hi >> >> This is the second problem I found (with 0.7.0.dev3303): >> >> $ python -c "from scipy import fftpack" >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line >> 10, in ? >> from basic import * >> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line >> 13, in ? >> import _fftpack as fftpack >> ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: >> undefined symbol: zfftnd_fftw > > This is a build problem on your end. In order to help you debug it, we'll have > to know how you configured your scipy (i.e. your site.cfg and possibly any > relevant environment variables you might have set), the locations of your FFTW > libraries, and the output of "python setup.py config". 
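As background for this exchange, a minimal site.cfg of the kind under discussion, pointing numpy.distutils at Debian's SuiteSparse headers, might look like the fragment below (the section names are assumptions, not taken from the thread, and must match what your scipy build actually reads):

```ini
[amd]
include_dirs = /usr/include/suitesparse

[umfpack]
include_dirs = /usr/include/suitesparse
```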
> OK, here's the log of python setup.py config. My site.cfg only points setup.py to /usr/include/suitesparse/ on Debian. Thanks! -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: site.cfg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: setup.py_config.log Type: text/x-log Size: 5348 bytes Desc: not available URL: From robert.kern at gmail.com Wed Sep 5 15:44:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Sep 2007 14:44:29 -0500 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails In-Reply-To: <46DF0104.3050508@gmx.net> References: <46DF0104.3050508@gmx.net> Message-ID: <46DF071D.4010500@gmail.com> Steve Schmerler wrote: > Hi > > This is the second problem I found (with 0.7.0.dev3303): > > $ python -c "from scipy import fftpack" > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line > 10, in ? > from basic import * > File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line > 13, in ? > import _fftpack as fftpack > ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: > undefined symbol: zfftnd_fftw My apologies, I was wrong. This is one of our symbols, not one from the FFTW libraries. I'm not sure what could be wrong, here. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From dominique.orban at gmail.com Wed Sep 5 17:33:07 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Wed, 5 Sep 2007 17:33:07 -0400 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <46DECC2F.3090000@gmail.com> References: <46DECC2F.3090000@gmail.com> Message-ID: <8793ae6e0709051433l6276f61ar23c621e027b1757e@mail.gmail.com> On 9/5/07, fdu.xiaojf at gmail.com wrote: > > Hi all, > > I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton. > > However the problem isn't so simple. There are bound constraints for my > equation: the equation cannot be evaluated when x is out of [Min, Max], > but > the root is always in the interval of [Min, Max] > > When newton() iterates to find a root, it sometimes try to evaluate the > equation with a x out of [Min, Max], and then error occurs. > > How to solve this problem ? > > I couldn't easily find two points with different signs every time, so > methods > like brentq don't work here. There are variants of Newton's method that can handle bound constraints. However, another way to treat your problem would be to solve the optimization problem: minimize 0 subject to f(x) = 0, and Min <= x <= Max. The objective function of this problem is constant, so any x satisfying the constraints is optimal, and is what you are looking for. Note however that you now have an optimization problem with nonlinear equality constraints. If a solver isn't able to identify a point satisfying the constraints, it will usually guarantee some sort of 'proximity property', i.e., the final iterate will minimize the residual of constraints in some sense. Dominique -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fdu.xiaojf at gmail.com Wed Sep 5 21:57:41 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Thu, 06 Sep 2007 09:57:41 +0800 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <8793ae6e0709051433l6276f61ar23c621e027b1757e@mail.gmail.com> References: <46DECC2F.3090000@gmail.com> <8793ae6e0709051433l6276f61ar23c621e027b1757e@mail.gmail.com> Message-ID: <46DF5E95.8010800@gmail.com> Thanks Dominique and dmitrey, Dominique Orban wrote: > > > On 9/5/07, *fdu.xiaojf at gmail.com * > > wrote: > > Hi all, > > I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton . > > However the problem isn't so simple. There are bound constraints for my > equation: the equation cannot be evaluated when x is out of [Min, > Max], but > the root is always in the interval of [Min, Max] > > When newton() iterates to find a root, it sometimes try to evaluate the > equation with a x out of [Min, Max], and then error occurs. > > How to solve this problem ? > > I couldn't easily find two points with different signs every time, > so methods > like brentq don't work here. > > > There are variants of Newton's method that can handle bound constraints. > However, another way to treat your problem would be to solve the > optimization problem: Are there any trust region optimization method with python interface ? > > minimize 0 > subject to f(x) = 0, and Min <= x <= Max. > > The objective function of this problem is constant, so any x satisfying > the constraints is optimal, and is what you are looking for. Note > however that you now have an optimization problem with nonlinear > equality constraints. > > If a solver isn't able to identify a point satisfying the constraints, > it will usually guarantee some sort of 'proximity property', i.e., the > final iterate will minimize the residual of constraints in some sense. 
> > Dominique > I can transform my problem to a bound constrained optimization problem, but I still have to find an optimization solver who only evaluate my equation in trust region([Min, Max]). Will lbfgsb or tnc meet the requirement ? Regards, From peridot.faceted at gmail.com Wed Sep 5 22:21:50 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 5 Sep 2007 22:21:50 -0400 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <46DECC2F.3090000@gmail.com> References: <46DECC2F.3090000@gmail.com> Message-ID: On 05/09/07, fdu.xiaojf at gmail.com wrote: > I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton. > > However the problem isn't so simple. There are bound constraints for my > equation: the equation cannot be evaluated when x is out of [Min, Max], but > the root is always in the interval of [Min, Max] > > When newton() iterates to find a root, it sometimes try to evaluate the > equation with a x out of [Min, Max], and then error occurs. > > How to solve this problem ? > > I couldn't easily find two points with different signs every time, so methods > like brentq don't work here. Are you sure your function has a zero at all? If it's something like a polynomial, you may find that sometimes it fails to have a root, which will of course be a problem for a root-finding algorithm. It's probably a good idea to look at this as two problems: * Find points of opposite sign in your interval. * Narrow this down to an actual root. Once you've done the first, the second can be done using (say) brentq without worrying that you're going to leave the interval of interest. So how do you find a place where your function crosses the y-axis? Ideally you'd know something about it analytically. But it sounds like you've tried that, to no avail. You could blindly evaluate the function, perhaps on a grid, a pseudorandom or subrandom sequence of points, hoping to find one that gave a negative value. 
You could run a one-dimensional minimizer, with a wrapper around your function that raises an exception as soon as it sees a negative value, but here too to get started you need three points where the middle one is the lowest. If you're really stuck, you can try one of the constrained multidimensional minimizers (but be warned some of them evaluate at points that violate the constraints!), again with the exception-raising trick to bail out as soon as you've found a point with a negative value. Anne M. Archibald From matthieu.brucher at gmail.com Thu Sep 6 03:36:38 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Sep 2007 09:36:38 +0200 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <46DF5E95.8010800@gmail.com> References: <46DECC2F.3090000@gmail.com> <8793ae6e0709051433l6276f61ar23c621e027b1757e@mail.gmail.com> <46DF5E95.8010800@gmail.com> Message-ID: > > Are there any trust region optimization method with python interface ? > The openopt scikit will have one of those in the future ;) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From schut at sarvision.nl Thu Sep 6 03:46:15 2007 From: schut at sarvision.nl (Vincent Schut) Date: Thu, 06 Sep 2007 09:46:15 +0200 Subject: [SciPy-user] StandardError: 'NoneType' object has no attribute '_o' In-Reply-To: <771845.97405.qm@web51310.mail.re2.yahoo.com> References: <771845.97405.qm@web51310.mail.re2.yahoo.com> Message-ID: <46DFB047.5000203@sarvision.nl> Sandric, I'm forwarding your message to the gdal mailing list, because this seems merely related to gdal, less to numpy. Btw, if you are mainly using numpy arrays instead of (deprecated) numeric arrays, you might want to look into gdal_array.py, which links gdal to numpy, instead of gdalnumeric.py, which links gdal to numeric. Though I don't know if that is provided on windows binary builds by default, being a linux-only user... Someone on the gdal list will know, probably. 
Vincent. sandric ionut wrote: > Hi: > I am using Gdal 142 version on Windows XP sp2 with Numeric-24.2, SciPy > 0.5.2.1, numpy 1.0.3.1. for Python 2.4. > I want to use genericfunctions from scipy.ndimage. I have tried with > the example provided in numarray users' manual release 1.5: > >>> def fnc(iline, oline): > ... oline[...] = iline[:-2] + 2 * iline[1:-1] + 3 * iline[2:] > ... > >>> print generic_filter1d(a, fnc, 3) > [[ 3 8 14 17] > [27 32 38 41] > [51 56 62 65]] > > Everything works OK and I get the proper result, but when I try to use > gdalnumeric to save the array to a image format (tif for example, or > any other format supported by gdal) I get the following error: > > File "C:\Python24\Lib\site-packages\gdalnumeric.py", line 123, in > SaveArray > return driver.CreateCopy( filename, OpenArray(src_array,prototype) ) > File "C:\Python24\Lib\site-packages\gdal.py", line 592, in CreateCopy > target_ds = _gdal.GDALCreateCopy( self._o, filename, source_ds._o, > StandardError: 'NoneType' object has no attribute '_o' > How can I save an array into a image format supported by Gdal > I don't know what I do wrong > > Thank you > > Ionut > > ------------------------------------------------------------------------ > Need a vacation? Get great deals to amazing places > on > Yahoo! Travel. > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryan2057 at gmx.de Thu Sep 6 04:18:08 2007 From: ryan2057 at gmx.de (J. K.) Date: Thu, 06 Sep 2007 10:18:08 +0200 Subject: [SciPy-user] ODR example code needed In-Reply-To: References: Message-ID: Thank you for your code example, I now understand how to grab the output. Jack K. 
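[An aside on the root-finding thread above: Anne's two-step recipe — first hunt for a sign change inside [Min, Max], then narrow it down with brentq, which only ever samples inside the bracket it is given — can be sketched as follows. The function f and the interval here are invented stand-ins for the poster's actual problem.]

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # Hypothetical stand-in for the poster's function,
    # only defined on [xmin, xmax].
    return np.cos(3.0 * x) - 0.2

xmin, xmax = 0.0, 1.0

# Step 1: scan a grid inside [xmin, xmax] for a sign change.
grid = np.linspace(xmin, xmax, 201)
vals = f(grid)
crossings = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))
if crossings.size == 0:
    raise ValueError("no sign change found on the grid")
a, b = grid[crossings[0]], grid[crossings[0] + 1]

# Step 2: brentq only evaluates f inside [a, b], so the
# bound constraint is respected automatically.
root = brentq(f, a, b)
```

[Unlike newton, this never steps outside the interval; the price is the initial grid scan, whose resolution you must choose coarsely enough to be cheap but finely enough not to skip over a pair of close roots.]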
From david at ar.media.kyoto-u.ac.jp Thu Sep 6 04:20:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 06 Sep 2007 17:20:51 +0900 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails In-Reply-To: <46DF071D.4010500@gmail.com> References: <46DF0104.3050508@gmx.net> <46DF071D.4010500@gmail.com> Message-ID: <46DFB863.7000503@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > Steve Schmerler wrote: > >> Hi >> >> This is the second problem I found (with 0.7.0.dev3303): >> >> $ python -c "from scipy import fftpack" >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line >> 10, in ? >> from basic import * >> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line >> 13, in ? >> import _fftpack as fftpack >> ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: >> undefined symbol: zfftnd_fftw >> > > My apologies, I was wrong. This is one of our symbols, not one from the FFTW > libraries. I'm not sure what could be wrong, here. > > I should be the one to apologize. There was a typo when I cleaned up some source; I thought I checked in the fix, but forgot to do it. This should be fixed in r3306. cheers, David From ryan2057 at gmx.de Thu Sep 6 04:29:03 2007 From: ryan2057 at gmx.de (J. K.) Date: Thu, 06 Sep 2007 10:29:03 +0200 Subject: [SciPy-user] ODR example code needed In-Reply-To: <46DEF011.9090208@gmail.com> References: <46DEF011.9090208@gmail.com> Message-ID: > I don't know how to make the Output docstring any clearer than it is. And > without knowing what you want to "use the data" for, I can't give you a > meaningful example. Can you be more specific about what is confusing? I think it is a combination of me being new to python and the fact that english is not my mother tongue. While I understand everyday english (movies, books, newspapers), technical english is much harder to understand. 
That's why examples in the man pages help me most. I learned how to input data into odr by using the short example given. Maybe you could expand the example in the "basic use" section a bit: mybeta = myoutput.beta[1] I know, my problem is pretty basic, but I studied chemistry and I am just now getting acquainted to Python. (I need to process a large number of files and fit a polynom to it) Jack K. PS: I am pretty amazed that the developer of the module answers question and helps end users in a newsgroup. That's one of the major reasons I love Open Source. Thank you. From fredmfp at gmail.com Thu Sep 6 06:07:21 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 12:07:21 +0200 Subject: [SciPy-user] mean of arrays... Message-ID: <46DFD159.6070100@gmail.com> Hi, I want to compute the element wise mean of 2D or 3D arrays (~100). Is there a scipy function to do this ? TIA. Cheers, -- http://scipy.org/FredericPetit From matthieu.brucher at gmail.com Thu Sep 6 06:12:16 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Sep 2007 12:12:16 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <46DFD159.6070100@gmail.com> References: <46DFD159.6070100@gmail.com> Message-ID: What about numpy.mean() ? Matthieu 2007/9/6, fred : > > Hi, > > I want to compute the element wise mean of 2D or 3D arrays (~100). > > Is there a scipy function to do this ? > > TIA. > > Cheers, > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Sep 6 06:16:10 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 6 Sep 2007 12:16:10 +0200 Subject: [SciPy-user] mean of arrays... 
In-Reply-To: <46DFD159.6070100@gmail.com> References: <46DFD159.6070100@gmail.com> Message-ID: <20070906101610.GL20366@clipper.ens.fr> On Thu, Sep 06, 2007 at 12:07:21PM +0200, fred wrote: > I want to compute the element wise mean of 2D or 3D arrays (~100). Stack all these n-arrays along an n+1 dimension, and use the numpy.mean function, specifying the axis as n+1. Gaël From fredmfp at gmail.com Thu Sep 6 08:19:59 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 14:19:59 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <20070906101610.GL20366@clipper.ens.fr> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> Message-ID: <46DFF06F.3020005@gmail.com> Gael Varoquaux a écrit : > Stack all these n-arrays along an n+1 dimension, and use the numpy.mean > function, specifying the axis as n+1. > Nickel ! Thanks. -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Sep 6 08:33:35 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 14:33:35 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <20070906101610.GL20366@clipper.ens.fr> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> Message-ID: <46DFF39F.7050203@gmail.com> Gael Varoquaux a écrit : > On Thu, Sep 06, 2007 at 12:07:21PM +0200, fred wrote: > >> I want to compute the element wise mean of 2D or 3D arrays (~100). >> > > Stack all these n-arrays along an n+1 dimension, and use the numpy.mean > function, specifying the axis as n+1. > I can get it working by hand, if dimension is fixed, no problem. But how can I do this for a n+1 dimension array ?: - create the array; - fill the array. I have one solution: test if my arrays are 2D or 3D. If this is the only one solution, I get it, but I would like to have a solution for whatever dimension my arrays have. TIA.
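[Gael's suggestion above — stack the N arrays along a new axis and take numpy.mean over that axis — works for any number of dimensions, with no need to test whether the arrays are 2D or 3D. A small sketch; the shapes and the number of arrays are invented for illustration.]

```python
import numpy as np

# 100 same-shaped arrays; the trick is identical for 2D, 3D, or any ndim.
data = [np.random.rand(4, 5, 6) for _ in range(100)]

# np.array(data) stacks along a new leading axis, giving shape (100, 4, 5, 6);
# averaging over axis 0 yields the element-wise mean with the original shape.
data_mean = np.array(data).mean(axis=0)

assert data_mean.shape == data[0].shape
```

[Averaging over the new leading axis (axis=0) rather than a trailing one avoids any transposing, whatever the dimension of the member arrays.]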
-- http://scipy.org/FredericPetit From matthieu.brucher at gmail.com Thu Sep 6 08:40:39 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Sep 2007 14:40:39 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <46DFF39F.7050203@gmail.com> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> Message-ID: If the mean must be done on the last dimension, add axis=-1, it should work. Matthieu 2007/9/6, fred : > > Gael Varoquaux a écrit : > > On Thu, Sep 06, 2007 at 12:07:21PM +0200, fred wrote: > > > >> I want to compute the element wise mean of 2D or 3D arrays (~100). > >> > > > > Stack all these n-arrays along an n+1 dimension, and use the numpy.mean > > function, specifying the axis as n+1. > > > I can get it working by hand, if dimension is fixed, no problem. > But how can I do this for a n+1 dimension array ?: > - create the array; > - fill the array. > > I have one solution: test if my arrays are 2D or 3D. > If this is the only one solution, I get it, > but I would like to have a solution for whatever dimension > my arrays have. > > TIA. > > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Sep 6 08:47:06 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 6 Sep 2007 14:47:06 +0200 Subject: [SciPy-user] mean of arrays...
In-Reply-To: <46DFF39F.7050203@gmail.com> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> Message-ID: <20070906124706.GR20366@clipper.ens.fr> On Thu, Sep 06, 2007 at 02:33:35PM +0200, fred wrote: > Gael Varoquaux a écrit : > > On Thu, Sep 06, 2007 at 12:07:21PM +0200, fred wrote: > >> I want to compute the element wise mean of 2D or 3D arrays (~100). > > Stack all these n-arrays along an n+1 dimension, and use the numpy.mean > > function, specifying the axis as n+1. > I can get it working by hand, if dimension is fixed, no problem. > But how can I do this for a n+1 dimension array ?: > - create the array; > - fill the array. > I have one solution: test if my arrays are 2D or 3D. > If this is the only one solution, I get it, > but I would like to have a solution for whatever dimension > my arrays have. listmean = lambda l: concatenate([a[..., newaxis] for a in l], axis=-1).mean(axis=-1) :->. Gaël From gary.pajer at gmail.com Thu Sep 6 09:34:17 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 6 Sep 2007 09:34:17 -0400 Subject: [SciPy-user] mean of arrays... In-Reply-To: <20070906124706.GR20366@clipper.ens.fr> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> <20070906124706.GR20366@clipper.ens.fr> Message-ID: <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> On 9/6/07, Gael Varoquaux wrote: > On Thu, Sep 06, 2007 at 02:33:35PM +0200, fred wrote: > > Gael Varoquaux a écrit : > > > On Thu, Sep 06, 2007 at 12:07:21PM +0200, fred wrote: > > > >> I want to compute the element wise mean of 2D or 3D arrays (~100). > > > > > Stack all these n-arrays along an n+1 dimension, and use the numpy.mean > > > function, specifying the axis as n+1. [...] > listmean = lambda l: concatenate([a[..., newaxis] for a in l], > axis=-1).mean(axis=-1) > > :->. > > Gaël Am I missing something here?
doesn't numpy.mean() find the mean of all elements regardless of dimension? -gary From gael.varoquaux at normalesup.org Thu Sep 6 09:36:23 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 6 Sep 2007 15:36:23 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> <20070906124706.GR20366@clipper.ens.fr> <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> Message-ID: <20070906133622.GU20366@clipper.ens.fr> On Thu, Sep 06, 2007 at 09:34:17AM -0400, Gary Pajer wrote: > > listmean = lambda l: concatenate([a[..., newaxis] for a in l], > > axis=-1).mean(axis=-1) > Am I missing something here? > doesn't numpy.mean() find the mean of all elements regardless of dimension? That's what the "axis=-1" is for. Did I make an error ? This should be working, I tested it. Gaël From fredmfp at gmail.com Thu Sep 6 09:56:21 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 15:56:21 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <20070906133622.GU20366@clipper.ens.fr> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> <20070906124706.GR20366@clipper.ens.fr> <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> <20070906133622.GU20366@clipper.ens.fr> Message-ID: <46E00705.2020301@gmail.com> Gael Varoquaux a écrit : > On Thu, Sep 06, 2007 at 09:34:17AM -0400, Gary Pajer wrote: > >>> listmean = lambda l: concatenate([a[..., newaxis] for a in l], >>> axis=-1).mean(axis=-1) >>> > > >> Am I missing something here? >> doesn't numpy.mean() find the mean of all elements regardless of dimension? >> > > That's what the "axis=-1" is for. > > Did I make an error ? This should be working, I tested it.
> As my arrays are set in a list (data), here's my solution: data_mean = array(data).transpose().mean(axis=-1).transpose() Thanks to all. Cheers, -- http://scipy.org/FredericPetit From matthieu.brucher at gmail.com Thu Sep 6 10:04:02 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Sep 2007 16:04:02 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: <46E00705.2020301@gmail.com> References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> <20070906124706.GR20366@clipper.ens.fr> <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> <20070906133622.GU20366@clipper.ens.fr> <46E00705.2020301@gmail.com> Message-ID: > > As my arrays are set in a list (data), here's my solution: > > data_mean = array(data).transpose().mean(axis=-1).transpose() > > Thanks to all. > In that case, use axis = 0 instead of the transpositions. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu Sep 6 10:09:27 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 6 Sep 2007 10:09:27 -0400 Subject: [SciPy-user] mean of arrays... In-Reply-To: <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> References: <46DFD159.6070100@gmail.com><20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com><20070906124706.GR20366@clipper.ens.fr><88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> Message-ID: On Thu, 6 Sep 2007, Gary Pajer apparently wrote: > Am I missing something here? doesn't numpy.mean() find > the mean of all elements regardless of dimension? Right. The only thing you are missing is: where in the conversation was it specified that the OP wanted a mean along an axis rather than the mean of all elements. I missed it too, but such seems to be the case. 
Cheers, Alan Isaac >>> x array([[[1, 1], [2, 2]], [[3, 3], [4, 4]]]) >>> x.shape (2, 2, 2) >>> x.mean() 2.5 >>> x.mean(-1) array([[ 1., 2.], [ 3., 4.]]) From fredmfp at gmail.com Thu Sep 6 11:18:58 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 17:18:58 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: References: <46DFD159.6070100@gmail.com> <20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com> <20070906124706.GR20366@clipper.ens.fr> <88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> <20070906133622.GU20366@clipper.ens.fr> <46E00705.2020301@gmail.com> Message-ID: <46E01A62.7070307@gmail.com> Matthieu Brucher a écrit : > > In that case, use axis = 0 instead of the transpositions. Thanks ! -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Sep 6 11:21:24 2007 From: fredmfp at gmail.com (fred) Date: Thu, 06 Sep 2007 17:21:24 +0200 Subject: [SciPy-user] mean of arrays... In-Reply-To: References: <46DFD159.6070100@gmail.com><20070906101610.GL20366@clipper.ens.fr> <46DFF39F.7050203@gmail.com><20070906124706.GR20366@clipper.ens.fr><88fe22a0709060634o1080fce5g176b5d18be98aa18@mail.gmail.com> Message-ID: <46E01AF4.8000505@gmail.com> Alan G Isaac a écrit : > On Thu, 6 Sep 2007, Gary Pajer apparently wrote: >> Am I missing something here? doesn't numpy.mean() find >> the mean of all elements regardless of dimension? >> > > Right. The only thing you are missing is: > where in the conversation was it specified that > the OP wanted a mean along an axis rather than > the mean of all elements. I missed it too, but > such seems to be the case.
> You're right ;-) Cheers, -- http://scipy.org/FredericPetit From dominique.orban at gmail.com Thu Sep 6 12:29:12 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Thu, 6 Sep 2007 12:29:12 -0400 Subject: [SciPy-user] Solving an equation using scipy.optimize.newton In-Reply-To: <46DF5E95.8010800@gmail.com> References: <46DECC2F.3090000@gmail.com> <8793ae6e0709051433l6276f61ar23c621e027b1757e@mail.gmail.com> <46DF5E95.8010800@gmail.com> Message-ID: <8793ae6e0709060929t7f4aee1by13c6a7a1c72f4038@mail.gmail.com> On 9/5/07, fdu.xiaojf at gmail.com wrote: > > > Are there any trust region optimization method with python interface ? > > I can transform my problem to a bound constrained optimization problem, > but I still have to find an optimization solver who only evaluate my > equation in trust region([Min, Max]). Will lbfgsb or tnc meet the > requirement ? There is one in NLPy, but I am not sure trust regions are what you are looking for. Trust regions change with the iterations. What you have is bound constraints. Anyways, I am not sure what you are trying to minimize. Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Sep 6 14:43:36 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 6 Sep 2007 20:43:36 +0200 Subject: [SciPy-user] scipy r3303: test fails: AttributeError: 'test_pilutil' object has no attribute '_exc_info' In-Reply-To: <46DF00D1.1090307@gmx.net> References: <46DF00D1.1090307@gmx.net> Message-ID: <20070906184336.GC8994@mentat.za.net> Hi Steve On Wed, Sep 05, 2007 at 09:17:37PM +0200, Steve Schmerler wrote: > The first one: scipy.test() fails. > Attached is the output of scipy.test(). Before installing, I removed > any old build dir and install dir (site-packages/{numpy|scipy}) and everything. > I can send build/install logs of numpy/scipy if needed. 
> > Versions are: > In [2]: scipy.__version__ > Out[2]: '0.7.0.dev3303' > > In [3]: scipy.__numpy_version__ > Out[3]: '1.0.4.dev4026' Which version of Python are you using? On which platform? Stéfan From yosh_6 at yahoo.com Fri Sep 7 00:07:30 2007 From: yosh_6 at yahoo.com (Josh Gottlieb) Date: Thu, 6 Sep 2007 21:07:30 -0700 (PDT) Subject: [SciPy-user] problem with fmin_cobyla Message-ID: <289643.58571.qm@web52504.mail.re2.yahoo.com> Hey, A bit of a newbie to this, but I have a problem which requires a dynamic set of constraints (some of which are non-linear) and I tried two versions using fmin_cobyla (both examples attached)-- one generates these constraints using lambda functions, but fmin seems to violate them (example in code). Then I tried generating them on the fly using exec on strings of functions, which observed the constraints, but failed to find the most optimal solution. (the third permutation should be higher) Can anyone help? Could not find any examples online which were more than trivial, and the docs don't seem very good. (apologies for the cryptic coding, I tried to minimize a real-world example into a shorter script) Thanx in advance, Josh ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cobyla1.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: cobyla2.py URL: From peridot.faceted at gmail.com Fri Sep 7 00:26:45 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 7 Sep 2007 00:26:45 -0400 Subject: [SciPy-user] problem with fmin_cobyla In-Reply-To: <289643.58571.qm@web52504.mail.re2.yahoo.com> References: <289643.58571.qm@web52504.mail.re2.yahoo.com> Message-ID: On 07/09/07, Josh Gottlieb wrote: > Hey, > A bit of a newbie to this, but I have a problem which > requires a dynamic set of constraints (some of which > are non-linear) and I tried two versions using > fmin_cobyla (both examples attached)-- > one generates these constraints using lambda > functions, but fmin seems to violate them (example in > code). > Then I tried generating them on the fly using exec on > strings of functions, which observed the constraints, > but failed to find the most optimal solution. (the > third permutation should be higher) > Can anyone help? > Could not find any examples online which were more > than trivial, and the docs dont seem very good. > (apologies for the cryptic coding, I tried to minimize > a real-world example into a shorter script) Whipping up functions on the fly, with lambda or by using def() inside a function, is a perfectly reasonable way to implement constraints. You can also use a single constraint function that takes an extra argument to tell it which constraints it should use, and pass that extra argument in through fmin_cobyla. Be warned that it's a non-trivial technical problem to implement a constrained minimizer that never evaluates the function at a point that violates the constraints; fmin_cobyla may do this, and I think the other choices in scipy may as well. Finally, remember that all the fmin_* functions are only *local* minimizers - they are supposed to find points that are local minima of the function, but if the function is not concave up, it may have many minima, and the solver doesn't even try to arrange you wind up at the lowest. 
You can improve your chances by starting close to the minimum, but if there are several unknown minima, you need to look for a global optimizer, which is a much more difficult problem. Scipy has a couple of rudimentary ones. Good luck, Anne M. Archibald From elcorto at gmx.net Fri Sep 7 02:41:56 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 07 Sep 2007 08:41:56 +0200 Subject: [SciPy-user] scipy r3303: test fails: AttributeError: 'test_pilutil' object has no attribute '_exc_info' In-Reply-To: <20070906184336.GC8994@mentat.za.net> References: <46DF00D1.1090307@gmx.net> <20070906184336.GC8994@mentat.za.net> Message-ID: <46E0F2B4.70101@gmx.net> Stefan van der Walt wrote: > Hi Steve > > On Wed, Sep 05, 2007 at 09:17:37PM +0200, Steve Schmerler wrote: >> The first one: scipy.test() fails. >> Attached is the output of scipy.test(). Before installing, I removed >> any old build dir and install dir (site-packages/{numpy|scipy}) and everything. >> I can send build/install logs of numpy/scipy if needed. >> >> Versions are: >> In [2]: scipy.__version__ >> Out[2]: '0.7.0.dev3303' >> >> In [3]: scipy.__numpy_version__ >> Out[3]: '1.0.4.dev4026' > > Which version of Python are you using? On which platform? > > Stéfan > Oops, forgot that ... Python 2.4.4 (#2, Jul 21 2007, 11:00:24) [GCC 4.1.3 20070718 (prerelease) (Debian 4.1.2-14)] on linux2 TIA -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams From elcorto at gmx.net Fri Sep 7 02:42:38 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 07 Sep 2007 08:42:38 +0200 Subject: [SciPy-user] scipy r3303: from scipy import fftpack fails In-Reply-To: <46DFB863.7000503@ar.media.kyoto-u.ac.jp> References: <46DF0104.3050508@gmx.net> <46DF071D.4010500@gmail.com> <46DFB863.7000503@ar.media.kyoto-u.ac.jp> Message-ID: <46E0F2DE.6070103@gmx.net> David Cournapeau wrote: > Robert Kern wrote: >> Steve Schmerler wrote: >> >>> Hi >>> >>> This is the second problem I found (with 0.7.0.dev3303): >>> >>> $ python -c "from scipy import fftpack" >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line >>> 10, in ? >>> from basic import * >>> File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/basic.py", line >>> 13, in ? >>> import _fftpack as fftpack >>> ImportError: /usr/local/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: >>> undefined symbol: zfftnd_fftw >>> >> My apologies, I was wrong. This is one of our symbols, not one from the FFTW >> libraries. I'm not sure what could be wrong, here. >> >> > I should be the one to apologize. There was a typo when I cleaned up > some source; I thought I checked in the fix, but forgot to do it. This > should be fixed in r3306. > Thanks! Works now. -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From pepe_kawumi at yahoo.co.uk Fri Sep 7 03:00:14 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 7 Sep 2007 07:00:14 +0000 (GMT) Subject: [SciPy-user] eigenvalues and eigen vectors Message-ID: <605540.52842.qm@web27712.mail.ukl.yahoo.com> Hi, Im tring to solve an equation that will give me both the eigen values and the eigen vectors. It's returning the right eigen values but not all the eigen vectors are correct. Just don't know if I'm doing this the right way. 
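The generalized call itself is easy to sanity-check on a small pair of matrices (S and T below are made up for illustration, not Perez's data). scipy.linalg.eig accepts the second matrix directly, while numpy.linalg.eig does not:

```python
import numpy as np
from scipy import linalg

# Illustrative generalized problem  S v = lambda T v
S = np.array([[6.0, 0.0],
              [0.0, 4.0]])
T = np.array([[2.0, 0.0],
              [0.0, 4.0]])

# scipy.linalg.eig handles the generalized form; eigenvalues here are 3 and 1
eigvalues, eigvectors = linalg.eig(S, T)

# each returned *column* of eigvectors should satisfy S v = lambda T v,
# which is the check worth running when some vectors look wrong

# numpy.linalg.eig takes a single matrix, so the same problem needs an
# explicit reduction (valid only when T is invertible and well conditioned):
w_np = np.linalg.eigvals(np.linalg.solve(T, S))
```
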
This is what im doing [eigvalues,eigvectors] = linalg.eig(S,T) Thanks Perez ___________________________________________________________ Win a BlackBerry device from O2 with Yahoo!. Enter now. http://www.yahoo.co.uk/blackberry -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Fri Sep 7 03:03:52 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 7 Sep 2007 09:03:52 +0200 Subject: [SciPy-user] eigenvalues and eigen vectors In-Reply-To: <605540.52842.qm@web27712.mail.ukl.yahoo.com> References: <605540.52842.qm@web27712.mail.ukl.yahoo.com> Message-ID: <80c99e790709070003yac9d348oe4ae0297cc68220e@mail.gmail.com> can you post a piece of code? L. On 9/7/07, Perez Kawumi wrote: > > Hi, > Im tring to solve an equation that will give me both the eigen values and > the eigen vectors. It's returning the right eigen values but not all the > eigen vectors are correct. > Just don't know if I'm doing this the right way. This is what im doing > > [eigvalues,eigvectors] = linalg.eig(S,T) > > Thanks Perez > > ------------------------------ > *Yahoo! Photos*? > NEW, now offering a quality print servicefrom just 8p a photo. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Sep 7 03:09:21 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Sep 2007 16:09:21 +0900 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: References: Message-ID: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> Lucas Barbuto wrote: > Hi, > > Sorry to try to revive a thread from March, but I wonder if David or > Raphael got any further with the Solaris build and install. 
I've > been trying to build SciPy for Solaris9/x86 but I've got no prior > experience with it, so I've struggled to get anywhere either with the > general build instructions or trying to use the Sun Performance Library. > > Any pointers would be helpful, some kind of step-by-step would great > if such a thing exists? > > Did you manage to build numpy at least ? Could you provide us the exact steps you followed until the failure ? cheers, David From pepe_kawumi at yahoo.co.uk Fri Sep 7 03:27:48 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 7 Sep 2007 07:27:48 +0000 (GMT) Subject: [SciPy-user] generalised eigenvalues and corresponding eigen vectors Message-ID: <567253.95921.qm@web27707.mail.ukl.yahoo.com> Hi, Just thought I'd try and rephrase my question. For my problem S is an 84*84 matrix and T is also an 84*84 matrix. I want to find the eigen values and vectors of this matrix. Im not very conversant with python but this should return the generalized eigen values and their corresponding eigen vectors( i think). Sorry can't post code my file is quite long Im tring to solve an equation that will give me both the eigen values and the eigen vectors. It's returning the right eigen values but not all the eigen vectors are correct. Just don't know if I'm doing this the right way. This is what im doing [eigvalues,eigvectors] = linalg.eig(S,T) Thanks Perez ----- Original Message ---- From: lorenzo bolla To: SciPy Users List Sent: Friday, 7 September, 2007 9:03:52 AM Subject: Re: [SciPy-user] eigenvalues and eigen vectors can you post a piece of code? L. On 9/7/07, Perez Kawumi wrote: Hi, Im tring to solve an equation that will give me both the eigen values and the eigen vectors. It's returning the right eigen values but not all the eigen vectors are correct. Just don't know if I'm doing this the right way. This is what im doing [eigvalues,eigvectors] = linalg.eig(S,T) Thanks Perez Yahoo! Photos ? 
NEW, now offering a quality print service from just 8p a photo. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user ___________________________________________________________ Yahoo! Answers - Got a question? Someone out there knows the answer. Try it now. http://uk.answers.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Fri Sep 7 04:08:58 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 7 Sep 2007 10:08:58 +0200 Subject: [SciPy-user] generalised eigenvalues and corresponding eigen vectors In-Reply-To: <567253.95921.qm@web27707.mail.ukl.yahoo.com> References: <567253.95921.qm@web27707.mail.ukl.yahoo.com> Message-ID: <80c99e790709070108r6f071cf7n3a10b98071ec158b@mail.gmail.com> well, I think it's correct if you are using scipy.linalg.eig, but not if you are using numpy.linalg.eig. hth, L. On 9/7/07, Perez Kawumi wrote: > > Hi, > Just thought I'd try and rephrase my question. > For my problem S is an 84*84 matrix and T is also an 84*84 matrix. I want > to find the eigen values and vectors of this matrix. Im not very conversant > with python but this should return the generalized eigen values and their > corresponding eigen vectors( i think). > Sorry can't post code my file is quite long > > Im tring to solve an equation that will give me both the eigen values and > the eigen vectors. It's returning the right eigen values but not all the > eigen vectors are correct. > Just don't know if I'm doing this the right way. This is what im doing > > [eigvalues,eigvectors] = linalg.eig(S,T) > > Thanks Perez > > > ----- Original Message ---- > From: lorenzo bolla > To: SciPy Users List > Sent: Friday, 7 September, 2007 9:03:52 AM > Subject: Re: [SciPy-user] eigenvalues and eigen vectors > > can you post a piece of code? > L. 
> > > On 9/7/07, Perez Kawumi wrote: > > > > Hi, > > Im tring to solve an equation that will give me both the eigen values > > and the eigen vectors. It's returning the right eigen values but not all the > > eigen vectors are correct. > > Just don't know if I'm doing this the right way. This is what im doing > > > > [eigvalues,eigvectors] = linalg.eig(S,T) > > > > Thanks Perez > > > > ------------------------------ > > *Yahoo! Photos*? > > NEW, now offering a quality print service > > from > > just 8p a photo. > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > ------------------------------ > For ideas on reducing your carbon footprint visit Yahoo! For Goodthis month. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Fri Sep 7 04:46:05 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 07 Sep 2007 11:46:05 +0300 Subject: [SciPy-user] problem with fmin_cobyla In-Reply-To: <289643.58571.qm@web52504.mail.re2.yahoo.com> References: <289643.58571.qm@web52504.mail.re2.yahoo.com> Message-ID: <46E10FCD.4010400@ukr.net> So what's wrong with your example? According to my results obtained from the one (1st py-file), max constraint violation is 2.37501074363e-10 Don't you forget that fmin_cobyla uses c(x)>=0 constraints? (As for me it's one more reason to use universal frameworks that cut down such edges of solvers, no needs to study each one deeply). Regards, D. 
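Beyond the c(x) >= 0 sign convention, one Python pitfall worth ruling out in genCons-style code is that lambdas created inside a loop close over the loop *variables*, not their values at creation time -- so every constraint can end up evaluating with the last iteration's limits. It may or may not be the cause of the violated iFn printout, but it matches the symptom. A standalone illustration (no scipy needed):

```python
# Late binding: all three lambdas share the same `limit` variable,
# which holds 30 by the time any of them is called.
late = [lambda x: limit - x for limit in (10, 20, 30)]
broken = [f(5) for f in late]           # [25, 25, 25] -- not [5, 15, 25]

# Fix: freeze the current value with a default argument.
bound = [lambda x, limit=limit: limit - x for limit in (10, 20, 30)]
fixed = [f(5) for f in bound]           # [5, 15, 25]
```
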
Josh Gottlieb wrote: > Hey, > A bit of a newbie to this, but I have a problem which > requires a dynamic set of constraints (some of which > are non-linear) and I tried two versions using > fmin_cobyla (both examples attached)-- > one generates these constraints using lambda > functions, but fmin seems to violate them (example in > code). > Then I tried generating them on the fly using exec on > strings of functions, which observed the constraints, > but failed to find the most optimal solution. (the > third permutation should be higher) > Can anyone help? > Could not find any examples online which were more > than trivial, and the docs dont seem very good. > (apologies for the cryptic coding, I tried to minimize > a real-world example into a shorter script) > > Thanx in advance, > Josh > > > > ____________________________________________________________________________________ > Pinpoint customers who are looking for what you sell. > http://searchmarketing.yahoo.com/ > ------------------------------------------------------------------------ > > import calendar > import datetime > import numpy > from scipy.optimize import fmin_cobyla > > def Solve(): > maxS = 17000000 > dates = [datetime.date(2008,4,1),datetime.date(2008,5,1), datetime.date(2008,6,1)] > vals = [.024925574678, .128886905103,.0447355248121] > # this is a set of date lists, each one corresponding to one of the vals above and one of the unknown vars to solve for > permutes = [[dates[1],dates[0]],[dates[2],dates[0]],[dates[2],dates[1]]] > # daily limits (in the constraints, we use these at a monthly level) > upLim = [114033,114033,114033] > dnLim = [159646,159646,159646] > > allCons = genCons( dates,permutes,0,maxS,upLim,dnLim ) > k = fmin_cobyla(minFunc, [5000000 for x in permutes], allCons, args=(permutes, vals),consargs=(),rhobeg=10000,rhoend=500,iprint=3,maxfun=100000) > print permutes, vals > # change to max and print > print -1 * minFunc(k,permutes,vals) > return k > > def minFunc( x, permutes, 
vals ): > # we really need max, so we multiply by -1 > return sum([-1*x[k]*vals[k] for k in xrange(len(permutes))]) > > def genCons( dates, permutes, initial, maxS, upLim, dnLim ): > # generate constraints dynamically since normally we dont know how many dates there are > maxSet = [] > minSet = [] > iSet = [] > wSet = [] > signSet = [] > upperSet = [] > lowerSet = [] > fullRelList = [] > fullSignList = [] > for i,d in enumerate( dates ): > iLimit = upLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > wLimit = dnLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > relList = [ n for n,t in enumerate( permutes ) if d in t ] > signList = [ (t[0] == d and -1) or (t[1] == d and 1) for t in permutes if d in t ] > #print iLimit,wLimit > fullRelList.append(relList) > fullSignList.append(signList) > #print signList, relList > iFn = lambda x: iLimit-sum([x[k]*signList[t] for t,k in enumerate(relList)]) > # violated constraint!!! > print iFn([6500000, 4949026, 0]) > iSet.append(iFn) > wFn = lambda x: sum([x[k]*signList[t] for t,k in enumerate(relList)])+wLimit > wSet.append(wFn) > maxFn = lambda x: maxS-initial-sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))])) > maxSet.append(maxFn) > minFn = lambda x: sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))]))-(0-initial) > minSet.append(minFn) > signPairs = numpy.concatenate( [ [ (x,y,relList.index(x),relList.index(y)) for y in relList if y!=x ] for x in relList ] ).tolist() > for j,pair in enumerate( signPairs ): > signFn = lambda x: x[pair[0]]*signList[pair[2]]*x[pair[1]]*signList[pair[3]] > signSet.append(signFn) > for j,item in enumerate( permutes ): > iLimit = upLim[dates.index(item[0])] * (calendar.mdays[item[0].month]+(calendar.isleap(item[0].year) and item[0].month==2)) > wLimit = dnLim[dates.index(item[1])] * 
(calendar.mdays[item[1].month]+(calendar.isleap(item[1].year) and item[1].month==2)) > upperFn = lambda x: max(iLimit,wLimit) - x[j] > lowerFn = lambda x: x[j] > upperSet.append(upperFn) > lowerSet.append(lowerFn) > fullSet = iSet+wSet+maxSet+minSet+signSet+upperSet+lowerSet > #print fullSet, len(fullSet) > return fullSet > > ------------------------------------------------------------------------ > > import calendar > import datetime > import numpy > from scipy.optimize import fmin_cobyla > > def Solve(): > maxS = 17000000 > dates = [datetime.date(2008,4,1),datetime.date(2008,5,1), datetime.date(2008,6,1)] > vals = [.024925574678, .128886905103,.0447355248121] > # this is a set of date lists, each one corresponding to one of the vals above and one of the unknown vars to solve for > permutes = [[dates[1],dates[0]],[dates[2],dates[0]],[dates[2],dates[1]]] > # daily limits (in the constraints, we use these at a monthly level) > upLim = [114033,114033,114033] > dnLim = [159646,159646,159646] > > allCons = genCons( dates,permutes,0,maxS,upLim,dnLim ) > k = fmin_cobyla(minFunc, [5000000 for x in permutes], allCons, args=(permutes, vals),consargs=(),rhobeg=10000,rhoend=500,iprint=3,maxfun=100000) > print permutes, vals > # change to max and print > print -1 * minFunc(k,permutes,vals) > return k > > def minFunc( x, permutes, vals ): > # we really need max, so we multiply by -1 > return sum([-1*x[k]*vals[k] for k in xrange(len(permutes))]) > > def genCons( dates, permutes, initial, maxS, upLim, dnLim ): > # generate constraints dynamically since normally we dont know how many dates there are > maxSet = [] > minSet = [] > iSet = [] > wSet = [] > signSet = [] > upperSet = [] > lowerSet = [] > fullRelList = [] > fullSignList = [] > for i,d in enumerate( dates ): > iLimit = upLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > wLimit = dnLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > relList = [ n for n,t 
in enumerate( permutes ) if d in t ] > signList = [ (t[0] == d and -1) or (t[1] == d and 1) for t in permutes if d in t ] > #print iLimit,wLimit > fullRelList.append(relList) > fullSignList.append(signList) > #print signList, relList > iFn = lambda x: iLimit-sum([x[k]*signList[t] for t,k in enumerate(relList)]) > # violated constraint!!! > print iFn([6500000, 4949026, 0]) > iSet.append(iFn) > wFn = lambda x: sum([x[k]*signList[t] for t,k in enumerate(relList)])+wLimit > wSet.append(wFn) > maxFn = lambda x: maxS-initial-sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))])) > maxSet.append(maxFn) > minFn = lambda x: sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))]))-(0-initial) > minSet.append(minFn) > signPairs = numpy.concatenate( [ [ (x,y,relList.index(x),relList.index(y)) for y in relList if y!=x ] for x in relList ] ).tolist() > for j,pair in enumerate( signPairs ): > signFn = lambda x: x[pair[0]]*signList[pair[2]]*x[pair[1]]*signList[pair[3]] > signSet.append(signFn) > for j,item in enumerate( permutes ): > iLimit = upLim[dates.index(item[0])] * (calendar.mdays[item[0].month]+(calendar.isleap(item[0].year) and item[0].month==2)) > wLimit = dnLim[dates.index(item[1])] * (calendar.mdays[item[1].month]+(calendar.isleap(item[1].year) and item[1].month==2)) > upperFn = lambda x: max(iLimit,wLimit) - x[j] > lowerFn = lambda x: x[j] > upperSet.append(upperFn) > lowerSet.append(lowerFn) > fullSet = iSet+wSet+maxSet+minSet+signSet+upperSet+lowerSet > #print fullSet, len(fullSet) > return fullSet > > def main(): > return Solve() > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From markbak at gmail.com Fri Sep 7 07:55:30 2007 From: markbak at 
gmail.com (Mark Bakker) Date: Fri, 7 Sep 2007 13:55:30 +0200 Subject: [SciPy-user] bug in modified Bessel function iv In-Reply-To: <6946b9500709070453l77de7b20w8eb1dc0920d44453@mail.gmail.com> References: <6946b9500709070453l77de7b20w8eb1dc0920d44453@mail.gmail.com> Message-ID: <6946b9500709070455w3f6e6fe1q71ff9c66a6d6b04b@mail.gmail.com> Hello - It seems that the modified Bessel function iv returns incorrect values for large argument. iv(0,100) gives 5.72185663838e+041 while the equivalent jv(0,complex(0,100)) gives (1.07375170713e+042+0j). This latter result is the correct answer, as verified in Abramowitz and Stegun, Table 9.11, Page 428. Using jv is a work-around, but this should probably be fixed. The two results start to deviate for arguments over about 50, with jv giving the correct answer. On a related note, there are implementations for Bessel functions of integer order (jn, kn) but not for the modified Bessel function In. I guess this is because the function would be called 'in', but it would be nice to have a special function for integer order, and I am pretty sure they are around. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From markbak at gmail.com Fri Sep 7 08:03:23 2007 From: markbak at gmail.com (Mark Bakker) Date: Fri, 7 Sep 2007 14:03:23 +0200 Subject: [SciPy-user] eigenvalues and eigen vectors Message-ID: <6946b9500709070503t6072fbcah1ae9f1a6909fff02@mail.gmail.com> Is there a difference between numpy.linalg.eig and scipy.linalg.eig? I have been using numpy.linalg.eig quite a bit, and it seems to work quite well for me. They did change the eigenvectors (columns vs. rows) as compared to Numeric, but that was not a bad idea, Mark From: "lorenzo bolla" > > > well, I think it's correct if you are using scipy.linalg.eig, but not if > you > are using numpy.linalg.eig. > hth, > L. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yosh_6 at yahoo.com Fri Sep 7 08:15:26 2007 From: yosh_6 at yahoo.com (Josh Gottlieb) Date: Fri, 7 Sep 2007 05:15:26 -0700 (PDT) Subject: [SciPy-user] problem with fmin_cobyla Message-ID: <138390.21021.qm@web52503.mail.re2.yahoo.com> So in the first example (I didnt re-include the second one here)-- see the printout from the line 'print iFn([6500000, 4949026, 0])' (I used the rounded results that the solver put out as inputs) and it comes up with a negative number-- violating that constraint... The second example, as I said, does not violate any constraints, but is not the optimal solution, tho it seems that perhaps fmin doesnt look for the most minimal solution it can find? interestingly, I can replicate the second result even with more function calls, closer initial guesses, larger or smaller rho params, etc... so seems like it should be a real minima. Thanks, Josh >So what's wrong with your example? >According to my results obtained from the one (1st >py-file), max >constraint violation is 2.37501074363e-10 >Don't you forget that fmin_cobyla uses c(x)>=0 >constraints? (As for me >it's one more reason to use universal frameworks that >cut down such >edges of solvers, no needs to study each one deeply). >Regards, D. Josh Gottlieb wrote: > Hey, > A bit of a newbie to this, but I have a problem which > requires a dynamic set of constraints (some of which > are non-linear) and I tried two versions using > fmin_cobyla (both examples attached)-- > one generates these constraints using lambda > functions, but fmin seems to violate them (example in > code). > Then I tried generating them on the fly using exec on > strings of functions, which observed the constraints, > but failed to find the most optimal solution. (the > third permutation should be higher) > Can anyone help? > Could not find any examples online which were more > than trivial, and the docs dont seem very good. 
> (apologies for the cryptic coding, I tried to minimize > a real-world example into a shorter script) > > Thanx in advance, > Josh > > > > ____________________________________________________________________________________ > Pinpoint customers who are looking for what you sell. > http://searchmarketing.yahoo.com/ > ------------------------------------------------------------------------ > > import calendar > import datetime > import numpy > from scipy.optimize import fmin_cobyla > > def Solve(): > maxS = 17000000 > dates = [datetime.date(2008,4,1),datetime.date(2008,5,1), datetime.date(2008,6,1)] > vals = [.024925574678, .128886905103,.0447355248121] > # this is a set of date lists, each one corresponding to one of the vals above and one of the unknown vars to solve for > permutes = [[dates[1],dates[0]],[dates[2],dates[0]],[dates[2],dates[1]]] > # daily limits (in the constraints, we use these at a monthly level) > upLim = [114033,114033,114033] > dnLim = [159646,159646,159646] > > allCons = genCons( dates,permutes,0,maxS,upLim,dnLim ) > k = fmin_cobyla(minFunc, [5000000 for x in permutes], allCons, args=(permutes, vals),consargs=(),rhobeg=10000,rhoend=500,iprint=3,maxfun=100000) > print permutes, vals > # change to max and print > print -1 * minFunc(k,permutes,vals) > return k > > def minFunc( x, permutes, vals ): > # we really need max, so we multiply by -1 > return sum([-1*x[k]*vals[k] for k in xrange(len(permutes))]) > > def genCons( dates, permutes, initial, maxS, upLim, dnLim ): > # generate constraints dynamically since normally we dont know how many dates there are > maxSet = [] > minSet = [] > iSet = [] > wSet = [] > signSet = [] > upperSet = [] > lowerSet = [] > fullRelList = [] > fullSignList = [] > for i,d in enumerate( dates ): > iLimit = upLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > wLimit = dnLim[i] * ( calendar.mdays[d.month] + ( calendar.isleap(d.year) and d.month == 2 ) ) > relList = [ n for n,t in 
enumerate( permutes ) if d in t ] > signList = [ (t[0] == d and -1) or (t[1] == d and 1) for t in permutes if d in t ] > #print iLimit,wLimit > fullRelList.append(relList) > fullSignList.append(signList) > #print signList, relList > iFn = lambda x: iLimit-sum([x[k]*signList[t] for t,k in enumerate(relList)]) > # violated constraint!!! > print iFn([6500000, 4949026, 0]) > iSet.append(iFn) > wFn = lambda x: sum([x[k]*signList[t] for t,k in enumerate(relList)])+wLimit > wSet.append(wFn) > maxFn = lambda x: maxS-initial-sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))])) > maxSet.append(maxFn) > minFn = lambda x: sum(numpy.concatenate([[x[k]*fullSignList[j][t] for t,k in enumerate(fullRelList[j])] for j in xrange(len(fullRelList))]))-(0-initial) > minSet.append(minFn) > signPairs = numpy.concatenate( [ [ (x,y,relList.index(x),relList.index(y)) for y in relList if y!=x ] for x in relList ] ).tolist() > for j,pair in enumerate( signPairs ): > signFn = lambda x: x[pair[0]]*signList[pair[2]]*x[pair[1]]*signList[pair[3]] > signSet.append(signFn) > for j,item in enumerate( permutes ): > iLimit = upLim[dates.index(item[0])] * (calendar.mdays[item[0].month]+(calendar.isleap(item[0].year) and item[0].month==2)) > wLimit = dnLim[dates.index(item[1])] * (calendar.mdays[item[1].month]+(calendar.isleap(item[1].year) and item[1].month==2)) > upperFn = lambda x: max(iLimit,wLimit) - x[j] > lowerFn = lambda x: x[j] > upperSet.append(upperFn) > lowerSet.append(lowerFn) > fullSet = iSet+wSet+maxSet+minSet+signSet+upperSet+lowerSet > #print fullSet, len(fullSet) > return fullSet ____________________________________________________________________________________ Luggage? GPS? Comic books? Check out fitting gifts for grads at Yahoo! 
Search http://search.yahoo.com/search?fr=oni_on_mail&p=graduation+gifts&cs=bz From matthieu.brucher at gmail.com Fri Sep 7 08:22:18 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 7 Sep 2007 14:22:18 +0200 Subject: [SciPy-user] problem with fmin_cobyla In-Reply-To: <138390.21021.qm@web52503.mail.re2.yahoo.com> References: <138390.21021.qm@web52503.mail.re2.yahoo.com> Message-ID: > > The second example, as I said, does not violate any > constraints, but is not the optimal solution, tho it > seems that perhaps fmin doesnt look for the most > minimal solution it can find? interestingly, I can > replicate the second result even with more function > calls, closer initial guesses, larger or smaller rho > params, etc... so seems like it should be a real > minima. > The usual optimization procedures are only local optimizers, they cannot find the global optimizer, only a local one. Global optimization is a very hard field as finding the correct set of parameters requires most of the time an exhaustive search. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Fri Sep 7 09:05:11 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 07 Sep 2007 16:05:11 +0300 Subject: [SciPy-user] problem with fmin_cobyla In-Reply-To: <138390.21021.qm@web52503.mail.re2.yahoo.com> References: <138390.21021.qm@web52503.mail.re2.yahoo.com> Message-ID: <46E14C87.9000400@ukr.net> Josh Gottlieb wrote: > So in the first example (I didnt re-include the second > one here)-- see the printout from the line 'print > iFn([6500000, 4949026, 0])' (I used the rounded > results that the solver put out as inputs) and it > comes up with a negative number-- violating that > constraint... > I don't know what those iFn([6500000, 4949026, 0]) means but I just add 2 lines after k = fmin_cobyla(...) 
(here k is optimal solution obtained by cobyla) constraints = [allCons[i](k) for i in xrange(len(allCons))] print 'max v:', min(constraints) so it prints max v: -2.37501074363e-10 and hence all constraints are positive, as it is required by cobyla. Regards, D. From Peter.Bienstman at ugent.be Fri Sep 7 09:07:28 2007 From: Peter.Bienstman at ugent.be (Peter Bienstman) Date: Fri, 7 Sep 2007 15:07:28 +0200 Subject: [SciPy-user] spline problem Message-ID: <200709071507.31417.Peter.Bienstman@ugent.be> Hi, I'm trying to do spline interpolation of my data. When I try the final example of http://www.scipy.org/Cookbook/Interpolation, everything works fine. However, as soon as I adapt it to use my own data, I get this Traceback (most recent call last): File "test.py", line 26, in ? tckp,u = splprep([x,y],s=s,k=k,nest=-1,quiet=0) File "/usr/lib64/python2.4/site-packages/scipy/interpolate/fitpack.py", line 223, in splprep nest,wrk,iwrk,per) SystemError: error return without exception set This is on a core duo, with the latest scipy. 
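(For comparison, the same splprep/splev sequence runs cleanly on smooth synthetic data -- the points below are made up -- which suggests the failure is specific to the data or parameter choices rather than to the call pattern itself:)

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Made-up smooth test curve standing in for the digitized outline
x = np.linspace(0.0, 10.0, 40)
y = np.sin(x)

# Parametric spline through the points; s=0 interpolates exactly,
# k=3 is a cubic (the thread's script used s=3.0, k=2)
tckp, u = splprep([x, y], s=0.0, k=3)

# Dense evaluation along the normalized curve parameter
xnew, ynew = splev(np.linspace(0, 1, 200), tckp)
```
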
Here is the code: =============== from numpy import * from scipy.interpolate import splprep, splev x = array([10807., 10806, 10808, 10810, 10812, 10814, 10817, 10821, 10825, 10830, 10835, 10839, 10842, 10846, 10849, 10851, 10853, 10855, 10855, 10855, 10854, 10852, 10851, 10847, 10844, 10841, 10839, 10837, 10837, 10835, 10835, 10835, 10837, 10838, 10841, 10843, 10845, 10847, 10848, 10847, 10846, 10843, 10838, 10833, 10825, 10819, 10809, 10801, 10791, 10782, 10774, 10759, 10749, 10738, 10729, 10721, 10714, 10706, 10701, 10696, 10693, 10691, 10693, 10693, 10693, 10698, 10699, 10704, 10670, 10682, 10690, 10696, 10696]) y = array([272., 272, 272, 272, 272, 273, 274, 279, 279, 281, 282, 284, 284, 284, 284, 286, 285, 288, 289, 289, 291, 296, 300, 305, 313, 324, 334, 343, 351, 357, 362, 367, 372, 376, 379, 381, 383, 383, 383, 383, 382, 381, 381, 381, 383, 387, 391, 398, 403, 411, 416, 421, 426, 430, 433, 434, 431, 424, 416, 404, 390, 377, 365, 354, 345, 338, 332, 329, 325, 325, 323, 317, 317]) # spline parameters s=3.0 # smoothness parameter k=2 # spline order nest=-1 # estimate of number of knots needed (-1 = maximal) # find the knot points tckp,u = splprep([x,y],s=s,k=k,nest=-1,quiet=0) # evaluate spline, including interpolated points xnew,ynew = splev(linspace(0,1,400),tckp) import pylab data,=pylab.plot(x,y,'bo-',label='data') fit,=pylab.plot(xnew,ynew,'r-',label='fit') pylab.legend() pylab.xlabel('x') pylab.ylabel('y') pylab.show() Thanks! Peter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 307 bytes Desc: not available URL: From unpingco at osc.edu Fri Sep 7 09:41:01 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Fri, 07 Sep 2007 09:41:01 -0400 Subject: [SciPy-user] Linear algebra benchmarks in SciPy versus MATLAB In-Reply-To: References: Message-ID: <46E0F278.AA84.0083.0@osc.edu> After some excellent feedback from the participants on this list, I have revised and corrected my original benchmark results. See the following link: https://www.osc.edu/blogs/index.php/sip/2007/08/30/p37 It would be great if somebody could independently run these benchmarks. Constructive criticism is always appreciated. Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Peter.Bienstman at ugent.be Fri Sep 7 11:10:28 2007 From: Peter.Bienstman at ugent.be (Peter Bienstman) Date: Fri, 7 Sep 2007 17:10:28 +0200 Subject: [SciPy-user] weave doesn't recognise complex? Message-ID: <200709071710.31970.Peter.Bienstman@ugent.be> Consider the following script: ------------------ from numpy import * from scipy.weave import inline alpha = sqrt(2) - 1j inline("1.0 / alpha;", ['alpha']) ------------------- This doesn't compile: /home/pbienst/.python24_compiled/sc_f7fc5c122cc2b740c6482ea58b8bdeb10.cpp:663: error: ambiguous overload for 'operator/' in '1.0e+0 / alpha' Looking in the generated code, it seems weave hasn't detected that alpha is a complex: py::object alpha = convert_to_catchall(py_alpha,"alpha"); What does work is changing sqrt(2) by 1.41 or replacing 'from numpy import *' by 'from math import *' Cheers, Peter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 307 bytes Desc: not available URL: From millman at berkeley.edu Fri Sep 7 16:27:24 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 7 Sep 2007 15:27:24 -0500 Subject: [SciPy-user] missing info.py in odr, scipy 0.5.2.1 In-Reply-To: References: Message-ID: Sorry about that. I will be releasing 0.6.0 sometime next week, which has the info.py file. Jarrod On 9/5/07, Christian K wrote: > > Hi, > > I think the info.py file is missing in the odr dir of scipy 0.5.2.1: > > Python 2.5.1 (r251:54863, May 2 2007, 16:56:35) > [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.__version__ > '0.5.2.1' > >>> from scipy import odr > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.5/site-packages/scipy/odr/__init__.py", line 5, in > from info import __doc__ > ImportError: No module named info > > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From eric at enthought.com Fri Sep 7 18:47:30 2007 From: eric at enthought.com (eric jones) Date: Fri, 07 Sep 2007 17:47:30 -0500 Subject: [SciPy-user] weave doesn't recognise complex? In-Reply-To: <200709071710.31970.Peter.Bienstman@ugent.be> References: <200709071710.31970.Peter.Bienstman@ugent.be> Message-ID: <46E1D502.4010901@enthought.com> Here is the issue. This one isn't detected correctly by weave. 
In [23]: a = sqrt(2) - 1j In [24]: type(a) Out[24]: <type 'numpy.complex128'> While this one is: In [25]: a=1+1j In [26]: type(a) Out[26]: <type 'complex'> In [27]: weave.inline("return_val = 1.0/a; ", ['a']) /home/eric/.python25_compiled/sc_76f8a7fbb0d61e690b0da4302d274e260.cpp:5: warning: ignoring #pragma warning /home/eric/.python25_compiled/sc_76f8a7fbb0d61e690b0da4302d274e260.cpp:6: warning: ignoring #pragma warning /home/eric/.python25_compiled/sc_76f8a7fbb0d61e690b0da4302d274e260.cpp: In function 'PyObject* file_to_py(FILE*, char*, char*)': /home/eric/.python25_compiled/sc_76f8a7fbb0d61e690b0da4302d274e260.cpp:403: warning: unused variable 'py_obj' Out[27]: (0.5-0.5j) Notice the types are different because sqrt returns a numpy scalar type. So, it looks like weave isn't detecting and converting the numpy types correctly. A ticket has been created. http://scipy.org/scipy/scipy/ticket/496 As a stop-gap until this is fixed, you can cast the value back to a complex before calling weave. eric Peter Bienstman wrote: > Consider the following script: > > ------------------ > from numpy import * > from scipy.weave import inline > > alpha = sqrt(2) - 1j > > inline("1.0 / alpha;", ['alpha']) > ------------------- > > This doesn't compile: > /home/pbienst/.python24_compiled/sc_f7fc5c122cc2b740c6482ea58b8bdeb10.cpp:663: > error: ambiguous overload for 'operator/' in '1.0e+0 / alpha' > > Looking in the generated code, it seems weave hasn't detected that alpha is a > complex: > > py::object alpha = convert_to_catchall(py_alpha,"alpha"); > > What does work is changing sqrt(2) to 1.41 or replacing 'from numpy import *' > with 'from math import *' > > Cheers, > > Peter > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pepe_kawumi at yahoo.co.uk Sat Sep 8 07:15:42 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Sat, 8
Sep 2007 11:15:42 +0000 (GMT) Subject: [SciPy-user] returning floats from a function Message-ID: <648127.10189.qm@web27703.mail.ukl.yahoo.com> Hi, I'm having problems returning floating point numbers in the whitney_function matrices at the end of my program. The answers are being rounded off to whole numbers. I think it might be the way I'm defining my whitney_functions initially (the first highlighted block). Please help. Thanks Perez def whitney(elem_num,x,y): global ELEMENTS, NODE_COORD whitney_functions = [] whitney_functions = zeros((3,2)) perez = simplex2D(elem_num,x,y) trinodes = ELEMENTS[[elem_num],:]-1 x1 = NODE_COORD[trinodes[0,0],0] y1 = NODE_COORD[trinodes[0,0],1] x2 = NODE_COORD[trinodes[0,1],0] y2 = NODE_COORD[trinodes[0,1],1] x3 = NODE_COORD[trinodes[0,2],0] y3 = NODE_COORD[trinodes[0,2],1] #area = 0.5*abs(det([1 x1 y1; 1 x2 y2; 1 x3 y3])); temp = linalg.inv(([x1,x2,x3],[y1,y2,y3],[1,1,1])) b = temp[:,[0]] c = temp[:,[1]] nabla_lambda = temp[:,0:2] whitney_functions[[0],:] = perez[0]*nabla_lambda[[1],:] - perez[1]*nabla_lambda[[0],:] whitney_functions[[1],:] = perez[0]*nabla_lambda[[2],:] - perez[2]*nabla_lambda[[0],:] whitney_functions[[2],:] = perez[1]*nabla_lambda[[2],:] - perez[2]*nabla_lambda[[1],:] return whitney_functions ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gael.varoquaux at normalesup.org Sat Sep 8 07:18:21 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 8 Sep 2007 13:18:21 +0200 Subject: [SciPy-user] returning floats from a function In-Reply-To: <648127.10189.qm@web27703.mail.ukl.yahoo.com> References: <648127.10189.qm@web27703.mail.ukl.yahoo.com> Message-ID: <20070908111821.GC29009@clipper.ens.fr> You probably have an old version of scipy, where "zeros" returns an array of ints by default. You can force it to return floats by using it like this: zeros((3,2), dtype="f") HTH, Gaël On Sat, Sep 08, 2007 at 11:15:42AM +0000, Perez Kawumi wrote: > def whitney(elem_num,x,y): > global ELEMENTS, NODE_COORD > whitney_functions = [] > whitney_functions = zeros((3,2)) > perez = simplex2D(elem_num,x,y) > trinodes = ELEMENTS[[elem_num],:]-1 > x1 = NODE_COORD[trinodes[0,0],0] > y1 = NODE_COORD[trinodes[0,0],1] > x2 = NODE_COORD[trinodes[0,1],0] > y2 = NODE_COORD[trinodes[0,1],1] > x3 = NODE_COORD[trinodes[0,2],0] > y3 = NODE_COORD[trinodes[0,2],1] > #area = 0.5*abs(det([1 x1 y1; 1 x2 y2; 1 x3 y3])); > temp = linalg.inv(([x1,x2,x3],[y1,y2,y3],[1,1,1])) > b = temp[:,[0]] > c = temp[:,[1]] > nabla_lambda = temp[:,0:2] > whitney_functions[[0],:] = perez[0]*nabla_lambda[[1],:] - > perez[1]*nabla_lambda[[0],:] > whitney_functions[[1],:] = perez[0]*nabla_lambda[[2],:] - > perez[2]*nabla_lambda[[0],:] > whitney_functions[[2],:] = perez[1]*nabla_lambda[[2],:] - > perez[2]*nabla_lambda[[1],:] > return whitney_functions From fredmfp at gmail.com Sat Sep 8 09:48:28 2007 From: fredmfp at gmail.com (fred) Date: Sat, 08 Sep 2007 15:48:28 +0200 Subject: [SciPy-user] arrays mean & NaN... Message-ID: <46E2A82C.5070707@gmail.com> Hi, When I compute the mean of several arrays (the mean is an array), and if one or several of these arrays has NaN in a given cell, the mean of this cell is also NaN. I would like to have the mean of all the non-NaN values in this cell. How could I do this ?
PS: To be more efficient, I don't mind if it is not the "right" mean, i.e. divided by the number of arrays rather than by the real number of non-NaN values. But, just curious, I'm also interested in a solution for the right mean ;-) TIA Cheers, -- http://scipy.org/FredericPetit From matthieu.brucher at gmail.com Sat Sep 8 09:52:21 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 8 Sep 2007 15:52:21 +0200 Subject: [SciPy-user] arrays mean & NaN... In-Reply-To: <46E2A82C.5070707@gmail.com> References: <46E2A82C.5070707@gmail.com> Message-ID: 2007/9/8, fred : > > Hi, > > When I compute the mean of several arrays (the mean is an array), > and if one or several of these arrays has NaN in a given cell, the > mean of this cell is also NaN. > > I would like to have the mean of all the non-NaN values in this cell. > > How could I do this ? > > PS: To be more efficient, I don't mind if it is not the "right" mean, i.e. > divided by the number of arrays rather than by the real number of > non-NaN values. > > But, just curious, I'm also interested in a solution for the right mean > ;-) Hi, You could use numpy.nansum()/numpy.sum(~numpy.isnan()) or something like this ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Sat Sep 8 10:21:39 2007 From: fredmfp at gmail.com (fred) Date: Sat, 08 Sep 2007 16:21:39 +0200 Subject: [SciPy-user] arrays mean & NaN... In-Reply-To: References: <46E2A82C.5070707@gmail.com> Message-ID: <46E2AFF3.9060507@gmail.com> Matthieu Brucher a écrit : > You could use numpy.nansum()/numpy.sum(~numpy.isnan()) or something > like this ? Arf, did not even think that there was a sum for nan, although I do know & use nanmin() & nanmax(). Sorry for the noise, I'm working on too many things at the same time ;-) Thanks. -- http://scipy.org/FredericPetit From eric at enthought.com Sat Sep 8 14:42:48 2007 From: eric at enthought.com (eric) Date: Sat, 08 Sep 2007 13:42:48 -0500 Subject: [SciPy-user] arrays mean & NaN...
In-Reply-To: <46E2A82C.5070707@gmail.com> References: <46E2A82C.5070707@gmail.com> Message-ID: <46E2ED28.9000108@enthought.com> Travis O., Robert K., and I were discussing adding nanmean, etc. to the nanmin, nanmax methods earlier this week... Sounds like others might find this useful as well. eric fred wrote: > Hi, > > When I compute the mean of several arrays (the mean is an array), > and if one or several of these arrays has NaN in a given cell, the > mean of this cell is also NaN. > > I would like to have the mean of all the non-NaN values in this cell. > > How could I do this ? > > PS: To be more efficient, I don't mind if it is not the "right" mean, i.e. > divided by the number of arrays rather than by the real number of > non-NaN values. > > But, just curious, I'm also interested in a solution for the right mean ;-) > > TIA > > Cheers, > > From fredmfp at gmail.com Sat Sep 8 18:43:31 2007 From: fredmfp at gmail.com (fred) Date: Sun, 09 Sep 2007 00:43:31 +0200 Subject: [SciPy-user] arrays mean & NaN... In-Reply-To: <46E2ED28.9000108@enthought.com> References: <46E2A82C.5070707@gmail.com> <46E2ED28.9000108@enthought.com> Message-ID: <46E32593.5000001@gmail.com> eric a écrit : > Travis O., Robert K., and I were discussing adding nanmean, etc. to the > nanmin, nanmax methods earlier this week... > > Sounds like others might find this useful as well. > Should be great ;-) Cheers, -- http://scipy.org/FredericPetit From lucasjb at csse.unimelb.edu.au Fri Sep 7 05:01:51 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Fri, 7 Sep 2007 19:01:51 +1000 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> Message-ID: <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> On 07/09/2007, at 5:09 PM, David Cournapeau wrote: > Did you manage to build numpy at least ? Could you provide us the > exact > steps you followed until the failure ?
Yes, NumPy 1.0.3 has been built and installed separately but without reference to any optimised BLAS or LAPACK libraries so I assume that "a slower default version is used". If I try to rebuild NumPy referencing Sun's Performance Library I have the same problems as with SciPy, so I suppose if I solve one, I'll have solved the other! So, I've unpacked Sun Studio 12 and the interesting bits live in /local as per below. I've got a pretty basic environment which finds GCC 3.4.5 as my default C compiler and G77 as my default Fortran compiler. I don't have ATLAS installed. My site.cfg: [DEFAULT] library_dirs = /local/cat2/apps-archive/SUNWspro-12/prod/lib:/usr/local/lib include_dirs = /local/cat2/apps-archive/SUNWspro-12/prod/include:/usr/local/include [blas_opt] blas_libs = sunperf [lapack_opt] lapack_libs = sunperf And then I simply run 'python setup.py build'. The config script appears to find libsunperf.a OK... blas_info: FOUND: libraries = ['blas'] library_dirs = ['/local/cat2/apps-archive/SUNWspro-12/prod/lib'] language = f77 FOUND: libraries = ['blas'] library_dirs = ['/local/cat2/apps-archive/SUNWspro-12/prod/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 ... and similarly for LAPACK. But I get symbol reference errors, output pasted below. Hope that's the appropriate information!
-- Lucas Barbuto building 'numpy.core._dotblas' extension compiling C sources C compiler: gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC creating build/temp.solaris-2.9-i86pc-2.5/numpy/core/blasdot compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot -Inumpy/core/ include -Ibuild/src.solaris-2.9-i86pc-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/local/apps/python-2.5.0/include/python2.5 -c' gcc: numpy/core/blasdot/_dotblas.c /usr/local/bin/g77 build/temp.solaris-2.9-i86pc-2.5/numpy/core/ blasdot/_dotblas.o -L/local/cat2/apps-archive/SUNWspro-12/prod/lib -L/ local/solaris86/apps/gcc-3.4.5/bin/../lib/gcc/i386-pc- solaris2.9/3.4.5 -lblas -lg2c -o build/lib.solaris-2.9-i86pc-2.5/ numpy/core/_dotblas.so Undefined first referenced symbol in file PyExc_ImportError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyCObject_AsVoidPtr build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyArg_ParseTuple build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyExc_RuntimeError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyEval_SaveThread build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyObject_GetAttrString build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __f95_error_message_and_abort /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(caxpy.o) PyExc_ValueError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o MAIN__ /local/solaris86/apps/gcc-3.4.5/ bin/../lib/gcc/i386-pc-solaris2.9/3.4.5/../../../libfrtbegin.a (frtbegin.o) PyErr_SetString build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __mt_get_next_chunk_invoke_mfunc_once_int_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemv.o) PyErr_Format build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyCObject_Type build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyTuple_New build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyErr_Print 
build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __omp_in_parallel_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(using_threads.o) __f90_allocate2 /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemm.o) PyImport_ImportModule build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o Py_InitModule4 build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o _Py_NoneStruct build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __omp_get_max_threads_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(using_threads.o) __mt_MasterFunction_rtc_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemv.o) __f90_deallocate /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemm.o) PyEval_RestoreThread build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o ld: fatal: Symbol referencing errors. No output written to build/ lib.solaris-2.9-i86pc-2.5/numpy/core/_dotblas.so collect2: ld returned 1 exit status Undefined first referenced symbol in file PyExc_ImportError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyCObject_AsVoidPtr build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyArg_ParseTuple build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyExc_RuntimeError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyEval_SaveThread build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyObject_GetAttrString build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __f95_error_message_and_abort /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(caxpy.o) PyExc_ValueError build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o MAIN__ /local/solaris86/apps/gcc-3.4.5/ bin/../lib/gcc/i386-pc-solaris2.9/3.4.5/../../../libfrtbegin.a (frtbegin.o) PyErr_SetString build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __mt_get_next_chunk_invoke_mfunc_once_int_ /local/cat2/apps-archive/ 
SUNWspro-12/prod/lib/libblas.a(cgemv.o) PyErr_Format build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyCObject_Type build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyTuple_New build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o PyErr_Print build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __omp_in_parallel_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(using_threads.o) __f90_allocate2 /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemm.o) PyImport_ImportModule build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o Py_InitModule4 build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o _Py_NoneStruct build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o __omp_get_max_threads_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(using_threads.o) __mt_MasterFunction_rtc_ /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemv.o) __f90_deallocate /local/cat2/apps-archive/ SUNWspro-12/prod/lib/libblas.a(cgemm.o) PyEval_RestoreThread build/temp.solaris-2.9-i86pc-2.5/ numpy/core/blasdot/_dotblas.o ld: fatal: Symbol referencing errors. No output written to build/ lib.solaris-2.9-i86pc-2.5/numpy/core/_dotblas.so collect2: ld returned 1 exit status From david at ar.media.kyoto-u.ac.jp Mon Sep 10 02:30:30 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 10 Sep 2007 15:30:30 +0900 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> Message-ID: <46E4E486.100@ar.media.kyoto-u.ac.jp> Lucas Barbuto wrote: > On 07/09/2007, at 5:09 PM, David Cournapeau wrote: >> Did you manage to build numpy at least ? Could you provide us the >> exact >> steps you followed until the failure ? 
> > Yes, NumPy 1.0.3 has been built and installed separately but without > reference to any optimised BLAS or LAPACK libraries so I assume that > "a slower default version is used". If I try to rebuild NumPy > referencing Sun's Performance Library I have the same problems as > with SciPy, so I suppose if I solve one, I'll have solved the other! > > So, I've unpacked Sun Studio 12 and the interesting bits live in > /local as per below. I've got a pretty basic environment which finds > GCC 3.4.5 as my default C compiler and G77 as my default Fortran > compiler. I don't have ATLAS installed. Do I understand correctly that you want to compile numpy/scipy with gcc, using sunperf ? I am not familiar with non-gnu devtools under solaris, so I don't know if sunperf libraries are supposed to work with gcc ? My main guess, though, would be that sunperf requires more than just the -lblas option to link; generally, you need some other link flags. Since the default error message of the linker is not explanatory, we need more info. What does nm /local/cat2/apps-archive/SUNWspro-12/prod/lib/libblas.so return (assuming libblas.so is the name of the library) ? cheers, David From stefan at sun.ac.za Mon Sep 10 03:32:27 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 10 Sep 2007 09:32:27 +0200 Subject: [SciPy-user] spline problem In-Reply-To: <200709071507.31417.Peter.Bienstman@ugent.be> References: <200709071507.31417.Peter.Bienstman@ugent.be> Message-ID: <20070910073227.GA10568@mentat.za.net> Hi Peter On Fri, Sep 07, 2007 at 03:07:28PM +0200, Peter Bienstman wrote: > I'm trying to do spline interpolation of my data. When I try the final example > of http://www.scipy.org/Cookbook/Interpolation, everything works fine. > However, as soon as I adapt it to use my own data, I get this > > Traceback (most recent call last): > File "test.py", line 26, in ?
> tckp,u = splprep([x,y],s=s,k=k,nest=-1,quiet=0) > File "/usr/lib64/python2.4/site-packages/scipy/interpolate/fitpack.py", line > 223, in splprep > nest,wrk,iwrk,per) > SystemError: error return without exception set Your code contains duplicate data points. You can find them by doing mask = ((diff(x) == 0) & (diff(y) == 0)) print x[mask] print y[mask] If I change or remove those values, everything works well (see attached). Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed... Name: peter.py Type: text/x-python Size: 1798 bytes Desc: not available URL: From aisaac at american.edu Mon Sep 10 11:07:23 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 10 Sep 2007 11:07:23 -0400 Subject: [SciPy-user] announcement: OpenOpt and GenericOpt Message-ID: OpenOpt and GenericOpt ====================== Introducing two new optimization packages. OpenOpt and GenericOpt are 100% Python with a single dependency: NumPy. For more detail see below and also OpenOpt ------- OpenOpt is a new open source optimization framework. OpenOpt is released under the BSD license. The primary author and current maintainer of OpenOpt is Dmitrey Kroshko (Optimization Department, Cybernetics Institute, Ukrainian Science Academy) OpenOpt goal: provide an open source alternative to TomOpt TOMLAB (optimization framework for MATLAB) and related optimization frameworks. Currently OpenOpt offers connections to a variety of open source solvers, primarily for unconstrained optimization. (See below.) OpenOpt provides connections to a variety of solvers, including those in GenericOpt, which are bundled. (Users will need to download other solvers; we provide URLs for the downloads.) GenericOpt ---------- GenericOpt is a toolkit for building specialized optimizers. GenericOpt is released under the BSD license.
The primary author and current maintainer of GenericOpt is Matthieu Brucher GenericOpt goal: provide an open source, extensible toolkit for "component-wise" construction of specialized optimizers. GenericOpt allows users who want detailed control to construct their own solvers by choosing among a variety of algorithm components (currently, most choices are among step and line-search algorithms). Usage: see Matthieu Brucher's tutorial. Limitation: currently GenericOpt provides only unconstrained solvers. SciKits ------- The SciPy project is developing a collection of open source packages for scientific computing which are allowed to have more dependencies and more varied licenses than those allowed for SciPy proper. In contrast to SciPy, the related scikits may host any OSI-approved licensed code. See OpenOpt and GenericOpt are available together as a SciPy scikit. The scikit provides a unified optimization framework along with a collection of solvers. However, neither depends on the other. OpenOpt Details --------------- Key feature: a unified calling interface for all solvers, a variety of pure Python solvers, and connections to numerous external solvers. Example:: from scikits.openopt import NLP p = NLP(lambda x: (x-1)**2, 4) r = p.solve('ralg') In this example, the objective function is (x-1)^2, the start point is x0=4, and 'ralg' specifies the name of the solver involved. See much more detailed example here OpenOpt Connected External Solvers ---------------------------------- Non-linear problems (NLP) ~~~~~~~~~~~~~~~~~~~~~~~~~ - ALGENCAN (GPL) - lincher (BSD) (all types of constraints and 1st derivatives), - ralg (BSD) (currently unconstrained only) - scipy_tnc and scipy_lbfgsb (box-bounded, requires scipy installed, BSD) Non-smooth problems (NSP) ~~~~~~~~~~~~~~~~~~~~~~~~~ - ralg (BSD) - ShorEllipsoid (BSD) (ShorEllipsoid is for small-scale problems with nVars = 1..10 and requires r0; ralg is for medium-scale problems with nVars = 1...1000) Both are unconstrained for now.
Linear problems (LP) ~~~~~~~~~~~~~~~~~~~~ - lp_solve (LGPL) - glpk (GPL) - CVXOPT (GPL) (currently glpk requires CVXOPT installed) Mixed-integer problems (MILP) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - lp_solve Quadratic problems (QP) ~~~~~~~~~~~~~~~~~~~~~~~ - CVXOPT (GPL) (please note - the NLP lincher solver requires a QP solver, and the only one for now is the CVXOPT one) Here you can look at examples for NLP, NSP, QP, LP, MILP Acknowledgements ================ Development of OpenOpt was supported by Google through the Google Summer of Code (GSoC) program, with Alan G. Isaac as mentor. Additional mentorship was provided by Jarrod Millman. Debts to the SciPy community are many, but we would particularly like to thank Nils Wagner. Appeal ====== The primary author and current maintainer of OpenOpt is Dmitrey Kroshko. The primary author and current maintainer of GenericOpt is Matthieu Brucher. These packages are already functional and extensible, but both would profit from additional intensive development. This will require sponsorship, especially for substantial additions to OpenOpt. The use of Python for scientific programming is only nascent in the Ukraine, so really an outside sponsor is needed. Ideas or leads are very welcome. From vincent.nijs at gmail.com Mon Sep 10 23:00:28 2007 From: vincent.nijs at gmail.com (Vincent) Date: Tue, 11 Sep 2007 03:00:28 -0000 Subject: [SciPy-user] scoreatpercentile on 2D array gives unexpected result Message-ID: <1189479628.787536.107890@50g2000hsm.googlegroups.com> I want to get certain percentiles of each column of an array (i.e., 2.5th, 50th, and 97.5th). Testing the scipy scoreatpercentile function I get some results I didn't expect.
In [5]: z Out[5]: array([[1, 1, 1], [1, 1, 1], [4, 4, 3], [1, 1, 1], [1, 1, 1]]) The following works as expected: In [6]: N.median(z) Out[6]: array([1, 1, 1]) Now using scoreatpercentile: In [54]: scipy.stats.scoreatpercentile(z,50) Out[54]: array([3, 4, 4]) The function seems to assume the 2D array is already sorted and then it sorts the returned array from low to high? When I pass scoreatpercentile an array with one column things do seem to work as I expect: In [56]: scipy.stats.scoreatpercentile(z[:,0],50) Out[56]: 1 Any ideas what is going on? Should I be using a different function? Thanks, Vincent From wkerzendorf at googlemail.com Mon Sep 10 23:44:34 2007 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Tue, 11 Sep 2007 13:44:34 +1000 Subject: [SciPy-user] arrays mean & NaN... In-Reply-To: <46E32593.5000001@gmail.com> References: <46E2A82C.5070707@gmail.com> <46E2ED28.9000108@enthought.com> <46E32593.5000001@gmail.com> Message-ID: <46E60F22.4020406@gmail.com> It would be a very good idea, I use nan very often and have trouble when computing the mean. I think it would be better to have a switch in the mean function to switch to ignoring nans. Could that be implemented in other functions like squaresum (ss) as well? Thanks Wolfgang fred wrote: > eric a écrit : > >> Travis O., Robert K., and I were discussing adding nanmean, etc. to the >> nanmin, nanmax methods earlier this week... >> >> Sounds like others might find this useful as well. >> >> > Should be great ;-) > > Cheers, > > From david at ar.media.kyoto-u.ac.jp Tue Sep 11 02:09:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 11 Sep 2007 15:09:51 +0900 Subject: [SciPy-user] arrays mean & NaN...
In-Reply-To: <46E60F22.4020406@gmail.com> References: <46E2A82C.5070707@gmail.com> <46E2ED28.9000108@enthought.com> <46E32593.5000001@gmail.com> <46E60F22.4020406@gmail.com> Message-ID: <46E6312F.4000704@ar.media.kyoto-u.ac.jp> Wolfgang Kerzendorf wrote: > It would be a very good idea, I use nan very often and have trouble when > computing the mean. I think it would be better to have a switch in the > mean function to switch to ignoring nans. Could that be implemented in > other funtions like squaresum (ss) as well? > The nanmean, nanmedian and nanstd already exist, but for some reason are not exposed at the package module: from scipy.stats.stats import nanmean, nanmedian, nanstd import numpy as N a = N.array([1., 2., N.nan]) N.mean(a) # -> returns Nan nanmean(a) # -> returns 1.5, treating Nan as a missing value cheers, David From calhoun at amath.washington.edu Tue Sep 11 10:33:44 2007 From: calhoun at amath.washington.edu (Donna Calhoun) Date: Tue, 11 Sep 2007 07:33:44 -0700 (PDT) Subject: [SciPy-user] BLAS and srotgm Message-ID: Dear SciPy Users : I just got SciPy installed, finally. (Many thanks to the user who posted a while ago that the "-shared" flag was needed to build _fftpack.so). But now, when I try to import the blas module, I get the following error : ---------------------------------------------------------------------------- Python 2.5 (r25:51908, Sep 10 2007, 00:42:23) [GCC 3.4.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy.linalg.blas Traceback (most recent call last): File "", line 1, in File "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/basic.py", line 227, in import decomp File "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 21, in from blas import get_blas_funcs File "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/blas.py", line 14, in from scipy.linalg import fblas ImportError: /usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_ ---------------------------------------------------------------------------- I built libblas and liblapack from the latest version of lapack (3.1.1). The file srotmg.f is under BLAS/SRC and appears to be in the library : -------------------------------------------------------------- [calhoun at localhost lapack-3.1.1]# nm libblas.a | grep srotmg srotmg.o: 00000000 T srotmg_ -------------------------------------------------------------- (note that libblas.a and liblapack.a are symbolic links to the libraries blas_LINUX.a and lapack_LINUX.a built by the lapack build/install). The '00000000' is a bit suspicious, however. However, fblas.so (under scipy/linalg) has : --------------------------------------------------------- [calhoun at localhost linalg]# nm fblas.so | grep srotmg 0016f4a0 d doc_f2py_rout_fblas_srotmg 0005a4b0 t f2py_rout_fblas_srotmg U srotmg_ ---------------------------------------------------------- which I guess means 'srotmg' is undefined. Looking at a build log of scipy, the blas/lapack I specified in site.cfg were all found. So the question is, why can't python find srotmg?
Thank you for any help, Donna From ryanlists at gmail.com Tue Sep 11 13:13:34 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 11 Sep 2007 12:13:34 -0500 Subject: [SciPy-user] Python FEA Message-ID: I would like to start working on finite element analysis using Python. I would like to work specifically in two areas: 1. feedback control of flexible structures 2. modeling impact/contact My assumption is that much is being done in FEA in general and that I would need to write code specific to incorporating feedback or modeling contact. But I am not sure what all is being done using Python in FEA. A quick google search turned up the following: PyFemax: http://www.python.org/pycon/papers/pysparse.html Pfem: http://pfem.sourceforge.net/ which seem like good starts that may or may not still be maintained. Is anyone working on FEA in Python or can anyone point me to a project my google search failed to turn up? Thanks, Ryan From calhoun at amath.washington.edu Tue Sep 11 13:08:47 2007 From: calhoun at amath.washington.edu (Donna) Date: Tue, 11 Sep 2007 17:08:47 +0000 (UTC) Subject: [SciPy-user] BLAS and srotgm References: Message-ID: > > Dear SciPy Users : > > I just got SciPy installed, finally. (Many thanks to the user who posted > a while ago that the "-shared" flag was needed to build _fftpack.so). > > But now, when I try to import the blas module, I get the following error : > > ---------------------------------------------------------------------------- > Python 2.5 (r25:51908, Sep 10 2007, 00:42:23) > [GCC 3.4.0] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> import scipy.linalg.blas > Traceback (most recent call last): > File "", line 1, in > File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ __init__.py", > line 8, in > from basic import * > File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ basic.py", > line 227, in > import decomp > File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ decomp.py", > line 21, in > from blas import get_blas_funcs > File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ blas.py", > line 14, in > from scipy.linalg import fblas > ImportError: > /usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ fblas.so: > undefined symbol: srotmg_ > ---------------------------------------------------------------------------- I think I fixed this problem. In fact, despite my best efforts to make sure the scipy build found the latest libblas file, it had in fact found an old one. Maybe the "-lblas" was in the wrong place in the g77 command that built fblas.so? In the end, re-issuing just that command, slightly modified so there could be no mistake as to which libblas it was to find, seemed to fix my fblas.so. So now I can import blas. I had thought that all I needed to do was edit site.cfg to indicate where the blas libraries were. But somehow, it still looked elsewhere. I hope this helps anyone else who is a newbie at this and who runs into the same problem. Donna
> > The '00000000' is a bit suspicious, however. > > However, fblas.so (under scipy/linalg) has : > > --------------------------------------------------------- > [calhoun localhost linalg]# nm fblas.so | grep srotmg > 0016f4a0 d doc_f2py_rout_fblas_srotmg > 0005a4b0 t f2py_rout_fblas_srotmg > U srotmg_ > ---------------------------------------------------------- > > which I guess means 'srotmg' is undefined. Looking at a build log of > scipy, the blas/lapack I specified in site.cfg were all found. > > So the question is, why can't python find srotmg? > > Thank you for any help, > > Donna > From robert.kern at gmail.com Tue Sep 11 14:02:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Sep 2007 13:02:50 -0500 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: Message-ID: <46E6D84A.7020800@gmail.com> Donna wrote: >> Dear SciPy Users : >> >> I just got SciPy installed, finally. (Many thanks to the user who posted >> a while ago that the "-shared" flag was needed to build _fftpack.so). >> >> But now, when I try to import the blas module, I get the following error : >> >> ---------------------------------------------------------------------------- >> Python 2.5 (r25:51908, Sep 10 2007, 00:42:23) >> [GCC 3.4.0] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. 
>>>>> import scipy.linalg.blas >> Traceback (most recent call last): >> File "", line 1, in >> File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > __init__.py", > >> line 8, in >> from basic import * >> File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > basic.py", >> line 227, in >> import decomp >> File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > decomp.py", >> line 21, in >> from blas import get_blas_funcs >> File > "/usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > blas.py", >> line 14, in >> from scipy.linalg import fblas >> ImportError: >> /usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > fblas.so: >> undefined symbol: srotmg_ >> ---------------------------------------------------------------------------- > > I think I fixed this problem. In fact, despite my best efforts to make sure > the scipy build found the latest libblas file, it had in fact found an old one. > Maybe the "-lblas" was in the wrong place in the g77 command that built > fblas.so? In the end, re-issuing just that command, slightly modified so > there could be no mistake as to which libblas it was to find, seemed to fix > my fblas.so. So now I can import blas. > > I had thought that all I needed to do was edit site.cfg to indicate where > the blas libraries were. But somehow, it still looked elsewhere. > > I hope this helps anyone else who is a newbie at this and who runs into the > same problem. Can you give us the contents of your site.cfg, the locations of all of the libblas's on your system, the output from your build, and the g77 command that worked? That might help us to explain the problem at least, if not fix it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From vaftrudner at gmail.com Tue Sep 11 20:14:39 2007 From: vaftrudner at gmail.com (Martin Blom) Date: Tue, 11 Sep 2007 19:14:39 -0500 Subject: [SciPy-user] getting stats.zprob to return float96 Message-ID: Hello everyone, I'm using stats.zprob and need some more decimal points than standard python floats can give me, and I need the function to be fast. Can I get the standard scipy zprob or ndtr to return float96 in some way? Or do I need to modify something in special/cephes or something like that? cheers Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuntim.luk at polyu.edu.hk Tue Sep 11 23:36:03 2007 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Wed, 12 Sep 2007 11:36:03 +0800 Subject: [SciPy-user] Python FEA In-Reply-To: References: Message-ID: <46E75EA3.4010004@polyu.edu.hk> Ryan Krauss wrote: > I would like to start working on finite element analysis using Python. > I would like to work specifically in two areas: > 1. feedback control of flexible structures > 2. modeling impact/contact > > My assumption is that much is being done in FEA in general and that I > would need to write code specific to incorporating feedback or > modeling contact. But I am not sure what all is being done using > Python in FEA. A quick google search turned up the following: > > PyFemax: > http://www.python.org/pycon/papers/pysparse.html > > Pfem: > http://pfem.sourceforge.net/ > > which seem like good starts that may or may not still be maintained. > > Is anyone working on FEA in Python or can anyone point me to a project > my google search failed to turn up? > > Thanks, > > Ryan Hello Ryan, I guess it really depends on the level of details that you'd like to go into. One idea you might consider is general FEM packages that have some python wrappers already. 
For example, these I know of sundance/trilinos http://software.sandia.gov/sundance/ http://trilinos.sandia.gov/ http://trilinos.sandia.gov/packages/pytrilinos/ fenics/dolfin http://www.fenics.org/wiki/FEniCS_Project http://fenics.org/wiki/DOLFIN getfem http://home.gna.org/getfem/ oof2 http://www.ctcms.nist.gov/oof/oof2/ All open sourced and maintained. There must be a lot of others as well. Regards, ST -- From ded.espaze at laposte.net Wed Sep 12 05:13:23 2007 From: ded.espaze at laposte.net (Dede) Date: Wed, 12 Sep 2007 11:13:23 +0200 Subject: [SciPy-user] Python FEA In-Reply-To: References: Message-ID: <20070912111323.6d2102d0@localhost> Hi Ryan, There is also Code Aster, a FEM solver that uses Python files as input: http://www.code-aster.org/ and Salome for pre/post processing (all the commands are available from Python): http://www.salome-platform.org/home/presentation/overview/ They are not very easy to install, but they are interesting projects. The French layer of Code Aster is still a problem; only a small part of the documentation has been translated. Cheers, Dede On Tue, 11 Sep 2007 12:13:34 -0500 "Ryan Krauss" wrote: > I would like to start working on finite element analysis using Python. > I would like to work specifically in two areas: > 1. feedback control of flexible structures > 2. modeling impact/contact > > My assumption is that much is being done in FEA in general and that I > would need to write code specific to incorporating feedback or > modeling contact. But I am not sure what all is being done using > Python in FEA. A quick google search turned up the following: > > PyFemax: > http://www.python.org/pycon/papers/pysparse.html > > Pfem: > http://pfem.sourceforge.net/ > > which seem like good starts that may or may not still be maintained. > > Is anyone working on FEA in Python or can anyone point me to a project > my google search failed to turn up? 
> > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From travis at enthought.com Wed Sep 12 12:05:15 2007 From: travis at enthought.com (Travis Vaught) Date: Wed, 12 Sep 2007 11:05:15 -0500 Subject: [SciPy-user] ANN: Reminder - Texas Python Regional Unconference Message-ID: Greetings, Just a reminder for those in the area... http://pycamp.python.org/Texas/HomePage The Unconference is to be held this weekend (Saturday and Sunday, September 15, 16) at the Texas Learning & Computing Center at the University of Houston main campus. It's free. Sign up by adding your name to the wiki page. Travis From rmay at ou.edu Wed Sep 12 14:47:06 2007 From: rmay at ou.edu (Ryan May) Date: Wed, 12 Sep 2007 13:47:06 -0500 Subject: [SciPy-user] Problems with ACML In-Reply-To: <46D46591.9020703@gmail.com> References: <46D42672.8090203@ou.edu> <46D46591.9020703@gmail.com> Message-ID: <46E8342A.8060601@ou.edu> Robert Kern wrote: > Ryan May wrote: >> Hi, >> >> Does anyone here use the AMD Core Math Libraries (ACML) as their >> underlying libraries for BLAS/LAPACK/etc. in SciPy? I have problems >> with (at least) scipy.linalg.eigvals (Fernando discovered this at SciPy >> on my laptop). For instance, the following crashes reliably with ACML, >> but works fine with ATLAS versions of BLAS/LAPACK: >> >>>> >from scipy.linalg import eigvals >>>> >from numpy.random import rand >>>>> a = rand(100,100) >>>>> eigvals(a) >> Anyone else have this problem? Is ACML known to be a bad thing to use >> with SciPy? > > No, but it's not an option that's as thoroughly tested as ATLAS, either. If you > could supply a gdb backtrace from the crash, that would help locate the problem. 
> Well, right now this is what I get (having updated scipy): Python 2.5.1 (r251:54863, Jul 27 2007, 10:40:24) [GCC 4.1.2 (Gentoo 4.1.2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.linalg import eigvals Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib64/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib64/python2.5/site-packages/scipy/linalg/lapack.py", line 18, in from scipy.linalg import clapack ImportError: /usr/lib64/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv >>> scipy.__version__ '0.5.2.1' Any ACML users out there have any idea? This is on AMD64 w/ gfortran/int64 edition of ACML 3.6.1. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pav at iki.fi Wed Sep 12 16:04:37 2007 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 12 Sep 2007 20:04:37 +0000 (UTC) Subject: [SciPy-user] Python FEA References: Message-ID: Tue, 11 Sep 2007 12:13:34 -0500, Ryan Krauss wrote: [clip] > > Is anyone working on FEA in Python or can anyone point me to a project > my google search failed to turn up? I've stumbled on SFE http://ui505p06-mbs.ntc.zcu.cz/sfe but have to admit not really looking deeply what all it does. -- Pauli Virtanen From fperez.net at gmail.com Wed Sep 12 20:59:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 12 Sep 2007 18:59:34 -0600 Subject: [SciPy-user] really basic where() function question In-Reply-To: References: <46D39B90.9050409@gmail.com> Message-ID: On 8/28/07, Kurt Smith wrote: > Moral: be careful about importing * from pylab (or running $ ipython > -pylab)! Its functions aren't the same as numpy/scipy. 
Or more accurately, set pylab_import_all 0 in your ~/.ipython/ipythonrc file, to prevent the implicit from pylab import * that ipython -pylab normally does. It's also worth noting that pylab is being cleaned up from all this historical cruft, and will gradually become a purely numpy-compatible tool, so these nasty gotchas will mostly go away. These problems come from the fact that pylab was written back when Numeric and numarray both existed, with non-overlapping functionality. Pylab tried to expose some unified interface on top, which by necessity was different from either (it even added its own stuff). Over time, these problems will gradually go down, but you may prefer (I think it's safer) to set the above flag, and then use an ipython profile that does import pylab as P # or whatever shorthand you like, if any import numpy as N from pylab import plot, ... (put here the things you really use interactively a lot) from numpy import * With this, you only expose at the top level the real numpy and the plotting-related parts of pylab you actually need, as well as having two quick handles (N,P) to access numpy/pylab. Cheers, f From kwmsmith at gmail.com Thu Sep 13 00:22:22 2007 From: kwmsmith at gmail.com (Kurt Smith) Date: Wed, 12 Sep 2007 23:22:22 -0500 Subject: [SciPy-user] really basic where() function question In-Reply-To: References: <46D39B90.9050409@gmail.com> Message-ID: On 9/12/07, Fernando Perez wrote: > On 8/28/07, Kurt Smith wrote: > > > Moral: be careful about importing * from pylab (or running $ ipython > > -pylab)! Its functions aren't the same as numpy/scipy. > > > > Or more accurately, set > > > > pylab_import_all 0 Thanks for the tip -- quite helpful. Glad to hear that these warts are being fixed, too. Kurt From fredmfp at gmail.com Thu Sep 13 06:40:41 2007 From: fredmfp at gmail.com (fred) Date: Thu, 13 Sep 2007 12:40:41 +0200 Subject: [SciPy-user] issues while loading scatter data file with load() from pylab... 
Message-ID: <46E913A9.1000607@gmail.com> Hi all, First question. Using load() function from pylab, array returned is a float64. Is it possible to directly load it in float32 ? I don't need the double precision. And I saw nothing with load? The issue. My scatter data has ~7x1e6 points, stored as x, y, z, v per line. Using a short C code and fscanf, it takes 12 s and ~240 MB in format double to load it. Fine. Using load() from pylab to load this file is endless and needs more than 1 GB. What's the problem ? TIA Cheers, -- http://scipy.org/FredericPetit From gael.varoquaux at normalesup.org Thu Sep 13 07:24:31 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 13 Sep 2007 13:24:31 +0200 Subject: [SciPy-user] issues while loading scatter data file with load() from pylab... In-Reply-To: <46E913A9.1000607@gmail.com> References: <46E913A9.1000607@gmail.com> Message-ID: <20070913112431.GB16512@clipper.ens.fr> On Thu, Sep 13, 2007 at 12:40:41PM +0200, fred wrote: > First question. > Using load() function from pylab, array returned is a float64. > Is it possible to directly load it in float32 ? > I don't need the double precision. > And I saw nothing with load? > The issue. > My scatter data has ~7x1e6 points, > stored as x, y, z, v per line. > Using a short C code and fscanf, it takes 12 s and ~240 MB in format > double to load it. > Fine. > Using load() from pylab to load this file is endless and needs more than > 1 GB. Did you try something less "swiss army knife" than pylab.load ? For instance scipy.io.read_array or something homebaked ? As pylab.load is trying to accommodate all sorts of weird things, and is very versatile, I bet something more targeted would be quicker. Another solution is to store the data in a format better suited for large data. For instance hdf5 with pytables. 
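For instance, a minimal "homebaked" binary round-trip with numpy (the file name and the four-column x, y, z, v layout below are assumptions for illustration, not fred's actual format):

```python
import numpy as np

# Write the scatter points once as raw float32, then reload them with
# numpy.fromfile: a single sequential read, no text parsing at all.
points = np.random.rand(1000, 4).astype(np.float32)  # stand-in for x, y, z, v
points.tofile('scatter.bin')                         # raw binary, no header

loaded = np.fromfile('scatter.bin', dtype=np.float32).reshape(-1, 4)
print(loaded.shape)                   # (1000, 4)
print(np.array_equal(loaded, points)) # True
```

At 7e6 points of four float32 values, that is a single ~112 MB sequential read, far from the >1 GB a text parser can chew through.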
Gaël From a.g.basden at durham.ac.uk Thu Sep 13 08:06:22 2007 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Thu, 13 Sep 2007 13:06:22 +0100 (BST) Subject: [SciPy-user] determinants Message-ID: Hi, I wonder whether there is a bug in scipy.linalg.det? If I call this with my matrix, it returns inf. The matrix is 2048x2048 in size, with no inf elements (max 54, min -13), float64 type. I can't work out why it would think the determinant is infinity! Thanks... From matthieu.brucher at gmail.com Thu Sep 13 08:08:55 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Sep 2007 14:08:55 +0200 Subject: [SciPy-user] determinants In-Reply-To: References: Message-ID: Hi, What are the maximum and minimum eigenvalues of the array ? Matthieu 2007/9/13, Alastair Basden : > > Hi, > I wonder whether there is a bug in scipy.linalg.det? If I call this with > my matrix, it returns inf. The matrix is 2048x2048 in size, with no inf > elements (max 54, min -13), float64 type. > I can't work out why it would think the determinant is infinity! > > Thanks... > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.g.basden at durham.ac.uk Thu Sep 13 08:17:31 2007 From: a.g.basden at durham.ac.uk (Alastair Basden) Date: Thu, 13 Sep 2007 13:17:31 +0100 (BST) Subject: [SciPy-user] determinants In-Reply-To: References: Message-ID: Hi, the max/min eigenvalues are: 158.63053878597884+0j, 0.052723723222460814+0j Thanks... On Thu, 13 Sep 2007, Alastair Basden wrote: References: <46E913A9.1000607@gmail.com> <20070913112431.GB16512@clipper.ens.fr> Message-ID: <46E92CAC.4050505@gmail.com> Gael Varoquaux wrote: > On Thu, Sep 13, 2007 at 12:40:41PM +0200, fred wrote: > >> First question. >> Using load() function from pylab, array returned is a float64. 
>> Is it possible to directly load it in float32 ? >> I don't need the double precision. >> And I saw nothing with load? >> > > >> The issue. >> > > >> My scatter data has ~7x1e6 points, >> stored as x, y, z, v per line. >> > > >> Using a short C code and fscanf, it takes 12 s and ~240 MB in format >> double to load it. >> Fine. >> > > >> Using load() from pylab to load this file is endless and needs more than >> 1 GB. >> > > Did you try something less "swiss army knife" than pylab.load ? For instance > scipy.io.read_array or something homebaked ? As pylab.load is trying to > accommodate all sorts of weird things, and is very versatile, I bet > something more targeted would be quicker. > Well, I have modified my code to read scatter data from a binary file. No more, no less ;-) > Another solution is to store the data in a format better suited for large > data. For instance hdf5 with pytables. > I'll look at it when we have to process data files of several GB ;-) Thanks. -- http://scipy.org/FredericPetit From aisaac at american.edu Thu Sep 13 10:33:47 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 13 Sep 2007 10:33:47 -0400 Subject: [SciPy-user] issues while loading scatter data file with load() from pylab... In-Reply-To: <20070913112431.GB16512@clipper.ens.fr> References: <46E913A9.1000607@gmail.com><20070913112431.GB16512@clipper.ens.fr> Message-ID: On Thu, Sep 13, 2007 at 12:40:41PM +0200, fred wrote: > Using load() function from pylab, array returned is > a float64. Is it possible to directly load it in float32 > ? I don't need the double precision. And I saw nothing > with load? numpy.fromfile with a reshape? hth, Alan Isaac PS >> help(numpy.fromfile) Help on built-in function fromfile in module numpy.core.multiarray: fromfile(...) fromfile(file=, dtype=float, count=-1, sep='') -> array. Required arguments: file -- open file object or string containing file name. 
Keyword arguments: dtype -- type and order of the returned array (default float) count -- number of items to input (default all) sep -- separator between items if file is a text file (default "") Return an array of the given data type from a text or binary file. The 'file' argument can be an open file or a string with the name of a file to read from. If 'count' == -1 the entire file is read, otherwise count is the number of items of the given type to read in. If 'sep' is "" it means to read binary data from the file using the specified dtype, otherwise it gives the separator between elements in a text file. The 'dtype' value is also used to determine the size and order of the items in binary files... From vaftrudner at gmail.com Thu Sep 13 11:15:03 2007 From: vaftrudner at gmail.com (Martin Blom) Date: Thu, 13 Sep 2007 10:15:03 -0500 Subject: [SciPy-user] getting stats.zprob to return float96 In-Reply-To: References: Message-ID: Hello everyone, I'm using stats.zprob and need some more decimal points than standard python floats can give me, and I need the function to be fast. Can I get the standard scipy zprob or ndtr to return float96 in some way? Or do I need to modify something in special/cephes or something like that? cheers Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 13 12:16:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Sep 2007 11:16:25 -0500 Subject: [SciPy-user] getting stats.zprob to return float96 In-Reply-To: References: Message-ID: <46E96259.6070401@gmail.com> Martin Blom wrote: > Hello everyone, > > I'm using stats.zprob and need some more decimal points than standard > python floats can give me, and I need the function to be fast. Can I get > the standard scipy zprob or ndtr to return float96 in some way? Or do I > need to modify something in special/cephes or something like that? 
You would have to find or write a routine that used higher precision floats. We don't wrap any of them. Cephes does have quad-precision versions of many of their functions, but you would have to wrap them yourself. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lou_boog2000 at yahoo.com Thu Sep 13 12:41:11 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 13 Sep 2007 09:41:11 -0700 (PDT) Subject: [SciPy-user] Can't import SciPy packages Message-ID: <955623.98385.qm@web34412.mail.mud.yahoo.com> I have the SciPy library from Enthought, but I can't import the sub-packages (like integrate or special) in ipython. Here's what happens: import scipy.integrate --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /Users/loupecora/ /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/integrate/__init__.py 7 from info import __doc__ 8 ----> 9 from quadrature import * 10 from odepack import * 11 from quadpack import * /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/integrate/quadrature.py 6 'cumtrapz'] 7 ----> 8 from scipy.special.orthogonal import p_roots 9 from numpy import sum, ones, add, diff, isinf, isscalar, \ 10 asarray, real, trapz, arange, empty /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/__init__.py 6 #from special_version import special_version as __version__ 7 ----> 8 from basic import * 9 import specfun 10 import orthogonal /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/basic.py 6 7 from numpy import * ----> 8 from _cephes import * 9 import types 10 import specfun ImportError: Inappropriate file type for dynamic loading I'm on a MacBook Pro, Tiger 10.4.10, 
python 2.4. Any ideas? Thanks. -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein ____________________________________________________________________________________ Got a little couch potato? Check out fun summer activities for kids. http://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz From robert.kern at gmail.com Thu Sep 13 12:53:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Sep 2007 11:53:51 -0500 Subject: [SciPy-user] Can't import SciPy packages In-Reply-To: <955623.98385.qm@web34412.mail.mud.yahoo.com> References: <955623.98385.qm@web34412.mail.mud.yahoo.com> Message-ID: <46E96B1F.9070103@gmail.com> Lou Pecora wrote: > I have the SciPy library from Enthought, but I can't > import the sub-packages (like integrate or special) in > ipython. Here's what happens: Exactly which file did you download and install? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Thu Sep 13 13:10:46 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 13 Sep 2007 20:10:46 +0300 Subject: [SciPy-user] determinants In-Reply-To: References: Message-ID: <46E96F16.5020102@ukr.net> from scipy import linalg from scipy import rand for N in [10, 50, 100, 200, 400, 500, 1000]: print linalg.det(rand(N, N)) So typical A[i,j] values are 0.0 ... 1.0 (that is much less than your -13...54) and typical output is 0.00356531521304 210176.6131 -6.3723083425e+24 1.45560352703e+80 8.58872496027e+217 -2.76708200113e+296 -inf Same in MATLAB/Octave So I don't see any bugs here. Regards, D. Alastair Basden wrote: > Hi, > I wonder whether there is a bug in scipy.linalg.det? If I call this with > my matrix, it returns inf. 
The matrix is 2048x2048 in size, with no inf > elements (max 54, min -13), float64 type. > I can't work out why it would think the determinant is infinity! > > Thanks... > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From lou_boog2000 at yahoo.com Thu Sep 13 13:58:23 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 13 Sep 2007 10:58:23 -0700 (PDT) Subject: [SciPy-user] Can't import SciPy packages In-Reply-To: <46E96B1F.9070103@gmail.com> Message-ID: <550006.73491.qm@web34414.mail.mud.yahoo.com> --- Robert Kern wrote: > Lou Pecora wrote: > > I have the SciPy library from Enthought, but I > can't > > import the sub-packages (like integrate or > special) in > > ipython. Here's what happens: > > Exactly which file did you download and install? Good question. It's been a long time (months) and I don't remember. I decided to install SciPy from the tarball instead and still have some problems. I'm going to post another message on that to see if anyone can figure what happened or tell me how to get *all* of SciPy installed. Sorry I don't have the info you asked for. My other question (above) to be posted very soon. Thanks. -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ From lou_boog2000 at yahoo.com Thu Sep 13 14:04:23 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 13 Sep 2007 11:04:23 -0700 (PDT) Subject: [SciPy-user] Problem getting Full SciPy package set installed. Message-ID: <276391.50249.qm@web34401.mail.mud.yahoo.com> On Mac OSX 10.4.10, Python 2.4. I got NumPy 1.0.3.1 installed from the tarball successfully. 
I got gfortran installed (seemed successful) from the SourceForge Mac HPC page. But I had a serious error when I tried to install the latest SciPy package set (0.5.2.1) from the tarball. When I run the usual python setup.py install I get (after a LOT of output): ... Lib/integrate/quadpack/dqelg.f: In function 'dqelg': Lib/integrate/quadpack/dqelg.f:1: internal compiler error: Bus error Please submit a full bug report, with preprocessed source if appropriate. See for instructions. error: Command "/usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -c -c Lib/integrate/quadpack/dqelg.f -o build/temp.macosx-10.3-fat-2.4/Lib/integrate/quadpack/dqelg.o" failed with exit status 1 That error looks really low-level. How can I get the SciPy package installed? Is there another way? Thanks for any help. -- Lou Pecora, my views are my own. --------------- Great spirits have always encountered violent opposition from mediocre minds. -Albert Einstein ____________________________________________________________________________________ Building a website is a piece of cake. Yahoo! Small Business gives you all the tools to get online. http://smallbusiness.yahoo.com/webhosting From robert.kern at gmail.com Thu Sep 13 14:09:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Sep 2007 13:09:02 -0500 Subject: [SciPy-user] Problem getting Full SciPy package set installed. In-Reply-To: <276391.50249.qm@web34401.mail.mud.yahoo.com> References: <276391.50249.qm@web34401.mail.mud.yahoo.com> Message-ID: <46E97CBE.5050307@gmail.com> Lou Pecora wrote: > On Mac OSX 10.4.10, Python 2.4. > > I got NumPy 1.0.3.1 installed from the tarball > successfully. I got gfortran installed (seemed > successful) from the SourceForge Mac HPC page. But I > had a serious error when I tried to install the latest > SciPy package set (0.5.2.1) from the tarball. When I > run the usual python setup.py install I get (after > a LOT of output): > > ... 
> Lib/integrate/quadpack/dqelg.f: In function 'dqelg': > Lib/integrate/quadpack/dqelg.f:1: internal compiler > error: Bus error > Please submit a full bug report, > with preprocessed source if appropriate. > See for > instructions. > error: Command "/usr/local/bin/gfortran -Wall > -ffixed-form -fno-second-underscore -fPIC -O3 > -funroll-loops -c -c Lib/integrate/quadpack/dqelg.f -o > build/temp.macosx-10.3-fat-2.4/Lib/integrate/quadpack/dqelg.o" > failed with exit status 1 > > That error looks really low-level. > > How can I get the SciPy package installed? Is there > another way? Try the gfortran compiler available from here: http://r.research.att.com/tools/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emanuelez at gmail.com Thu Sep 13 14:33:40 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Thu, 13 Sep 2007 20:33:40 +0200 Subject: [SciPy-user] fft of an image Message-ID: hello, i would like to show the magnitude of the fourier transform of an image. interesting information appears in the four corners of the resulting image, but i would like to shift the matrix in order to have the corners meeting each other in the middle. any hint? -- Emanuele Zattin --------------------------------------------------- -I don't have to know an answer. I don't feel frightened by not knowing things; by being lost in a mysterious universe without any purpose, which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.- Richard Feynman From lou_boog2000 at yahoo.com Thu Sep 13 14:34:58 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 13 Sep 2007 11:34:58 -0700 (PDT) Subject: [SciPy-user] Problem getting Full SciPy package set installed. 
In-Reply-To: <46E97CBE.5050307@gmail.com> Message-ID: <606687.84248.qm@web34407.mail.mud.yahoo.com> --- Robert Kern wrote: > Lou Pecora asked: > > > How can I get the SciPy package installed? Is > > there another way? > > Try the gfortran compiler available from here: > > http://r.research.att.com/tools/ > > -- > Robert Kern That seemed to work. The python setup install of the SciPy module produced a LOT of output with plenty of warnings sprinkled in, but no errors. I can now import the sub-packages 'special' and 'integrate'. Only time will tell if this will work in the calculations. Thanks, Robert. The SciPy web site points to the gfortran that didn't work. Any 'fix' for that? -- Lou Pecora, my views are my own. From peridot.faceted at gmail.com Thu Sep 13 14:44:26 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 13 Sep 2007 14:44:26 -0400 Subject: [SciPy-user] determinants In-Reply-To: <46E96F16.5020102@ukr.net> References: <46E96F16.5020102@ukr.net> Message-ID: If you have problems with determinants becoming excessively large, you may be able to circumvent them by computing the log of the determinant. The easiest way to do this is to use LU decomposition: P,L,U = scipy.linalg.lu(M) d = sum(log(abs(diag(U)))) Of course you lose track of the sign doing this (P may be either an even or odd permutation, though det should be reliable and efficient on it). This is not necessarily much slower than using scipy's built-in det; both are O(N^3), at least, and scipy may implement its det this way. Anne On 13/09/2007, dmitrey wrote: > from scipy import linalg > from scipy import rand > for N in [10, 50, 100, 200, 400, 500, 1000]: > print linalg.det(rand(N, N)) > > So typical A[i,j] values are 0.0 ... 
1.0 (that is much less than your > -13...54) and > typical output is > 0.00356531521304 > 210176.6131 > -6.3723083425e+24 > 1.45560352703e+80 > 8.58872496027e+217 > -2.76708200113e+296 > -inf > > Same in MATLAB/Octave > So I don't see any bugs here. > Regards, D. > Alastair Basden wrote: > > Hi, > > I wonder whether there is a bug in scipy.linalg.det? If I call this with > > my matrix, it returns inf. The matrix is 2048x2048 in size, with no inf > > elements (max 54, min -13), float64 type. > > I can't work out why it would think the determinant is infinity! > > > > Thanks... > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lbolla at gmail.com Thu Sep 13 15:20:01 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 13 Sep 2007 21:20:01 +0200 Subject: [SciPy-user] fft of an image In-Reply-To: References: Message-ID: <80c99e790709131220v12271e0v433091db1ba2a86c@mail.gmail.com> what about scipy.fftpack.fftshift? In [7]: x = scipy.arange(25).reshape(5,5) In [8]: x Out[8]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) In [9]: scipy.fftpack.fftshift(x) Out[9]: array([[18, 19, 15, 16, 17], [23, 24, 20, 21, 22], [ 3, 4, 0, 1, 2], [ 8, 9, 5, 6, 7], [13, 14, 10, 11, 12]]) On 9/13/07, Emanuele Zattin wrote: > > hello, > i would like to show the magnitude of the fourier transform of an > image. interesting informations appear in the four corners of the > resulting image, but i would like to shift the matrix in order to have > the corners meeting each other in the middle. any hint? > > -- > Emanuele Zattin > --------------------------------------------------- > -I don't have to know an answer. 
I don't feel frightened by not > knowing things; by being lost in a mysterious universe without any > purpose ? which is the way it really is, as far as I can tell, > possibly. It doesn't frighten me.- Richard Feynman > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 13 16:19:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Sep 2007 15:19:51 -0500 Subject: [SciPy-user] determinants In-Reply-To: References: <46E96F16.5020102@ukr.net> Message-ID: <46E99B67.4080600@gmail.com> Anne Archibald wrote: > If you have problems with determinants becoming excessively large, you > may be able to circumvent them by computing the log of the > determinant. The easiest way to do this is to use LU decomposition: > > P,L,U = scipy.linalg.lu(M) > d = sum(log(abs(diag(L)))) > > Of course you lose track of the sign doing this (P may be either an > even or odd permutation, though det should be reliable and efficient > on it). > > This is not necessarily much slower than using scipy's built-in det; > both are O(N^3), at least, and scipy may implement its det this way. We do use an LU decomposition although we accumulate the determinant by straight multiplication, not through logarithms. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
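To make the fftshift answer from the earlier fft-of-an-image thread concrete end-to-end, here is a minimal sketch; the 64x64 random array is just a hypothetical stand-in for a real image, and numpy's fftshift is used, which behaves the same way as scipy.fftpack.fftshift:

```python
import numpy as np

# Stand-in "image"; any 2-D array works the same way.
img = np.random.default_rng(0).random((64, 64))

F = np.fft.fft2(img)
# fftshift moves the zero-frequency ("interesting") terms from the
# four corners of the spectrum to the centre of the array.
mag = np.abs(np.fft.fftshift(F))
```

After the shift, the DC term (usually the brightest point of the magnitude plot) sits at index (32, 32) instead of (0, 0), so the four corners meet in the middle as requested.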
-- Umberto Eco From rmay at ou.edu Thu Sep 13 16:41:26 2007 From: rmay at ou.edu (Ryan May) Date: Thu, 13 Sep 2007 15:41:26 -0500 Subject: [SciPy-user] Problems with ACML In-Reply-To: <46E8342A.8060601@ou.edu> References: <46D42672.8090203@ou.edu> <46D46591.9020703@gmail.com> <46E8342A.8060601@ou.edu> Message-ID: <46E9A076.2020803@ou.edu> > Well, right now this is what I get (having updated scipy): > > Python 2.5.1 (r251:54863, Jul 27 2007, 10:40:24) > [GCC 4.1.2 (Gentoo 4.1.2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> from scipy.linalg import eigvals > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib64/python2.5/site-packages/scipy/linalg/__init__.py", > line 8, in > from basic import * > File "/usr/lib64/python2.5/site-packages/scipy/linalg/basic.py", line > 17, in > from lapack import get_lapack_funcs > File "/usr/lib64/python2.5/site-packages/scipy/linalg/lapack.py", line > 18, in > from scipy.linalg import clapack > ImportError: /usr/lib64/python2.5/site-packages/scipy/linalg/clapack.so: > undefined symbol: clapack_sgesv >>>> scipy.__version__ > '0.5.2.1' > > Any ACML users out there have any idea? This is on AMD64 w/ > gfortran/int64 edition of ACML 3.6.1. > > Ryan > Well, it appears that the missing symbol was due to some bad combination of ACML _and_ ATLAS. Removing atlas and recompiling scipy/numpy resolved it. As far as the original crash goes, my problem was that I had the int64 version of ACML installed. So for you ACML users out there, don't use the int64 version unless the code you're using actually uses INTEGER*8.
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From David.L.Goldsmith at noaa.gov Thu Sep 13 17:05:58 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 13 Sep 2007 14:05:58 -0700 Subject: [SciPy-user] determinants In-Reply-To: References: <46E96F16.5020102@ukr.net> Message-ID: <46E9A636.7030805@noaa.gov> Anne Archibald wrote: > If you have problems with determinants becoming excessively large, you > may be able to circumvent them by computing the log of the > determinant. The easiest way to do this is to use LU decomposition: > > P,L,U = scipy.linalg.lu(M) > d = sum(log(abs(diag(L)))) > > Of course you lose track of the sign doing this (P may be either an > even or odd permutation, though det should be reliable and efficient > on it). Of course, one can keep track of the sign by cumprod(sgn(diag(L))), yes? (Sorry, I don't know the numpy functions for these off hand, but I assume they exist, yes?) DG > This is not necessarily much slower than using scipy's built-in det; > both are O(N^3), at least, and scipy may implement its det this way. > > Anne > > > On 13/09/2007, dmitrey wrote: > >> from scipy import linalg >> from scipy import rand >> for N in [10, 50, 100, 200, 400, 500, 1000]: >> print linalg.det(rand(N, N)) >> >> So typical A[i,j] values are 0.0 ... 1.0 (that is much less than your >> -13...54) and >> typical output is >> 0.00356531521304 >> 210176.6131 >> -6.3723083425e+24 >> 1.45560352703e+80 >> 8.58872496027e+217 >> -2.76708200113e+296 >> -inf >> >> Same in MATLAB/Octave >> So I don't see any bugs here. >> Regards, D. >> Alastair Basden wrote: >> >>> Hi, >>> I wonder whether there is a bug in scipy.linalg.det? If I call this with >>> my matrix, it returns inf. The matrix is 2048x2048 in size, with no inf >>> elements (max 54, min -13), float64 type. >>> I can't work out why it would think the determinant is infinity! >>> >>> Thanks... 
>>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Thu Sep 13 17:12:37 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 13 Sep 2007 17:12:37 -0400 Subject: [SciPy-user] determinants In-Reply-To: <46E9A636.7030805@noaa.gov> References: <46E96F16.5020102@ukr.net> <46E9A636.7030805@noaa.gov> Message-ID: On 13/09/2007, David Goldsmith wrote: > Anne Archibald wrote: > > If you have problems with determinants becoming excessively large, you > > may be able to circumvent them by computing the log of the > > determinant. The easiest way to do this is to use LU decomposition: > > > > P,L,U = scipy.linalg.lu(M) > > d = sum(log(abs(diag(L)))) > > > > Of course you lose track of the sign doing this (P may be either an > > even or odd permutation, though det should be reliable and efficient > > on it). > Of course, one can keep track of the sign by cumprod(sgn(diag(L))), > yes? (Sorry, I don't know the numpy functions for these off hand, but I > assume they exist, yes?) In my first draft I suggested this (prod(sign(diag(L)))), but unfortunately P may contribute a factor of -1, so you have to extract its determinant as well (unless there's some more clever way to get the sign of a permutation matrix? some sum involving the positions of the ones modulo 2 ought to do it, but it's been a while since I did this kind of combinatorics). 
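Putting the pieces of this exchange together, here is one hedged sketch of a sign-aware log-determinant. Two caveats worth flagging: scipy.linalg.lu returns L with a unit diagonal, so the pivot magnitudes actually live on diag(U) rather than diag(L); and the permutation's sign can be read off det(P), which is exactly +1 or -1 for a permutation matrix (the function name slogdet_lu is mine, not scipy's):

```python
import numpy as np
from scipy.linalg import lu

def slogdet_lu(M):
    # Sign and log-magnitude of det(M) via LU decomposition.
    # scipy.linalg.lu factors M = P @ L @ U with L unit-diagonal,
    # so the determinant's magnitude sits on diag(U), not diag(L).
    P, L, U = lu(M)
    d = np.diag(U)
    logabs = np.sum(np.log(np.abs(d)))
    # det(P) is exactly +1 or -1 (the parity of the permutation);
    # round() strips any floating-point fuzz before taking the sign.
    sign = np.sign(np.prod(np.sign(d)) * round(np.linalg.det(P)))
    return sign, logabs
```

For a 2048x2048 matrix like the one that started this thread, logabs stays finite even when det itself overflows to inf.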
Anne From David.L.Goldsmith at noaa.gov Thu Sep 13 17:25:36 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 13 Sep 2007 14:25:36 -0700 Subject: [SciPy-user] determinants In-Reply-To: References: <46E96F16.5020102@ukr.net> <46E9A636.7030805@noaa.gov> Message-ID: <46E9AAD0.5040001@noaa.gov> Anne Archibald wrote: > On 13/09/2007, David Goldsmith wrote: > >> Anne Archibald wrote: >> >>> If you have problems with determinants becoming excessively large, you >>> may be able to circumvent them by computing the log of the >>> determinant. The easiest way to do this is to use LU decomposition: >>> >>> P,L,U = scipy.linalg.lu(M) >>> d = sum(log(abs(diag(L)))) >>> >>> Of course you lose track of the sign doing this (P may be either an >>> even or odd permutation, though det should be reliable and efficient >>> on it). >>> >> Of course, one can keep track of the sign by cumprod(sgn(diag(L))), >> yes? (Sorry, I don't know the numpy functions for these off hand, but I >> assume they exist, yes?) >> > > In my first draft I suggested this (prod(sign(diag(L)))), but > unfortunately P may contribute a factor of -1, so you have to extract > its determinant as well (unless there's some more clever way to get > the sign of a permutation matrix? some sum involving the positions of > the ones modulo 2 ought to do it, but it's been a while since I did > this kind of combinatorics). > Right, (clearly) same here. ;-) However, this suggests something that maybe should be implemented inside det's "black box"? (Obviously, if det is being used inside a formula, the function can't simply return log(det) w/out some manner of user notification. Perhaps a custom exception and/or a second function, e.g. "logdet", to which the user is referred if abs(det) returns inf? Just a suggestion.) 
DG > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Thu Sep 13 17:45:35 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Sep 2007 16:45:35 -0500 Subject: [SciPy-user] determinants In-Reply-To: <46E9AAD0.5040001@noaa.gov> References: <46E96F16.5020102@ukr.net> <46E9A636.7030805@noaa.gov> <46E9AAD0.5040001@noaa.gov> Message-ID: <46E9AF7F.6070200@gmail.com> David Goldsmith wrote: > Anne Archibald wrote: >> On 13/09/2007, David Goldsmith wrote: >> >>> Anne Archibald wrote: >>> >>>> If you have problems with determinants becoming excessively large, you >>>> may be able to circumvent them by computing the log of the >>>> determinant. The easiest way to do this is to use LU decomposition: >>>> >>>> P,L,U = scipy.linalg.lu(M) >>>> d = sum(log(abs(diag(L)))) >>>> >>>> Of course you lose track of the sign doing this (P may be either an >>>> even or odd permutation, though det should be reliable and efficient >>>> on it). >>>> >>> Of course, one can keep track of the sign by cumprod(sgn(diag(L))), >>> yes? (Sorry, I don't know the numpy functions for these off hand, but I >>> assume they exist, yes?) >>> >> In my first draft I suggested this (prod(sign(diag(L)))), but >> unfortunately P may contribute a factor of -1, so you have to extract >> its determinant as well (unless there's some more clever way to get >> the sign of a permutation matrix? some sum involving the positions of >> the ones modulo 2 ought to do it, but it's been a while since I did >> this kind of combinatorics). >> > Right, (clearly) same here. ;-) > > However, this suggests something that maybe should be implemented inside > det's "black box"? (Obviously, if det is being used inside a formula, > the function can't simply return log(det) w/out some manner of user > notification. Perhaps a custom exception and/or a second function, e.g. 
> "logdet", to which the user is referred if abs(det) returns inf? Just a > suggestion.) I would just implement logdet() separately and not try to switch between the two automatically. It should be fairly straightforward. You can look in scipy/linalg/src/det.f and scipy/linalg/basic.py for the relevant pieces of code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From David.L.Goldsmith at noaa.gov Thu Sep 13 17:55:38 2007 From: David.L.Goldsmith at noaa.gov (David Goldsmith) Date: Thu, 13 Sep 2007 14:55:38 -0700 Subject: [SciPy-user] determinants In-Reply-To: <46E9AF7F.6070200@gmail.com> References: <46E96F16.5020102@ukr.net> <46E9A636.7030805@noaa.gov> <46E9AAD0.5040001@noaa.gov> <46E9AF7F.6070200@gmail.com> Message-ID: <46E9B1DA.6090700@noaa.gov> Thanks! DG Robert Kern wrote: > David Goldsmith wrote: > >> Anne Archibald wrote: >> >>> On 13/09/2007, David Goldsmith wrote: >>> >>> >>>> Anne Archibald wrote: >>>> >>>> >>>>> If you have problems with determinants becoming excessively large, you >>>>> may be able to circumvent them by computing the log of the >>>>> determinant. The easiest way to do this is to use LU decomposition: >>>>> >>>>> P,L,U = scipy.linalg.lu(M) >>>>> d = sum(log(abs(diag(L)))) >>>>> >>>>> Of course you lose track of the sign doing this (P may be either an >>>>> even or odd permutation, though det should be reliable and efficient >>>>> on it). >>>>> >>>>> >>>> Of course, one can keep track of the sign by cumprod(sgn(diag(L))), >>>> yes? (Sorry, I don't know the numpy functions for these off hand, but I >>>> assume they exist, yes?) 
>>>> >>>> >>> In my first draft I suggested this (prod(sign(diag(L)))), but >>> unfortunately P may contribute a factor of -1, so you have to extract >>> its determinant as well (unless there's some more clever way to get >>> the sign of a permutation matrix? some sum involving the positions of >>> the ones modulo 2 ought to do it, but it's been a while since I did >>> this kind of combinatorics). >>> >>> >> Right, (clearly) same here. ;-) >> >> However, this suggests something that maybe should be implemented inside >> det's "black box"? (Obviously, if det is being used inside a formula, >> the function can't simply return log(det) w/out some manner of user >> notification. Perhaps a custom exception and/or a second function, e.g. >> "logdet", to which the user is referred if abs(det) returns inf? Just a >> suggestion.) >> > > I would just implement logdet() separately and not try to switch between the two > automatically. It should be fairly straightforward. You can look in > scipy/linalg/src/det.f and scipy/linalg/basic.py for the relevant pieces of code. > > From vaftrudner at gmail.com Thu Sep 13 19:54:04 2007 From: vaftrudner at gmail.com (Martin Blom) Date: Thu, 13 Sep 2007 18:54:04 -0500 Subject: [SciPy-user] getting stats.zprob to return float96 In-Reply-To: <46E96259.6070401@gmail.com> References: <46E96259.6070401@gmail.com> Message-ID: Rrrrgh. That's what I feared! Well, thank you! Martin 2007/9/13, Robert Kern : > > Martin Blom wrote: > > Hello everyone, > > > > I'm using stats.zprob and need some more decimal points than standard > > python floats can give me, and I need the function to be fast. Can I get > > the standard scipy zprob or ndtr to return float96 in some way? Or do I > > need to modify something in special/cephes or something like that? > > You would have to find or write a routine that used higher precision > floats. We > don't wrap any of them. 
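For what it's worth, one route that avoids wrapping Cephes by hand (an addition on my part, not something scipy provides) is the mpmath library, which evaluates erfc at arbitrary precision; the normal CDF is then a one-liner:

```python
from mpmath import mp, erfc, mpf, sqrt

mp.dps = 30  # work with ~30 significant decimal digits, beyond float96

def ndtr_mp(x):
    # Standard normal CDF: Phi(x) = erfc(-x / sqrt(2)) / 2,
    # evaluated in arbitrary precision.
    return erfc(-mpf(x) / sqrt(2)) / 2
```

This is far slower than the compiled zprob/ndtr, so it only helps where the extra digits matter more than speed.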
Cephes does have quad-precision versions of many > of > their functions, but you would have to wrap them yourself. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Thu Sep 13 21:13:13 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 13 Sep 2007 19:13:13 -0600 Subject: [SciPy-user] APL2007 - Arrays and Objects - Early Bird Registration and Preliminary Program Message-ID: <46E9E029.5020004@shrogers.com> This is the last day for early bird registration for APL2007, 21-23 Oct in Montreal. It's co-located with OOPSLA2007 and sharing registration services at: http://www.regmaster.com/conf/oopsla2007.html =============== Preliminary Program =============== Tutorials and workshops ================= Introduction to APL (Ray Polivka) Object Oriented for APLers, APL for OOers (Dan Baronet) ... others in the works Presentations ========= No Experience Necessary: Hire for Aptitude - Train for Skills (Brooke Allen) Compiling APL with APEX (Robert Bernecky) APL, Bioinformatics, Cancer Research (Ken Fordyce) Generic Programming on Nesting Structure (Stephan Herhut, Sven-Bodo Scholz, Clemens Grelck) Interactive Array-Based Languages and Financial Research (Devon McCormick) Array vs Non-Array Approaches to Programming Problems (Devon McCormick) Design Issues in APL/OO Interfacing (Richard Nabavi) Arrays of Objects, or Arrays within Objects (Richard Nabavi) Competing, with J (John Randall) ... 
others in the works There is still room for oral or poster presentations that will not be contributed papers (published in a special issue of APL Quote Quad). If you would like to make an oral presentation or a poster, contact Lynne Shaw (Shaw at acm.org). ACM SIGAPL has broadened its scope to all Array Programming Languages and NumPy/SciPy representation would be welcome. From lucasjb at csse.unimelb.edu.au Fri Sep 14 01:08:17 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Fri, 14 Sep 2007 15:08:17 +1000 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <46E4E486.100@ar.media.kyoto-u.ac.jp> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> <46E4E486.100@ar.media.kyoto-u.ac.jp> Message-ID: <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> On 10/09/2007, at 4:30 PM, David Cournapeau wrote: > Do I understand correctly that you want to compile numpy/scipy with > gcc, using sunperf ? I am not familiar with non gnu devtools under > solaris, so I don't know if sunperf libraries are supposed to work > with > gcc ? All I really want is a working NumPy and SciPy installation on Solaris 9/x86. Seeing as SciPy recommends vendor optimised BLAS and LAPACK routines and I don't want to build ATLAS, I figure I should use Sun's Performance Library (sunperf). As far as I can tell, it won't be possible to use GCC because of the compiler flags needed for sunperf (see below). > My main guess, though, would be that sunperf requires more than just > -lblas option to link; generally, you need some other link flags. > Since > the default error message of the linker is non explanatory, we need > more > info. What does nm > /local/cat2/apps-archive/SUNWspro-12/prod/lib/libblas.so returns > (assuming libblas.so is the name of the library) ?
What I've read in the sunperf user guide[1] suggests I'm supposed to use the flags "-dalign", "-xlic_lib=sunperf" and "-xarch=generic" for x86 architecture, these flags aren't supported by GCC. 1. http://docs.sun.com/source/819-5268/plug_intro.html#0_pgfId-11912 Sunperf apparently contains "enhanced" versions of LAPACK, BLAS and various other libraries. The .so is 18MB so the nm output is understandably long, this output was via 'grep FUNC'. I assume you just want to see function names? http://www.cs.mu.oz.au:80/~lucasjb/nm_output_func_sunperf_so_3.txt I've been having a really difficult time with NumPy's distutils system. So much so that I've resorted to writing my own sunccompiler.py customisation so that I can set the CFLAGS and LDFLAGS that I want. Regardless, I continue to end up at the same unresolved symbols dead-end and I'm way over my head in compiler and linker options. Thanks anyway for your interest David. Regards, -- Lucas Barbuto From robert.kern at gmail.com Fri Sep 14 01:15:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Sep 2007 00:15:34 -0500 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> <46E4E486.100@ar.media.kyoto-u.ac.jp> <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> Message-ID: <46EA18F6.8040900@gmail.com> Lucas Barbuto wrote: > On 10/09/2007, at 4:30 PM, David Cournapeau wrote: >> Do I understand correctly that you want to compiler numpy/scipy with >> gcc, using sunperf ? I am not familiar with non gnu devtools under >> solaris, so I don't know if sunperf libraries are supposed to work >> with >> gcc ? > > All I really want is a working NumPy and SciPy installation on > Solaris 9/x86. Seeing as SciPy recommends vendor optimised BLAS and > LAPACK routines It depends entirely on what you need, not scipy. 
Some people require fast linear algebra, some don't. Having fast linear algebra is nice, but if it's getting in the way of having *any* linear algebra and your problems don't require really fast linear algebra, just use the reference BLAS and be done with it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From amcmorl at gmail.com Fri Sep 14 01:17:34 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Fri, 14 Sep 2007 17:17:34 +1200 Subject: [SciPy-user] Debian Live CD Message-ID: Hi all, To give myself enough parallel computing power to run some simulations I need to get this thesis out the door, I've been given permission to take over a computer lab in the weekends and evenings... To run the computers, I've constructed a debian live CD (from the current testing distribution), including: python 2.4.4 numpy 1.0.1 scipy 0.5.2 matplotlib 0.90.1 ipython 0.8.1 Scientific 2.4.11 mayavi2 2.0.2a1.dev_r14175, and its stable ets dependencies + g++,cvs,ssh,emacs,kde-cor The CD seems to have one or two quirks (read: bugs), and is still a work in progress, but so far I haven't encountered anything show-stopping. It could be useful for giving demonstrations of the software to people when you don't have your own computer handy. The iso is just over 300 MB, and if anyone's interested, let me know and I can try to make it available somehow (any thoughts on this?), or provide instructions to build your own - the debian live-helper system makes this really easy. There's obviously still room to add some more packages, so we/I could look at augmenting it if anyone has any simple requests (for example, I've just realised PIL should be in there - that'll be in the next iteration). Angus. 
-- AJC McMorland, PhD Student Physiology, University of Auckland From david at ar.media.kyoto-u.ac.jp Fri Sep 14 03:15:52 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Sep 2007 16:15:52 +0900 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> <46E4E486.100@ar.media.kyoto-u.ac.jp> <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> Message-ID: <46EA3528.7060302@ar.media.kyoto-u.ac.jp> Lucas Barbuto wrote: > On 10/09/2007, at 4:30 PM, David Cournapeau wrote: >> Do I understand correctly that you want to compiler numpy/scipy with >> gcc, using sunperf ? I am not familiar with non gnu devtools under >> solaris, so I don't know if sunperf libraries are supposed to work >> with >> gcc ? > > All I really want is a working NumPy and SciPy installation on > Solaris 9/x86. Seeing as SciPy recommends vendor optimised BLAS and > LAPACK routines and I don't want to build ATLAS, Is there a reason why ? Building dev versions (3.7.*) of ATLAS works pretty well now and is not too difficult (but with gcc). > I figure I should > use Sun's Performance Library (sunperf). As far as I can tell, it > won't be possible to use GCC because of the compiler flags needed for > sunperf (see below). > >> My main guess, though, would be that sunperf requires more than just >> -lblas option to link; generally, you need some other link flags. >> Since >> the default error message of the linker is non explanatory, we need >> more >> info. What does nm >> /local/cat2/apps-archive/SUNWspro-12/prod/lib/libblas.so returns >> (assuming libblas.so is the name of the library) ? > > What I've read in the sunperf user guide[1] suggests I'm supposed to > use the flags "-dalign", "-xlic_lib=sunperf" and "-xarch=generic" for > x86 architecture, these flags aren't supported by GCC. > > 1. 
http://docs.sun.com/source/819-5268/plug_intro.html#0_pgfId-11912 In other words, you have to find out whether sunperf can be used with gcc. One possible way would be to compile BLAS testers with gcc and trying to link them to sunperf, and see if it works (finding which flags are necessary: maybe using equivalent of -dalign and -xarch with gcc is enough). > > Sunperf apparently contains "enhanced" versions of LAPACK, BLAS and > various other libraries. The .so is 18MB so the nm output is > understandably long, this output was via 'grep FUNC'. I assume you > just want to see function names? > > http://www.cs.mu.oz.au:80/~lucasjb/nm_output_func_sunperf_so_3.txt > > I've been having a really difficult time with NumPy's distutils > system. We are between gentlemen, so I won't say the words which come to my mind when I think about distutils (the one of python; numpy.distutils is trying hard to circumvent distutils limitations). For distutils' defense, what scipy/numpy need go much further than the usual need of python extension; but extending distutils is really a PITA, undocumented to say the least, and unmaintained. > So much so that I've resorted to writing my own > sunccompiler.py customisation so that I can set the CFLAGS and > LDFLAGS that I want. Regardless, I continue to end up at the same > unresolved symbols dead-end and I'm way over my head in compiler and > linker options. I think the best would be for someone knowledgable about numpy/scipy to have access to an environment similar to yours. Problem is, this is non free OS, quite a pain to install (last time I tried at least). I have downloaded a vmware image of nexenta, which is GNU above open solaris; according to http://blogs.sun.com/dbx/entry/installing_nexenta_gnu_solaris_on, I can install sunstudio on it, which means sunperf libraries, right ? Do you think this corresponds to your environment ? I don't want to waste time on it if this does not help you. 
http://www.gnusolaris.org/gswiki/Download My lab has some solaris on SPARC, but I am afraid only for servers, hence not available for any compilation. cheers, David From raphael.langella at steria.cnes.fr Fri Sep 14 05:56:27 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 14 Sep 2007 11:56:27 +0200 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris Message-ID: <092785B790DCD043BA45401EDA43D9B50121E05D@cst-xch-003.cnesnet.ad.cnes.fr> > -----Message d'origine----- > De : numpy-discussion-bounces at scipy.org > [mailto:numpy-discussion-bounces at scipy.org] De la part de > David Cournapeau > Envoyé : vendredi 14 septembre 2007 09:27 > À : Discussion of Numerical Python > Objet : Re: [Numpy-discussion] Compiling numpy with 64 bits > support under Solaris > > Langella Raphael wrote: > > Hi, > > I'm trying to compile numpy with 64 bits support under > Sparc/Solaris > > 8. I've already compiled Python 2.5.1 with 64 bits. I've set up my > > environnement with : > > > > export CC="gcc -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1" > > export CXX="g++ -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1" > > export LDFLAGS='-mcpu=v9 -m64' > > export LDDFLAGS='-mcpu=v9 -m64 -G' > > > > > I am afraid this won't work really well, because it > overwrites LDFLAGS. > Unfortunately, AFAIK, there is no easy way to change flags > used for compilation and linking. I don't think this is > linked to 32 vs 64 bits problem (though I may be wrong; I > don't know much about solaris). > > I also compiled blas and lapack in 64 bits. I know I don't > need them > > for numpy, but I will soon when I'll compile scipy. > > I've tried to set up my site.cfg, to use libfblas and > libflapack and > > it didn't work. I tried libsunperf and got the same result : > > > See > http://projects.scipy.org/pipermail/scipy-user/2007-September/ > 013580.html > (the problem being about the sun compilers, I think this > applies to sparc as well).
Thanks, I hadn't noticed my old thread had been revived. I would post in it, but I only recently subscribed, so I've got no mail to answer to. Last time, I gave up trying to link with libsunperf. I linked numpy and scipy against standard blas and lapack and it worked well. But this time, with 64 bits, even standard blas and lapack give me errors. I've been able to compile numpy with the integrated blas, but for scipy, I really need it, and I still run into linking problems. The fortran code gets compiled in 32 bits. How do I pass flags to g77? Lucas, I'm very interested in your custom sunccompiler.py (damn distutils!), as I'll probably need linker flags. Could you post it, please? Thanks Raphaël From raphael.langella at steria.cnes.fr Fri Sep 14 06:33:16 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 14 Sep 2007 12:33:16 +0200 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris Message-ID: <092785B790DCD043BA45401EDA43D9B50121E060@cst-xch-003.cnesnet.ad.cnes.fr> > -----Message d'origine----- > De : Langella Raphael > Envoyé : vendredi 14 septembre 2007 11:56 > À : 'SciPy Users List' > Objet : RE: [Numpy-discussion] Compiling numpy with 64 bits > support under Solaris > > > -----Message d'origine----- > > De : numpy-discussion-bounces at scipy.org > > [mailto:numpy-discussion-bounces at scipy.org] De la part de David > > Cournapeau Envoyé : vendredi 14 septembre 2007 09:27 À : > Discussion of > > Numerical Python Objet : Re: [Numpy-discussion] Compiling > numpy with > > 64 bits support under Solaris > > > > Langella Raphael wrote: > > > Hi, > > > I'm trying to compile numpy with 64 bits support under > > Sparc/Solaris > > > 8. I've already compiled Python 2.5.1 with 64 bits.
I've > set up my > > > environnement with : > > > > > > export CC="gcc -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1" > > > export CXX="g++ -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1" > > > export LDFLAGS='-mcpu=v9 -m64' > > > export LDDFLAGS='-mcpu=v9 -m64 -G' > > > > > > > > I am afraid this won't work really well, because it overwrites > > LDFLAGS. > > Unfortunately, AFAIK, there is no easy way to change flags used for > > compilation and linking. I don't think this is linked to 32 > vs 64 bits > > problem (though I may be wrong; I don't know much about solaris). > > > I also compiled blas and lapack in 64 bits. I know I don't > > need them > > > for numpy, but I will soon when I'll compile scipy. > > > I've tried to set up my site.cfg, tu use libfblas and > > libflapack and > > > it didn't work. I tried libsunperf and got the same result : > > > > > See > > http://projects.scipy.org/pipermail/scipy-user/2007-September/ > > 013580.html > > (the problem being about the sun compilers, I think this applies to > > sparc as well). > > Thanks, I haven't noticed my old thread as been revived. I > would post in it, but I only recently subscribed, so I've got > no mail to answer to. > Last time, I gave up trying to link with libsunperf. I linked > numpy and scipy against standard blas and lapack and it worked well. > But this time, with 64 bits, even standard blas and lapack > gives me errors. I've been able to compile numpy with the > integrated blas, but for scipy, I really need it, and I still > run into linking problem. The fortran code gets compiled in > 32 bits. How to pass flags to g77 ? I set F77FLAGS and G77FLAGS and one of them worked, all my objects are compiled in 64 bits. 
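Collecting the flags scattered through this thread into one place, the 64-bit GNU build environment amounts to roughly the following. This is a sketch only: the exact set of variables numpy.distutils and g77 honour varies, and F77FLAGS is simply the one reported to work here.

```shell
# 64-bit GNU toolchain setup for building numpy/scipy on Sparc/Solaris.
# Values come from this thread; adjust -mcpu for your hardware.
export CC="gcc -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1"
export CXX="g++ -mcpu=v9 -m64 -D_LARGEFILE64_SOURCE=1"
export LDFLAGS="-mcpu=v9 -m64"
export F77FLAGS="-mcpu=v9 -m64"   # makes g77 emit 64-bit objects
# then: python setup.py build
```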
But I've got this linking error (note that I'm using GNU compilers and standard blas and lapack):

/outils_std/csw/gcc3/bin/g77 -mcpu=v9 -m64 build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/drfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zrfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zfftnd.o build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o -L/outils_std/csw/gcc3/bin/../lib/gcc/sparc-sun-solaris2.8/3.4.4/sparcv9 -Lbuild/temp.solaris-2.8-sun4u-2.5 -ldfftpack -lg2c -o build/lib.solaris-2.8-sun4u-2.5/scipy/fftpack/_fftpack.so

Undefined                       first referenced
 symbol                             in file
PyString_AsString               build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyArg_ParseTupleAndKeywords     build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
Py_FindMethod                   build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyExc_ImportError               build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyCObject_AsVoidPtr             build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyString_ConcatAndDel           build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyComplex_Type                  build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyString_FromString             build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyExc_RuntimeError              build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyDict_GetItemString            build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PySequence_GetItem              build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyString_Type                   build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyObject_GetAttrString          build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyErr_Occurred                  build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyExc_ValueError                build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
MAIN__                          /outils_std/csw/gcc3/bin/../lib/gcc/sparc-sun-solaris2.8/3.4.4/../../../sparcv9/libfrtbegin.a(frtbegin.o)
PyErr_SetString                 build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
Py_BuildValue                   build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyDict_DelItemString            build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyInt_Type                      build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyDict_SetItemString            build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyErr_Format                    build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyType_Type                     build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyCObject_Type                  build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PySequence_Check                build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyErr_Print                     build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyErr_Clear                     build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyModule_GetDict                build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyExc_TypeError                 build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyType_IsSubtype                build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyMem_Free                      build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyExc_AttributeError            build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyImport_ImportModule           build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
_Py_NoneStruct                  build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyObject_Type                   build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyNumber_Int                    build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyObject_Str                    build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
PyErr_NewException              build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyCObject_FromVoidPtr           build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
Py_InitModule4_64               build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o
PyDict_New                      build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
_PyObject_New                   build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o
ld: fatal: Symbol referencing errors.
No output written to build/lib.solaris-2.8-sun4u-2.5/scipy/fftpack/_fftpack.so collect2: ld returned 1 exit status error: Command "/outils_std/csw/gcc3/bin/g77 -mcpu=v9 -m64 build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/Lib/fftpack/_fftpackmodule.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/drfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zrfft.o build/temp.solaris-2.8-sun4u-2.5/Lib/fftpack/src/zfftnd.o build/temp.solaris-2.8-sun4u-2.5/build/src.solaris-2.8-sun4u-2.5/fortranobject.o -L/outils_std/csw/gcc3/bin/../lib/gcc/sparc-sun-solaris2.8/3.4.4/sparcv9 -Lbuild/temp.solaris-2.8-sun4u-2.5 -ldfftpack -lg2c -o build/lib.solaris-2.8-sun4u-2.5/scipy/fftpack/_fftpack.so" failed with exit status 1 Raphaël From david at ar.media.kyoto-u.ac.jp Fri Sep 14 06:36:50 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Sep 2007 19:36:50 +0900 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris In-Reply-To: <092785B790DCD043BA45401EDA43D9B50121E060@cst-xch-003.cnesnet.ad.cnes.fr> References:
<092785B790DCD043BA45401EDA43D9B50121E060@cst-xch-003.cnesnet.ad.cnes.fr> Message-ID: <46EA6442.8040509@ar.media.kyoto-u.ac.jp> Langella Raphael wrote: > > > I set F77FLAGS and G77FLAGS and one of them worked, all my objects are compiled in 64 bits. But I've got this linking error (note that I'm using GNU compilers and standard blas and lapack) : > > This one looks easy: this is because you need to tell the linker you are building a shared library (you can tell it is not doing so, because it is looking for MAIN). I asked before whether sunperf is included in sun studio? Can you confirm it? If it is, maybe I can easily run a virtual machine with an open solaris with sunperf + gcc quite easily. cheers, David From raphael.langella at steria.cnes.fr Fri Sep 14 06:54:36 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 14 Sep 2007 12:54:36 +0200 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris Message-ID: <092785B790DCD043BA45401EDA43D9B50121E061@cst-xch-003.cnesnet.ad.cnes.fr> > -----Original Message----- > From: scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] On behalf of David Cournapeau > Sent: Friday, 14 September 2007 12:37 > To: SciPy Users List > Subject: Re: [SciPy-user] [Numpy-discussion] Compiling numpy > with 64 bits support under Solaris > > Langella Raphael wrote: > > > > > > I set F77FLAGS and G77FLAGS and one of them worked, all my > objects are compiled in 64 bits. But I've got this linking > error (note that I'm using GNU compilers and standard blas > and lapack) : > > > > > This one looks easy: this is because you need to tell the > linker you are building a shared library (you can tell it > does not because it is looking for MAIN). OK, but how to pass options to the linker? I thought it wasn't possible. > > I asked before whether sunperf is included in sun studio ? > Can you confirm it ?
If it is, maybe I can easily run a > virtual machine with an open solaris with sunperf + gcc quite easily. Yes, I confirm sunperf is included in sun studio. So you should be able to emulate Lucas environment (Solaris/x86), but I'm running sparc. Anyway, it could be helpful for me as well. Thanks. > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From calhoun at amath.washington.edu Fri Sep 14 06:55:14 2007 From: calhoun at amath.washington.edu (Donna Calhoun) Date: Fri, 14 Sep 2007 10:55:14 +0000 (UTC) Subject: [SciPy-user] BLAS and srotgm References: <46E6D84A.7020800@gmail.com> Message-ID: Robert Kern gmail.com> writes: ---------------------------------------------------------------------------- >> .............(lots cut out here).......................... > >> line 14, in > >> from scipy.linalg import fblas > >> ImportError: > >> /usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ > > fblas.so: > >> undefined symbol: srotmg_ > >> ---------------------------------------------------------------------------- > Can you give us the contents of your site.cfg, the locations of all of the > libblas's on your system, the output from your build, and the g77 command that > worked? That might help us to explain the problem at least, if not fix it. > Yes, here is the short answer to your question. 
Here is the offending g77 command : # Original command : long_dir = build/temp.linux-i686-2.5/build/src.linux-i686-2.5 g77 -shared -L/usr/lib -lutil -lc -lpthread -L/usr/local/tk/lib -ltk8.4 -L/usr/local/tcl/lib -ltcl8.4 $long_dir/build/src.linux-i686-2.5/Lib/lib/blas/fblasmodule.o $long_dir/fortranobject.o $long_dir/Lib/lib/blas/fblaswrap.o $long_dir/build/src.linux-i686-2.5/Lib/lib/blas/fblas-f2pywrappers.o -L/usr/local/lapack-3.1.1 -L/usr/local/Python-2.5-with_tk/lib -L/usr/local/Python-2.5-with_tk/lib/python2.5/lib-dynload -Lbuild/temp.linux-i686-2.5 -lblas -lblas -lpython2.5 -lz -lg2c -o build/lib.linux-i686-2.5/scipy/lib/blas/fblas.so I have several 'libblas' on my system (and most of them are old) /usr/lib/libblas.a /usr/lib/libblas.so /usr/lib/libblas.so.3 /usr/lib/libblas.so.3.0 /usr/lib/libblas.so.3.0.3 /usr/local/lapack-3.1.1/libblas.a /usr/local/lapack-3.1.1/libblas.so and the scipy build found the first one. This is first in the g77 command, and so g77 quit looking for any other libraries. That library didn't have 'srotmg' in it. (As to WHY I had libc, etc in my link path, that will have to wait for a longer post!) I removed the library flags for libc and libutil, and the build picked up the correct blas library. My site.cfg file (I believe, although I don't have exactly the one I used) was [DEFAULT] library_dirs = /usr/local/Python-2.5/lib libraries = python2.5 [fftw3] include_dirs = /usr/local/fftw-3.1.2/include library_dirs = /usr/local/fftw-3.1.2/lib libraries = fftw3 [blas] library_dirs = /usr/local/lapack-3.1.1 libraries = blas [lapack] library_dirs = /usr/local/lapack-3.1.1 libraries = lapack scipy found the libraries: lapack_info: FOUND: libraries = ['lapack', 'lapack'] library_dirs = ['/usr/local/lapack-3.1.1'] language = f77 blas_info: FOUND: libraries = ['blas', 'blas'] library_dirs = ['/usr/local/lapack-3.1.1'] language = f77 but they just ended up in the wrong place in the g77 command.
Thanks again for your help, Donna From david at ar.media.kyoto-u.ac.jp Fri Sep 14 07:56:08 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Sep 2007 20:56:08 +0900 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris In-Reply-To: <092785B790DCD043BA45401EDA43D9B50121E061@cst-xch-003.cnesnet.ad.cnes.fr> References: <092785B790DCD043BA45401EDA43D9B50121E061@cst-xch-003.cnesnet.ad.cnes.fr> Message-ID: <46EA76D8.2090507@ar.media.kyoto-u.ac.jp> Langella Raphael wrote: > Yes, I confirm sunperf is included in sun studio. So you should be able to emulate Lucas environment (Solaris/x86), but I'm running sparc. Anyway, it could be helpful for me as well. Thanks. > I think the problem is the same on sparc and x86. Actually, there is a linux version of the sun compilers (and maybe sunperf as well), which I think should behave more or less the same. I will look into it, David From robert.kern at gmail.com Fri Sep 14 12:00:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Sep 2007 11:00:27 -0500 Subject: [SciPy-user] BLAS and srotgm In-Reply-To: References: <46E6D84A.7020800@gmail.com> Message-ID: <46EAB01B.1000904@gmail.com> Donna Calhoun wrote: > Robert Kern gmail.com> writes: > > ---------------------------------------------------------------------------- >>> .............(lots cut out here).......................... > >>>> line 14, in >>>> from scipy.linalg import fblas >>>> ImportError: >>>> /usr/local/Python-2.5-with_tk/lib/python2.5/site-packages/scipy/linalg/ >>> fblas.so: >>>> undefined symbol: srotmg_ >>>> ---------------------------------------------------------------------------- > > >> Can you give us the contents of your site.cfg, the locations of all of the >> libblas's on your system, the output from your build, and the g77 command that >> worked? That might help us to explain the problem at least, if not fix it. >> > > Yes, here is the short answer to your question. 
Here is the offending g77 > command : > > # Original command : > > long_dir = build/temp.linux-i686-2.5/build/src.linux-i686-2.5 > > g77 -shared -L/usr/lib -lutil -lc -lpthread -L/usr/local/tk/lib -ltk8.4 > -L/usr/local/tcl/lib -ltcl8.4 > $long_dir/build/src.linux-i686-2.5/Lib/lib/blas/fblasmodule.o > $long_dir/fortranobject.o > $long_dir/Lib/lib/blas/fblaswrap.o > $long_dir/build/src.linux-i686-2.5/Lib/lib/blas/fblas-f2pywrappers.o > -L/usr/local/lapack-3.1.1 -L/usr/local/Python-2.5-with_tk/lib > -L/usr/local/Python-2.5-with_tk/lib/python2.5/lib-dynload > -Lbuild/temp.linux-i686-2.5 -lblas -lblas -lpython2.5 -lz -lg2c -o > build/lib.linux-i686-2.5/scipy/lib/blas/fblas.so > > I have several 'libblas' on my system (and most of them are old) > > /usr/lib/libblas.a > /usr/lib/libblas.so > /usr/lib/libblas.so.3 > /usr/lib/libblas.so.3.0 > /usr/lib/libblas.so.3.0.3 > /usr/local/lapack-3.1.1/libblas.a > /usr/local/lapack-3.1.1/libblas.so > > and the scipy build found the first one. This is first in the g77 command, and > so g77 quit looking for any other libraries. That library didn't have > 'srotmg' in it. (As to WHY I had libc, etc in my link path, that will have > to wait for a longer post!) > > I removed the library flags to libc, and libutil and the build picked up the > correct blas library. I'm curious as to why you had -L/usr/lib in there. Did it come from Python's build or do you have an LDFLAGS environment variable sitting around that's interfering? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From lucasjb at csse.unimelb.edu.au Sun Sep 16 22:03:20 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Mon, 17 Sep 2007 12:03:20 +1000 Subject: [SciPy-user] [Numpy-discussion] Compiling numpy with 64 bits support under Solaris In-Reply-To: <092785B790DCD043BA45401EDA43D9B50121E05D@cst-xch-003.cnesnet.ad.cnes.fr> References: <092785B790DCD043BA45401EDA43D9B50121E05D@cst-xch-003.cnesnet.ad.cnes.fr> Message-ID: Hi Raphaël, On 14/09/2007, at 7:56 PM, Langella Raphael wrote: > Lucas, I'm very interested in your custom sunccompiler.py (damn > distutils!), as I'll probably need linker flags. Could you post it, > please? Well, it's nothing to get excited about, I basically copied the numpy/distutils/intelccompiler.py file and changed a few names. To my surprise, it seemed to do what I expected, but it was really a shot in the dark and is almost certainly not "the right way" to go about things. You'll probably have a much better time with David's latest effort, but see attached FWIW. Regards, -- Lucas Barbuto -------------- next part -------------- A non-text attachment was scrubbed... Name: sunccompiler.py Type: application/applefile Size: 373 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sunccompiler.py Type: text/x-python-script Size: 942 bytes Desc: not available URL: -------------- next part -------------- From lucasjb at csse.unimelb.edu.au Sun Sep 16 22:11:31 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Mon, 17 Sep 2007 12:11:31 +1000 Subject: [SciPy-user] Initial support for sunperf and sun compilers (linux + solaris) In-Reply-To: <46ED15E3.2050507@ar.media.kyoto-u.ac.jp> References: <46ED15E3.2050507@ar.media.kyoto-u.ac.jp> Message-ID: <08AE097F-2DA0-49C7-8947-D67587A5BD9D@csse.unimelb.edu.au> Hi David, Thanks for your continued interest in this.
On 16/09/2007, at 9:39 PM, David Cournapeau wrote: > Ok, I created a numpy branch to implement this, and get something > working. This is still really rough, though. Please check out the > numpy.sunperf branch: > > svn co http://svn.scipy.org/svn/numpy/branches/numpy.sunperf Just a quick correction, I had to remove branches/ from the above URL. > SUNPERF=SUNPERFROOT python setup.py build --compiler=sun -- > fcompiler=sun I used your test_sunperf.sh script which completed the build without errors, testing didn't go 100% smoothly, I don't know if those warnings are serious, output below. > I have not tested scipy either, so that something you could try > also if numpy works. Will do. Regards, -- Lucas Barbuto Numpy version 1.0.4.dev4045 Python version 2.5 (r25:51908, Mar 13 2007, 12:19:11) [GCC 3.4.5] Found 10/10 tests for numpy.core.defmatrix Found 36/36 tests for numpy.core.ma Found 218/218 tests for numpy.core.multiarray Found 65/65 tests for numpy.core.numeric Found 31/31 tests for numpy.core.numerictypes Found 12/12 tests for numpy.core.records Found 6/6 tests for numpy.core.scalarmath Found 14/14 tests for numpy.core.umath Found 4/4 tests for numpy.ctypeslib Found 5/5 tests for numpy.distutils.misc_util Found 1/1 tests for numpy.fft.fftpack Found 3/3 tests for numpy.fft.helper Found 9/9 tests for numpy.lib.arraysetops Found 46/46 tests for numpy.lib.function_base Found 5/5 tests for numpy.lib.getlimits Found 4/4 tests for numpy.lib.index_tricks Found 3/3 tests for numpy.lib.polynomial Found 49/49 tests for numpy.lib.shape_base Found 13/13 tests for numpy.lib.twodim_base Found 43/43 tests for numpy.lib.type_check Found 1/1 tests for numpy.lib.ufunclike Found 32/32 tests for numpy.linalg Found 2/2 tests for numpy.random Found 0/0 tests for __main__ ........................................................................ ........................................................................ ........................................................................ 
........................................................................ ...........................Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf .......Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf .Warning: invalid value encountered in absolute Warning: invalid value encountered in absolute Warning: invalid value encountered in less_equal ....Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf ........................................................................ ........................................................................ ........................................................................ ...............................................................Warning: invalid value encountered in isfinite .Warning: invalid value encountered in isfinite ...Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf .Warning: invalid value encountered in isinf .........................Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf Warning: invalid value encountered in isinf ................................... 
---------------------------------------------------------------------- Ran 677 tests in 0.884s OK From david at ar.media.kyoto-u.ac.jp Mon Sep 17 00:29:56 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 17 Sep 2007 13:29:56 +0900 Subject: [SciPy-user] Initial support for sunperf and sun compilers (linux + solaris) In-Reply-To: <08AE097F-2DA0-49C7-8947-D67587A5BD9D@csse.unimelb.edu.au> References: <46ED15E3.2050507@ar.media.kyoto-u.ac.jp> <08AE097F-2DA0-49C7-8947-D67587A5BD9D@csse.unimelb.edu.au> Message-ID: <46EE02C4.2030807@ar.media.kyoto-u.ac.jp> Lucas Barbuto wrote: > Hi David, > > Thanks for your continued interest in this. > > On 16/09/2007, at 9:39 PM, David Cournapeau wrote: > >> Ok, I created a numpy branch to implement this, and get something >> working. This is still really rough, though. Please check out the >> numpy.sunperf branch: >> >> svn co http://svn.scipy.org/svn/numpy/branches/numpy.sunperf >> > > Just a quick correction, I had to remove branches/ from the above URL. > Argh, I totally screwed up on this one, I created the branch at the wrong place. > >> SUNPERF=SUNPERFROOT python setup.py build --compiler=sun -- >> fcompiler=sun >> > > I used your test_sunperf.sh script which completed the build without > errors, testing didn't go 100% smoothly, I don't know if those > warnings are serious, output below. > There are several things missing in the current implementation: first, it should look for sunperf automatically, not using this SUNPERF thing, and also, the sun compilers do not use any flags. I am not sure if it is because I did something wrong to disable them, or if I need to add them anyway. For the warning, the flags may be of some importance, I don't know. I don't remember having seen so many warnings on Linux, though. Would be interesting to know what you get on sparc. I should have said that for now, this is mainly random hacks on a platform I don't really know, so I would not use it without more testing first. 
David From david at ar.media.kyoto-u.ac.jp Mon Sep 17 03:21:02 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 17 Sep 2007 16:21:02 +0900 Subject: [SciPy-user] A first proposal for dataset organization Message-ID: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> Hi there, A few months ago, we started to discuss various issues around datasets for numpy/scipy. In the context of my Summer of Code for machine learning tools in Python, I had the opportunity to tackle the issue concretely. Before announcing a first alpha version of my work, I would like to gather comments and critiques of the following proposal for dataset organization. The following proposal is also available in svn: http://projects.scipy.org/scipy/scikits/browser/trunk/learn/scikits/learn/datasets/DATASET_PROPOSAL.txt Dataset for scipy: design proposal ================================== One of the things numpy/scipy is missing now is a set of datasets, available for demos, courses, etc. For example, R has a set of datasets available in its core. The expected usages of the datasets are the following: - machine learning: e.g. the data also contain class information (discrete or continuous) - descriptive statistics - others? That is, a dataset is not only data, but also some meta-data. The goal of this proposal is to propose common practices for organizing the data, in a way which is straightforward and does not prevent specific usages of the data. Organization ------------ A preliminary set of datasets is available at the following address: http://projects.scipy.org/scipy/scikits/browser/trunk/learn/scikits/learn/datasets Each dataset is a directory and defines a python package (i.e. it has an __init__.py file). Each package is expected to define the function load, returning the corresponding data. For example, to access the dataset data1, you should be able to do: >>> from datasets.data1 import load >>> d = load() # -> d contains the data.
load can do whatever it wants: fetching data from a file (python script, csv file, etc...), from the internet, etc... Some special variables must be defined for each package, each containing a python string: - COPYRIGHT: copyright information - SOURCE: where the data are coming from - DESCSHORT: short description - DESCLONG: long description - NOTE: some notes on the datasets. Format of the data ------------------ Here, I suggest a common practice for the value returned by the load function. Instead of using classes to provide meta-data, I propose to use a dictionary of arrays, with some values mandatory. The key goals are: - for people who just want the data, there is no extra burden ("just give me the data!" motto). - for people who need more, they can easily extract what they need from the returned values. More high-level abstractions can be built easily from this model. - all possible datasets should fit into this model. - In particular, I want to be able to convert our datasets to the Orange dataset representation (or that of other machine learning tools), and vice-versa. For the datasets to be useful in the learn scikit, which is the project which initiated this datasets package, the data returned by load has to be a dict with the following conventions: - 'data': this value should be a record array containing the actual data. - 'label': this value should be a rank 1 array of integers, containing the label index for each sample, that is, label[i] should be the label index of data[i]. If it contains float values, it is used for regression instead. - 'class': a record array such that class[name] gives the integer code for each class name. In other words, this makes the correspondence label name -> label index. As an example, I use the famous IRIS dataset: the dataset contains 3 classes of flowers, and for each flower, 4 measures (called attributes in machine learning vocabulary) are available (sepal width and length, petal width and length).
In this case, the values returned by load would be: - 'data': a record array containing all the flowers' measurements. For descriptive statistics, that's all you may need. You can easily find the attributes from the dtype (a function to find the attributes is also available: it returns a list of the attributes). - 'label': an array of integers (for class information) or floats (for regression). Each class is encoded as an integer, and label[i] returns this integer for sample i. - 'class': a record array, which returns the integer code for each class. For example, class['Iris-versicolor'] will return the integer used in label, and all samples i such that label[i] == class['Iris-versicolor'] are of the class 'Iris-versicolor'. This contains enough information to get all useful information through introspection and simple functions. I already implemented a small module to do basic things such as: - selecting only a subset of all samples. - selecting only a subset of the attributes (only sepal length and width, for example). - selecting only the samples of a given class. - a small summary of the dataset. This is implemented in less than 100 lines, which tends to show that the above design is not too simplistic. Remaining problems: ------------------- I see mainly two big problems: - if the dataset is big and cannot fit into memory, what kind of API do we want so as to avoid loading all the data into memory? Can we use memory-mapped arrays? - Missing data: I thought about subclassing both the record array and masked array classes, but I don't know if this is feasible, or even makes sense. I have the feeling that some data mining software uses NaN (for example, weka seems to use float internally), but this prevents them from representing integer data. Current implementation ---------------------- An implementation following the above design is available in scikits.learn.datasets.
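The dict convention above is easy to sketch concretely. The following is a minimal, hypothetical load() — the numbers and field names are toy values invented for illustration, not the real scikits.learn IRIS data:

```python
# Minimal, hypothetical sketch of the proposed load() convention.
import numpy as np

def load():
    # 'data': a record array with one named field per attribute.
    data = np.array([(5.1, 3.5), (7.0, 3.2), (6.3, 3.3)],
                    dtype=[('sepal_length', float), ('sepal_width', float)])
    # 'label': one integer class index per sample (label[i] labels data[i]).
    label = np.array([0, 1, 2])
    # 'class': maps each class name to the integer code used in 'label'.
    klass = np.array([(0, 1, 2)],
                     dtype=[('Iris-setosa', int),
                            ('Iris-versicolor', int),
                            ('Iris-virginica', int)])
    return {'data': data, 'label': label, 'class': klass}

# Selecting all samples of a given class, as described above:
d = load()
mask = d['label'] == d['class']['Iris-versicolor']
versicolor = d['data'][mask]
```

From such a dict, the selection utilities mentioned above reduce to one-line boolean indexing operations.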
If you installed scikits.learn, you can execute the file learn/utils/attrselect.py, which shows the information you can easily extract for now from this model. Also, once the above problems are solved, an arff converter will be available: arff is the format used by WEKA, and many datasets are available in this format: http://weka.sourceforge.net/wekadoc/index.php/en:ARFF_%283.5.4%29 http://www.cs.waikato.ac.nz/ml/weka/index_datasets.html Note ---- Although the datasets package emerged from the learn package, I try to keep it independent from everything else, so that once we agree on the remaining problems and where the package should go, it can easily be put elsewhere without too much trouble. cheers, David From openopt at ukr.net Mon Sep 17 15:33:24 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 17 Sep 2007 22:33:24 +0300 Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy Message-ID: <46EED684.8000308@ukr.net> Today the ALGENCAN developers announced that ALGENCAN has finally migrated from using Numeric to numpy. (ALGENCAN is a constrained non-linear optimization solver, based on Augmented Lagrangian multipliers.) Those who prefer to use it from the openopt environment and have encountered the error message with the ALGENCAN-openopt connection: struct has no "ndim" attribute should re-install ALGENCAN according to the new instructions (you can get them here ) Regards, Dmitrey dmitrey.kroshko at scipy.org http://scipy.org/scipy/scikits/wiki/OpenOpt From aisaac at american.edu Mon Sep 17 17:31:10 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 17 Sep 2007 17:31:10 -0400 Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy In-Reply-To: <46EED684.8000308@ukr.net> References: <46EED684.8000308@ukr.net> Message-ID: On Mon, 17 Sep 2007, dmitrey apparently wrote: > re-install ALGENCAN according to new instructions > (you can get that ones here > ) Well, this is good news.
I have the impression that this speed was in part a response to your queries. So, congrats!

Cheers,
Alan Isaac

From aisaac at american.edu Tue Sep 18 00:41:16 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 18 Sep 2007 00:41:16 -0400
Subject: [SciPy-user] Dickey-Fuller or other unit root tests?
Message-ID: 

I'm looking for a careful implementation of Dickey-Fuller or other unit root tests. Any clues? (Anything in SciPy?)

Thank you,
Alan Isaac

From fdu.xiaojf at gmail.com Tue Sep 18 03:20:28 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Tue, 18 Sep 2007 15:20:28 +0800
Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy
In-Reply-To: <46EED684.8000308@ukr.net>
References: <46EED684.8000308@ukr.net>
Message-ID: <46EF7C3C.9040204@gmail.com>

Hi dmitrey,

dmitrey wrote:
> Today ALGENCAN developers informed that ALGENCAN have finally migrated
> from using numeric to numpy.
> (ALGENCAN is constrained non-linear optimization solver, based on
> Augmented Lagrangian multipliers)
>
> Those who prefer to use that one from openopt environment and has
> encountered the error message with ALGENCAN-openopt connection:
>
> struct has no "ndim" attribute
>
> should re-install ALGENCAN according to new instructions
> (you can get that ones here
> )
>
> Regards, Dmitrey
> dmitrey.kroshko at scipy.org
> http://scipy.org/scipy/scikits/wiki/OpenOpt

I have read the instructions of ALGENCAN, but it seems there are only instructions for compilation using gcc and g77. So I'm wondering how to install ALGENCAN and its python interface on my windows box.

Are there compiled versions for Windows?

Thanks.

From fredmfp at gmail.com Tue Sep 18 03:23:08 2007
From: fredmfp at gmail.com (fred)
Date: Tue, 18 Sep 2007 09:23:08 +0200
Subject: [SciPy-user] arrays mean & NaN...
In-Reply-To: <46E6312F.4000704@ar.media.kyoto-u.ac.jp>
References: <46E2A82C.5070707@gmail.com> <46E2ED28.9000108@enthought.com> <46E32593.5000001@gmail.com> <46E60F22.4020406@gmail.com> <46E6312F.4000704@ar.media.kyoto-u.ac.jp>
Message-ID: <46EF7CDC.8070705@gmail.com>

David Cournapeau wrote:
> Wolfgang Kerzendorf wrote:
>
>> It would be a very good idea, I use nan very often and have trouble when
>> computing the mean. I think it would be better to have a switch in the
>> mean function to switch to ignoring nans. Could that be implemented in
>> other funtions like squaresum (ss) as well?
>>
>>
> The nanmean, nanmedian and nanstd already exist, but for some reason are
> not exposed at the package module:
>
> from scipy.stats.stats import nanmean, nanmedian, nanstd
> import numpy as N
> a = N.array([1., 2., N.nan])
> N.mean(a) # -> returns Nan
> nanmean(a) # -> returns 1.5, treating Nan as a missing value
>

So, shall we wait for it to be implemented in numpy, or use it as it is in scipy.stats? Eric said that they want to put it in numpy IIUC. But... when? ;-)

Cheers,

--
http://scipy.org/FredericPetit

From openopt at ukr.net Tue Sep 18 03:26:32 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 18 Sep 2007 10:26:32 +0300
Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy
In-Reply-To: <46EF7C3C.9040204@gmail.com>
References: <46EED684.8000308@ukr.net> <46EF7C3C.9040204@gmail.com>
Message-ID: <46EF7DA8.2090900@ukr.net>

you'd better contact the ALGENCAN developers with the question.
Regards, D.

fdu.xiaojf at gmail.com wrote:
> Hi dmitrey,
> dmitrey wrote:
> > Today ALGENCAN developers informed that ALGENCAN have finally migrated
> > from using numeric to numpy.
> > (ALGENCAN is constrained non-linear optimization solver, based on
> > Augmented Lagrangian multipliers)
> >
> > Those who prefer to use that one from openopt environment and has
> > encountered the error message with ALGENCAN-openopt connection:
> >
> > struct has no "ndim" attribute
> >
> > should re-install ALGENCAN according to new instructions
> > (you can get that ones here
> > )
> >
> > Regards, Dmitrey
> > dmitrey.kroshko at scipy.org
> > http://scipy.org/scipy/scikits/wiki/OpenOpt
>
> I have read the instructions of ALGENCAN, but it seems there are only
> instructions for compilation using gcc and g77. So I'm wondering how to
> install ALGENCAN and it's python interface on my windows box.
>
> Are there compiled version for windows?
>
> Thanks.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From fdu.xiaojf at gmail.com Tue Sep 18 04:23:33 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Tue, 18 Sep 2007 16:23:33 +0800
Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy
In-Reply-To: <46EF7DA8.2090900@ukr.net>
References: <46EED684.8000308@ukr.net> <46EF7C3C.9040204@gmail.com> <46EF7DA8.2090900@ukr.net>
Message-ID: <46EF8B05.2020903@gmail.com>

Hi dmitrey,

dmitrey wrote:
> you'd better contact ALGENCAN developers with the question.
> Regards, D.
>

I'm trying to contact the ALGENCAN developers. However, I have a question about openopt.

I have tried the example at http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/nlp_1.py. Since I couldn't get ALGENCAN to work for now, I tried with lincher (I just commented the line "r = p.solve('ALGENCAN')" and uncommented "r = p.solve('lincher')").
This is the only output:

starting solver lincher (BSD license) with problem unnamed
itn 0: Fk= 8596.39550577 maxResidual= 605859.237208

And after that, python crashed :-( (I have installed cvxopt 0.9.)

I think I must have done something wrong, so could you tell me more about how to install scikits and make them work? There is only a little about this on the web.

Thanks.

From fdu.xiaojf at gmail.com Tue Sep 18 07:19:33 2007
From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com)
Date: Tue, 18 Sep 2007 19:19:33 +0800
Subject: [SciPy-user] Compile ALGENCAN on windows
Message-ID: <46EFB445.4030205@gmail.com>

Hi all,

I tried to add the parameter "-mno-cygwin" to gcc and g77, and it worked except for the last step.

According to the "Quick Start" part in http://www.ime.usp.br/%7Eegbirgin/tango/py.php,

"""
Quick start

1) Copy the 7 files above.

2) Compile typing make or, manually, typing

g77 -O4 -c -fPIC -xf77-cpp-input algencan.f
gcc -O4 -c -fPIC -Df2cFortran -I$PYTHONDIR -I$PYTHONLIB/site-packages/numpy/core/include pywrapper.c
g77 -O4 -shared pywrapper.o algencan.o -o pywrapper.so

3) Run typing python algencanma.py.

4) If everything was ok, the output in the screen should be very similar to the content of the file algencan.out that comes with ALGENCAN.

5) Modify the toyprob.py file to solve your own problem.

Obs1: To use this interface you need to have downloaded ALGENCAN.

Obs2: It is assumed that (i) packages python, python-dev, python-numpy and python-numpy-dev are installed; that (ii) the environment variable PYTHONDIR points to the directory containing the include files needed for developing Python extensions and embedding the interpreter (for example, /usr/include/python2.5); and that (iii) the environment variable PYTHONLIB points to the directory containing the standard Python modules (for example, /usr/lib/python2.5).
""" Here is the commonds I have run and the output: $ export PYTHONDIR=d:/programs/python25/include $ export PYTHONLIB=d:/programs/python25/Lib $ g77 -mno-cygwin -O4 -c -fPIC -xf77-cpp-input algencan.f algencan.f:0: warning: -fPIC ignored for target (all code is position independent) /cygdrive/c/DOCUME~1/ADMINI~1/LOCALS~1/Temp/cc6oThyQ.f:0: warning: -fPIC ignored for target (all code is position independent) $ gcc -mno-cygwin -O4 -c -fPIC -Df2cFortran -Id:/programs/python25/include -ID:/programs/python25/Lib/site-packages/numpy/core/include pywrapper.c pywrapper.c:1: warning: -fPIC ignored for target (all code is position independent) $ g77 -mno-cygwin -O4 -shared pywrapper.o algencan.o -o pywrapper.so pywrapper.o:pywrapper.c:(.text+0x2f): undefined reference to `__imp__Py_InitModule4' pywrapper.o:pywrapper.c:(.text+0x3c): undefined reference to `__imp__PyImport_ImportModule' pywrapper.o:pywrapper.c:(.text+0x54): undefined reference to `__imp__PyObject_GetAttrString' pywrapper.o:pywrapper.c:(.text+0x64): undefined reference to `__imp__PyCObject_Type' pywrapper.o:pywrapper.c:(.text+0x92): undefined reference to `__imp__PyErr_Print' pywrapper.o:pywrapper.c:(.text+0x97): undefined reference to `__imp__PyExc_ImportError' pywrapper.o:pywrapper.c:(.text+0xab): undefined reference to `__imp__PyErr_SetString' pywrapper.o:pywrapper.c:(.text+0xbb): undefined reference to `__imp__PyCObject_AsVoidPtr' pywrapper.o:pywrapper.c:(.text+0xee): undefined reference to `__imp__PyExc_RuntimeError' pywrapper.o:pywrapper.c:(.text+0xf9): undefined reference to `__imp__PyErr_Format' pywrapper.o:pywrapper.c:(.text+0x497): undefined reference to `__imp__PyExc_ValueError' pywrapper.o:pywrapper.c:(.text+0x4ab): undefined reference to `__imp__PyErr_SetString' pywrapper.o:pywrapper.c:(.text+0x59a): undefined reference to `__imp__PyExc_ValueError' pywrapper.o:pywrapper.c:(.text+0x5ae): undefined reference to `__imp__PyErr_SetString' pywrapper.o:pywrapper.c:(.text+0x5e2): undefined reference to 
`__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0x630): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0x9ea): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0xa18): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0xb93): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0xbcf): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0xdb3): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0xde8): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0xf8c): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0xfb3): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0x10fc): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0x1138): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0x1306): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0x132d): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0x14a5): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0x14cc): undefined reference to `__imp__PyArg_ParseTuple' pywrapper.o:pywrapper.c:(.text+0x1743): undefined reference to `__imp__PyEval_CallFunction' pywrapper.o:pywrapper.c:(.text+0x18ee): undefined reference to `__imp__PyDict_Type' pywrapper.o:pywrapper.c:(.text+0x1903): undefined reference to `__imp__PyType_IsSubtype' pywrapper.o:pywrapper.c:(.text+0x1922): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x192e): undefined reference to `__imp__PyBool_Type' pywrapper.o:pywrapper.c:(.text+0x193c): undefined reference to `__imp__PyExc_ValueError' pywrapper.o:pywrapper.c:(.text+0x1947): undefined reference to `__imp__PyErr_SetString' 
pywrapper.o:pywrapper.c:(.text+0x1962): undefined reference to `__imp___Py_TrueStruct' pywrapper.o:pywrapper.c:(.text+0x1984): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x198d): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1998): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x19b4): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x19bd): undefined reference to `__imp__PyFloat_AsDouble' pywrapper.o:pywrapper.c:(.text+0x19c8): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x19ea): undefined reference to `__imp__PyExc_TypeError' pywrapper.o:pywrapper.c:(.text+0x1a3d): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1a46): undefined reference to `__imp__PyFloat_AsDouble' pywrapper.o:pywrapper.c:(.text+0x1a51): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1a6d): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1aa0): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1aa9): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1ab4): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1af5): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1afe): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1b09): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1b33): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1b3c): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1b47): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1b70): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1b79): 
undefined reference to `__imp__PyString_AsString' pywrapper.o:pywrapper.c:(.text+0x1bc1): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1bc7): undefined reference to `__imp__PyBool_Type' pywrapper.o:pywrapper.c:(.text+0x1be0): undefined reference to `__imp___Py_TrueStruct' pywrapper.o:pywrapper.c:(.text+0x1c03): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1c0c): undefined reference to `__imp__PyFloat_AsDouble' pywrapper.o:pywrapper.c:(.text+0x1c17): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1c3c): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1c45): undefined reference to `__imp__PyFloat_AsDouble' pywrapper.o:pywrapper.c:(.text+0x1c50): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1c7a): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1c83): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1c8e): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1cb7): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1cc0): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1ccb): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1cfa): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1d03): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1d0e): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1d2d): undefined reference to `__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1d36): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1d41): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1d66): undefined reference to 
`__imp__PyDict_GetItemString' pywrapper.o:pywrapper.c:(.text+0x1d6f): undefined reference to `__imp__PyInt_AsLong' pywrapper.o:pywrapper.c:(.text+0x1d7a): undefined reference to `__imp__PyErr_Occurred' pywrapper.o:pywrapper.c:(.text+0x1db0): undefined reference to `__imp___Py_NoneStruct' pywrapper.o:pywrapper.c:(.text+0x1df6): undefined reference to `__imp__PyDict_Type' pywrapper.o:pywrapper.c:(.text+0x1dff): undefined reference to `__imp__PyFunction_Type' pywrapper.o:pywrapper.c:(.text+0x1e7f): undefined reference to `__imp__PyArg_ParseTuple' collect2: ld returned 1 exit status Those undefined references __imp__* do reside in D:/programs/python25/libs/libpython25.a and D:/programs/python25/libs/python25.lib, so I tried to add the path to ld. $ g77 -mno-cygwin -LD:/programs/Python25/libs -lpython25 -O4 -shared pywrapper.o algencan.o -o pywrapper.so pywrapper.o:pywrapper.c:(.text+0x2f): undefined reference to `__imp__Py_InitModule4' pywrapper.o:pywrapper.c:(.text+0x1dff): undefined reference to `__imp__PyFunction_Type' pywrapper.o:pywrapper.c:(.text+0x1e7f): undefined reference to `__imp__PyArg_ParseTuple' collect2: ld returned 1 exit status But it didn't work. I'm quite sure ALGENCAN can be built with MinGW now. Could somebody give me some hints on how to build it ? Thanks. Xiao Jianfeng From alexander.borghgraef.rma at gmail.com Tue Sep 18 09:52:11 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Tue, 18 Sep 2007 15:52:11 +0200 Subject: [SciPy-user] Weird label behaviour in ndimage Message-ID: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> Hi all, I'm doing some image treatment using the ndimage module, and I've been playing a bit with the label function. I've encountered something very strange: a = zeros((5,4)) # a is a numpy.ndarray of int32 b = label(a)[0] # b is a numpy.ndarray of int32 -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.borghgraef.rma at gmail.com Tue Sep 18 09:59:26 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Tue, 18 Sep 2007 15:59:26 +0200 Subject: [SciPy-user] Weird label behaviour in ndimage In-Reply-To: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> References: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> Message-ID: <9e8c52a20709180659l460f86devc361a12060cc2d3a@mail.gmail.com> Sorry about that, shortcut malfunction. Let me rephrase that: Hi all, I'm doing some image treatment using the ndimage module, and I've been playing a bit with the label function. I've encountered something very strange: a = zeros((5,4)) # a is a numpy.ndarray of int32 b = ndimage.label(a)[0] # b is a numpy.ndarray of int32, and basically identical to a ndimage.maximum(a) # returns 0.0 ndimage.maximum(b) # returns the following error message: /usr/lib/python2.4/site-packages/scipy/ndimage/measurements.py in maximum(input, labels, index) 195 if labels.shape != input.shape: 196 raise RuntimeError, 'input and labels shape are not equal' --> 197 return _nd_image.statistics(input, labels, index, 4) 198 199 RuntimeError: data type not supported Don't get this. Objects a and b are the same data type, contain the same data type, print the same, can be added or multiplied together, but one of both cannot be used as input for image statistics functions. What am I missing here? -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From skraelings001 at gmail.com Tue Sep 18 10:21:22 2007
From: skraelings001 at gmail.com (Reynaldo Baquerizo)
Date: Tue, 18 Sep 2007 09:21:22 -0500
Subject: [SciPy-user] Weird label behaviour in ndimage
In-Reply-To: <9e8c52a20709180659l460f86devc361a12060cc2d3a@mail.gmail.com>
References: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> <9e8c52a20709180659l460f86devc361a12060cc2d3a@mail.gmail.com>
Message-ID: <46EFDEE2.1080706@gmail.com>

Alexander Borghgraef wrote:
> Sorry about that, shortcut malfunction. Let me rephrase that:
> Hi all,
>
> I'm doing some image treatment using the ndimage module, and I've
> been playing a bit with the label
> function. I've encountered something very strange:
>
>
> a = zeros((5,4)) # a is a numpy.ndarray of int32
> b = ndimage.label(a)[0] # b is a numpy.ndarray of int32, and
> basically identical to a
> ndimage.maximum(a) # returns 0.0
> ndimage.maximum(b) # returns the following error message:
>
> /usr/lib/python2.4/site-packages/scipy/ndimage/measurements.py in
> maximum(input, labels, index)
> 195 if labels.shape != input.shape:
> 196 raise RuntimeError, 'input and labels shape are
> not equal'
> --> 197 return _nd_image.statistics(input, labels, index, 4)
> 198
> 199
>
> RuntimeError: data type not supported
>
> Don't get this. Objects a and b are the same data type, contain the
> same data type, print the same, can be
> added or multiplied together, but one of both cannot be used as input
> for image statistics functions. What
> am I missing here?
You have a buggy version of scipy; it works fine for me:

> In [4]: a = zeros((5,4))
>
> In [5]: b = ndimage.label(a)[0]
>
> In [6]: ndimage.maximum(a)
> Out[6]: 0.0
>
> In [7]: ndimage.maximum(b)
> Out[7]: 0.0

Cheers,
Reynaldo

From blessing at aims.ac.za Tue Sep 18 10:30:41 2007
From: blessing at aims.ac.za (Blessing Amadi)
Date: Tue, 18 Sep 2007 16:30:41 +0200
Subject: [SciPy-user] MAILING LIST ACCEPTANCE
Message-ID: <46EFE111.9060602@aims.ac.za>

thanks a lot for the mail, my email address is blessing at aims.ac.za

From openopt at ukr.net Tue Sep 18 15:34:50 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 18 Sep 2007 22:34:50 +0300
Subject: [SciPy-user] ANN (numerical optimization): ALGENCAN have migrated from numeric to numpy
In-Reply-To: <46EF8B05.2020903@gmail.com>
References: <46EED684.8000308@ukr.net> <46EF7C3C.9040204@gmail.com> <46EF7DA8.2090900@ukr.net> <46EF8B05.2020903@gmail.com>
Message-ID: <46F0285A.6030706@ukr.net>

fdu.xiaojf at gmail.com wrote:
> Hi dmitrey,
> dmitrey wrote:
> > you'd better contact ALGENCAN developers with the question.
> > Regards, D.
> >
>
> I'm trying to contact ALGENCAN developers.
>
> However, I have a question about openopt.
>
> I have tried the example at
> http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/nlp_1.py.
> Since I couldn't get ALGENCAN work now, I tried with lincher( I just
> commented the line "r = p.solve('ALGENCAN')" and uncommented "r =
> p.solve('lincher')" ).
>
> There are the only output:
>
> starting solver lincher (BSD license) with problem unnamed
> itn 0: Fk= 8596.39550577 maxResidual= 605859.237208
>
> And after that, python crashed :-(
> (I have install cvxopt0.9)
>

That's because lincher just can't solve the example (see more details below). I have added more accurate handling of the exception to svn; now it will just report "istop: -11 (failed to solve qp subproblem)".

Try replacing line 76, p.b = [7, 9, -825], with p.b = [7, 9, -800]; then it works.
Optionally, since objfun is not scaled to 1 (f_opt is ~ 100), it's better to set p.funtol to something greater than the default 1e-6; automatic scaling is not implemented properly yet.

Currently lincher requires a QP solver, and the only one connected is the CVXOPT one. I'm not fond of it: it often fails to solve a problem even with a single constraint (when the QP certainly has a solution). It seems automatic scaling is not used there either. Also, the cvxopt lp solver yields solutions with precision ~ 1e-8...2e-8, while lp_solve and glpk reach 1e-12; I guess the same holds for its qp solver, while the solution of the subproblem needs to be ~2 orders more precise. Now I'm trying to either remove the dependence on a QP solver or write my own, which would be much more appropriate.

However, for users, installing ALGENCAN is much more recommended, since it has been developed over years by a big team (vs lincher, which was written in 1-2 months by a GSoC student), and the solver sometimes works better than IPOPT; see the papers on their website: http://www.ime.usp.br/~egbirgin/tango/publications.php#selected. It's very nice that ALGENCAN is free, because equivalent NLP solvers cost 5...30 K$; see for example http://tomopt.com/tomlab/products/prices/commercial.php.

As for lincher, for the nearest future it's appropriate for the small-scale cases that are usually used while implementing the objective function, constraints and their derivatives (it's convenient to use p.check.df=1, and the same for dc and dh), while the system administrator tries to install ALGENCAN and/or other solvers. I chose the algorithm implemented in lincher because it can handle both eq and ineq non-linear constraints, and it handles non-convex funcs rather well. I intend to enhance it from time to time, especially if new funding is obtained (GSoC is over and now I spend most of my time on other tasks).

D.
From david.huard at gmail.com Tue Sep 18 21:12:05 2007
From: david.huard at gmail.com (David Huard)
Date: Tue, 18 Sep 2007 21:12:05 -0400
Subject: [SciPy-user] A first proposal for dataset organization
In-Reply-To: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp>
References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp>
Message-ID: <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com>

Hi David,

your proposal looks good and I think it's a great addition to SciPy. As for the two issues you raise, here is my 2 cents.

I wouldn't bother too much about missing data. These data sets are mainly for illustration and testing purposes. Hence, in general, we can choose data sets that don't have missing data. Now, there should be a data set with missing data to illustrate the use of masked arrays or statistical functions robust to NaNs, but it can be kept pretty simple, that is, just a single time series.

For large data sets, I'm not sure I understand what you mean. Do you intend to include netcdf or HDF5 files and provide an interface to access those data sets so users don't have to bother about the underlying engine? Do we really want to distribute a package weighing > 1 GB?

Cheers,

David

2007/9/17, David Cournapeau :
>
> Hi there,
>
> A few months ago, we started to discuss about various issues about
> dataset for numpy/scipy. In the context of my Summer Of Code for machine
> learning tools in python, I had the possibility to tackle concretely the
> issue. Before announcing a first alpha version of my work, I would like
> to gather comments, critics about the following proposal for dataset
> organization.
>
> The following proposal is also available in svn:
>
>
> http://projects.scipy.org/scipy/scikits/browser/trunk/learn/scikits/learn/datasets/DATASET_PROPOSAL.txt
>
>
> Dataset for scipy: design proposal
> ==================================
>
> One of the thing numpy/scipy is missing now is a set of datasets,
> available for
> demo, courses, etc.
For example, R has a set of dataset available at the > core. > > The expected usage of the datasets are the following: > > - machine learning: eg the data contain also class information > (discrete or continuous) > - descriptive statistics > - others ? > > That is, a dataset is not only data, but also some meta-data. The goal > of this > proposal is to propose common practices for organizing the data, in a > way which > is both straightforward, and does not prevent specific usage of the data. > > Organization > ------------ > > A preliminary set of datasets is available at the following address: > > > http://projects.scipy.org/scipy/scikits/browser/trunk/learn/scikits/learn/datasets > > Each dataset is a directory and defines a python package (e.g. has the > __init__.py file). Each package is expected to define the function load, > returning > the corresponding data. For example, to access datasets data1, you > should be able to do: > > >>> from datasets.data1 import load > >>> d = load() # -> d contains the data. > > load can do whatever it wants: fetching data from a file (python script, > csv > file, etc...), from the internet, etc... Some special variables must be > defined > for each package, containing a python string: > > - COPYRIGHT: copyright informations > - SOURCE: where the data are coming from > - DESCHOSRT: short description > - DESCLONG: long description > - NOTE: some notes on the datasets. > > Format of the data > ------------------ > > Here, I suggest a common practice for the returned value by the load > function. > Instead of using classes to provide meta-data, I propose to use a > dictionnary > of arrays, with some values mandatory. The key goals are: > > - for people who just want the data, there is no extra burden > ("just > give me the data !" MOTO). > - for people who need more, they can easily extract what they > need from > the returned values. More high level abstractions can be built > easily > from this model. 
> - all possible dataset should fit into this model. > - In particular, I want to be able to be able to convert our > dataset to > Orange Dataset representation (or other machine learning > tool), and > vice-versa. > > For the datasets to be useful in the learn scikits, which is the project > which > initiated this datasets package, the data returned by load has to be a > dict > with the following conventions: > > - 'data': this value should be a record array containing the actual > data. > - 'label': this value should be a rank 1 array of integers, contains > the > label index for each sample, that is label[i] should be the label > index > of data[i]. If it contains float values, it is used for regression > instead. > - 'class': a record array such as class[i] is the class name. In other > words, this makes the correspondance label name > label index. > > As an example, I use the famouse IRIS dataset: the dataset contains 3 > classes > of flowers, and for each flower, 4 measures (called attributes in machine > learning vocabulary) are available (sepal width and length, petal width > and > length). In this case, the values returned by load would be: > > - 'data': a record array containing all the flowers' > measurements. For > descriptive statistics, that's all you may need. You can > easily find > the attributes from the dtype (a function to find the > attributes is > also available: it returns a list of the attributes). > - 'labels': an array of integers (for class information) or > float (for > regression). each class is encoded as an integer, and labels[i] > returns this integer for the sample i. > - 'class': a record array, which returns the integer code for each > class. For example, class['Iris-versicolor'] will return the > integer > used in label, and all samples i such as label[i] == > class['Iris-versicolor'] are of the class 'Iris-versicolor'. 
> > This contains enough information to get all useful information through > introspection and simple functions. I already implemented a small module > to do > basic things such as: > > - selecting only a subset of all samples. > - selecting only a subset of the attributes (only sepal length and > width, for example). > - selecting only the samples of a given class. > - small summary of the dataset. > > This is implemented in less than 100 lines, which tends to show that the > above > design is not too simplistic. > > Remaining problems: > ------------------- > > I see mainly two big problems: > > - if the dataset is big and cannot fit into memory, what kind of > API do > we want to avoid loading all the data in memory ? Can we use > memory > mapped arrays ? > - Missing data: I thought about subclassing both record arrays and > masked arrays classes, but I don't know if this is feasable, > or even > makes sense. I have the feeling that some Data mining software > use > Nan (for example, weka seems to use float internally), but this > prevents them from representing integer data. > > Current implementation > ---------------------- > > An implementation following the above design is available in > scikits.learn.datasets. If you installed scikits.learn, you can execute > the > file learn/utils/attrselect.py, which shows the information you can easily > extract for now from this model. > > Also, once the above problems are solved, an arff converter will be > available: > arff is the format used by WEKA, and many datasets are available at this > format: > > http://weka.sourceforge.net/wekadoc/index.php/en:ARFF_%283.5.4%29 > http://www.cs.waikato.ac.nz/ml/weka/index_datasets.html > > Note > ---- > > Although the datasets package emerged from the learn package, I try to > keep it > independant from everything else, that is once we agree on the remaining > problems and where the package should go, it can easily be put elsewhere > without too much trouble. 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasjb at csse.unimelb.edu.au Wed Sep 19 01:14:01 2007 From: lucasjb at csse.unimelb.edu.au (Lucas Barbuto) Date: Wed, 19 Sep 2007 15:14:01 +1000 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <46EA3528.7060302@ar.media.kyoto-u.ac.jp> References: <46E0F921.2040305@ar.media.kyoto-u.ac.jp> <01FCD5EB-12CE-49CE-9996-28CBB70B873B@csse.unimelb.edu.au> <46E4E486.100@ar.media.kyoto-u.ac.jp> <5CC40E5E-3277-45D7-9090-FA346A4929D0@csse.unimelb.edu.au> <46EA3528.7060302@ar.media.kyoto-u.ac.jp> Message-ID: <36E18F5A-E37C-40AE-A9BD-2475F330E1CD@csse.unimelb.edu.au> On 14/09/2007, at 5:15 PM, David Cournapeau wrote: >> All I really want is a working NumPy and SciPy installation on >> Solaris 9/x86. Seeing as SciPy recommends vendor optimised BLAS and >> LAPACK routines and I don't want to build ATLAS, > Is there a reason why ? Building dev versions (3.7.*) of ATLAS works > pretty well now and is not too difficult (but with gcc). That wasn't my experience, I gave it a brief shot. To start from the start: I'm a Systems Administrator, not a Python programmer, researcher or scientist. Before I started poking around with SciPy I'd never heard of ATLAS, BLAS or LAPACK. Avoiding having to build these and risking further running in circles is my primary motivator and secondary is the assumption that libsunperf will work better for the users. > I have downloaded a vmware image of nexenta, which is GNU above > open solaris; > according to > http://blogs.sun.com/dbx/entry/installing_nexenta_gnu_solaris_on, I > can > install sunstudio on it, which means sunperf libraries, right ? Do you > think this corresponds to your environment ? I don't want to waste > time > on it if this does not help you. 
Yes, Sun Studio 12 contains libsunperf.{a,so}. I just unpacked and dropped it into /local/cat2 (just somewhere that disk was available). I'm on Solaris 5.9 x86 with its standard development programs in /usr/ccs, /usr/sfw, /usr/ucb and GNU utilities built from source and installed manually into the /usr/local hierarchy. I suppose Nexenta sounds close to this. Regards, -- Lucas From wfspotz at sandia.gov Wed Sep 19 01:41:37 2007 From: wfspotz at sandia.gov (Bill Spotz) Date: Tue, 18 Sep 2007 23:41:37 -0600 Subject: [SciPy-user] ANN: Trilinos 8.0, including PyTrilinos 4.0 Message-ID: Version 8.0 of Trilinos has been released: http://trilinos.sandia.gov Trilinos is a collection of scientific, object-oriented solver packages. These packages cover linear algebra services, preconditioners, linear solvers, nonlinear solvers, eigensolvers, and a wide range of related utilities. Trilinos supports serial and parallel architectures, as well as dense or sparse problem formulations. Included in Trilinos release 8.0 is PyTrilinos version 4.0, http://trilinos.sandia.gov/packages/pytrilinos a set of python interfaces to selected Trilinos packages. New in version 4.0 of PyTrilinos is an interface to Anasazi, the eigensolver package, and the re-enabling of NOX, the nonlinear solver package. The primary Trilinos linear algebra services package is Epetra, which provides Vector and MultiVector classes, as well as hierarchies of operator, communicator and domain decomposition classes. The PyTrilinos interface to Epetra has been designed with a high degree of compatibility with numpy, with the hope of complementing the SciPy development efforts. ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O.
Box 5800 Fax: (505)284-5451 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From alexander.borghgraef.rma at gmail.com Wed Sep 19 05:05:05 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Wed, 19 Sep 2007 11:05:05 +0200 Subject: [SciPy-user] Weird label behaviour in ndimage In-Reply-To: <46EFDEE2.1080706@gmail.com> References: <9e8c52a20709180652g5b9a6835pd48c843533e1af2@mail.gmail.com> <9e8c52a20709180659l460f86devc361a12060cc2d3a@mail.gmail.com> <46EFDEE2.1080706@gmail.com> Message-ID: <9e8c52a20709190205t57e4a5f2xac934f0ad94cb5a8@mail.gmail.com> On 9/18/07, Reynaldo Baquerizo wrote: > > > You have a buggy version of scipy, it works fine for me: > > > In [4]: a = zeros((5,4)) > > > > In [5]: b = ndimage.label(a)[0] > > > > In [6]: ndimage.maximum(a) > > Out[6]: 0.0 > > > > In [7]: ndimage.maximum(b) > > Out[7]: 0.0 > Hmm, damn. I've got numpy 1.0.1 and scipy 0.5.1 installed on Fedora Core 6. Has anyone else encountered this bug before? I guess I'll have to harass our sysadmin to install a more recent version. -- Alex -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From pepe_kawumi at yahoo.co.uk Wed Sep 19 06:19:53 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Wed, 19 Sep 2007 10:19:53 +0000 (GMT) Subject: [SciPy-user] Returning the positions of a sorted array Message-ID: <582277.4674.qm@web27714.mail.ukl.yahoo.com> Hullo, Im having problems returning the positions of a sorted array. Say I create an array a = ([3,5,1]) then if I use the sort command I get sort(a) = [1,3,5] But what if i want the actual original positions of the sorted array to be returned instead. Is there a command in python that can do this? I'm looking for an answer of the form [2,0,1] which returns the original positions in a of the sorted array instead. Thanks ___________________________________________________________ Win a BlackBerry device from O2 with Yahoo!. Enter now. 
http://www.yahoo.co.uk/blackberry -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists.steve at arachnedesign.net Wed Sep 19 07:26:55 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 19 Sep 2007 07:26:55 -0400 Subject: [SciPy-user] Returning the positions of a sorted array In-Reply-To: <582277.4674.qm@web27714.mail.ukl.yahoo.com> References: <582277.4674.qm@web27714.mail.ukl.yahoo.com> Message-ID: <3B8AF462-C2E7-463D-83A3-4145C8978CBB@arachnedesign.net> Hi, > But what if i want the actual original positions of the sorted > array to be returned instead. Is there a command in python that can > do this? > > I'm looking for an answer of the form [2,0,1] which returns the > original positions in a of the sorted array instead. Look at the numpy.argsort function. That'll do the trick. -steve From elcorto at gmx.net Wed Sep 19 07:33:34 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 19 Sep 2007 13:33:34 +0200 Subject: [SciPy-user] Returning the positions of a sorted array In-Reply-To: <582277.4674.qm@web27714.mail.ukl.yahoo.com> References: <582277.4674.qm@web27714.mail.ukl.yahoo.com> Message-ID: <46F1090E.8070708@gmx.net> Perez Kawumi wrote: > Hullo, > Im having problems returning the positions of a sorted array. > > Say I create an array > a = ([3,5,1]) > > then if I use the sort command I get > > sort(a) = [1,3,5] > > But what if i want the actual original positions of the sorted array to be returned instead. Is there a command in python that can do this? > > I'm looking for an answer of the form [2,0,1] which returns the original positions in a of the sorted array instead. > In [7]: numpy.*sort*? numpy.argsort numpy.lexsort numpy.msort numpy.searchsorted numpy.sort numpy.sort_complex In [8]: numpy.argsort? 
Type: function
Base Class:
String Form:
Namespace: Interactive
File: /usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py
Definition: numpy.argsort(a, axis=-1, kind='quicksort', order=None)
Docstring:
Returns array of indices that index 'a' in sorted order. Keyword arguments: axis -- axis to be indirectly sorted (default -1) Can be None to indicate return indices into the flattened array. kind -- sorting algorithm (default 'quicksort') Possible values: 'quicksort', 'mergesort', or 'heapsort' order -- For an array with fields defined, this argument allows specification of which fields to compare first, second, etc. Not all fields need be specified. Returns: array of indices that sort 'a' along the specified axis. This method executes an indirect sort along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as 'a' that index data along the given axis in sorted order. The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The three available algorithms have the following properties:

|------------------------------------------------------|
| kind      | speed | worst case   | work space | stable|
|------------------------------------------------------|
|'quicksort'|   1   | O(n^2)       |     0      |  no   |
|'mergesort'|   2   | O(n*log(n))  |    ~n/2    |  yes  |
|'heapsort' |   3   | O(n*log(n))  |     0      |  no   |
|------------------------------------------------------|

All the sort algorithms make temporary copies of the data when the sort is not along the last axis. Consequently, sorts along the last axis are faster and use less space than sorts along other axis.

In [10]: a = numpy.array([3,5,1])
In [11]: numpy.argsort(a)
Out[11]: array([2, 0, 1])
In [12]: a.*sort*?
a.argsort a.searchsorted a.sort In [13]: a.argsort() Out[13]: array([2, 0, 1]) -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From zakaria at aims.ac.za Wed Sep 19 08:24:07 2007 From: zakaria at aims.ac.za (Zakaria Ali) Date: Wed, 19 Sep 2007 14:24:07 +0200 (SAST) Subject: [SciPy-user] scipy-user list Message-ID: <49535.192.168.42.180.1190204647.squirrel@webmail.aims.ac.za> Hello, I'm having a problem generating something like this: e1=(1,0,0) e2=(0,1,0) e3=(0,0,1) In general I want ej=(0,...,0,1,0,...,0), with the 1 in position j. Is there a command in Python for this? From lbolla at gmail.com Wed Sep 19 08:39:00 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 19 Sep 2007 14:39:00 +0200 Subject: [SciPy-user] scipy-user list In-Reply-To: <49535.192.168.42.180.1190204647.squirrel@webmail.aims.ac.za> References: <49535.192.168.42.180.1190204647.squirrel@webmail.aims.ac.za> Message-ID: <80c99e790709190539ge8d6cc7l5c44039129d1b925@mail.gmail.com> In [9]: dim = 10 In [10]: j = 2 In [11]: (numpy.arange(dim)==j).astype(float) Out[11]: array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]) L. On 9/19/07, Zakaria Ali wrote: > > Hello, I'm having a problem generating something like this: > > e1=(1,0,0) > e2=(0,1,0) > e3=(0,0,1) > > In general I want ej=(0,...,0,1,0,...,0), with the 1 in position j. > Is there a command in Python for this? > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From emanuele at relativita.com Wed Sep 19 08:50:31 2007 From: emanuele at relativita.com (Emanuele Olivetti) Date: Wed, 19 Sep 2007 14:50:31 +0200 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> Message-ID: <46F11B17.1020403@relativita.com> Hi David & David, I like your proposal too, but I'm very interested in missing data, so I'd like them to be handled in your proposal. And indeed, using 'NaN' as a placeholder for missing entries IS a bad idea. Unfortunately I have no "best" solution to provide. In what I'm doing I use two matrices: one to store the actual values, and a second, boolean, matrix to say where the missing values are. Avoiding the use of values in entries marked as missing is the responsibility of the analysis step. This is good for my case but may not be that wonderful in general. About handling large datasets, I had some experience using NiPy: http://neuroimaging.scipy.org/ They have (had?) one implementation using mapped arrays that is good for many users; but my need was to access all the data without the disk bottleneck, and even though I had enough RAM I had some trouble avoiding the memory mapping and doing just the full load. So the lesson I learnt is that users' needs are not uniform, and a library should always take into account the basic case (full load). P.S. NiPy did it. Hope this helps. More later. Cheers, Emanuele From cimrman3 at ntc.zcu.cz Wed Sep 19 09:09:42 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 19 Sep 2007 15:09:42 +0200 Subject: [SciPy-user] access to 'Projects' page on scipy.org Message-ID: <46F11F96.5010107@ntc.zcu.cz> Hi, I noticed that there is no direct link to http://www.scipy.org/Projects from the main SciPy page, nor from Topical_Software. Could it be linked somewhere so that new users can find it? r.
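[Editorial note: the two-matrix scheme Emanuele describes earlier in this thread (a value matrix plus a boolean matrix marking missing entries) is essentially what numpy's masked arrays bundle into a single object. A minimal sketch, with made-up values purely for illustration:]

```python
import numpy as np
import numpy.ma as ma

# The two-matrix scheme: actual values, plus a boolean matrix that
# says where the missing entries are.
values = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
missing = np.array([[False, True, False],
                    [False, False, True]])

# A masked array bundles both; reductions skip masked entries, so the
# analysis step cannot accidentally consume a missing value.
data = ma.masked_array(values, mask=missing)
print(data.mean(axis=0))  # per-column mean over non-missing entries
```

This keeps the missing-value bookkeeping out of the analysis code; whether it scales to the larger-than-memory case is a separate question.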
From peridot.faceted at gmail.com Wed Sep 19 09:42:24 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 19 Sep 2007 09:42:24 -0400 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> Message-ID: On 18/09/2007, David Huard wrote: > For large data sets, I'm not sure I understand what you're meaning. Do you > intend to include netcdf or HDF5 files and provide an interface to access > those data sets so users don't have to bother about the underlying engine ? > Do we really want to distribute a package weighting > 1GB ? One of the points of this project, as I understand it, is to make it convenient for people to get and use real datasets. In particular, one possibility is to not include the data in this package, but instead only a script to download it from (say) the HEASARC. Thus big datasets are not outrageous, and more to the point, we need to be able to deal with them whatever form they are in natively. Anne From aisaac at american.edu Wed Sep 19 10:02:24 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 10:02:24 -0400 Subject: [SciPy-user] scipy-user list In-Reply-To: <49535.192.168.42.180.1190204647.squirrel@webmail.aims.ac.za> References: <49535.192.168.42.180.1190204647.squirrel@webmail.aims.ac.za> Message-ID: On Wed, 19 Sep 2007, (SAST) Zakaria Ali apparently wrote: > I'm having a problem generating something like this: > e1=(1,0,0) > e2=(0,1,0) > e3=(0,0,1) You want all of them? Use the identity. (See below.) Cheers, Alan Isaac >>> import numpy as N >>> dim = 5 >>> e = N.eye(dim) >>> for j in range(dim): ... print e[j] ... [ 1. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0.] [ 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0.] [ 0. 0. 0. 0. 1.]
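[Editorial note: both answers above (the arange comparison and the identity matrix) build e_j; when only a single basis vector is needed, indexed assignment into a zero array is the most direct spelling. A small sketch; the helper name is ours, not from the thread:]

```python
import numpy as np

def basis_vector(j, dim):
    """Return e_j: a length-dim array of zeros with a 1.0 in position j."""
    e = np.zeros(dim)
    e[j] = 1.0
    return e

# Equivalent one-liners from the thread:
#   np.eye(dim)[j]
#   (np.arange(dim) == j).astype(float)
print(basis_vector(2, 5))  # [0. 0. 1. 0. 0.]
```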
From david.huard at gmail.com Wed Sep 19 10:05:48 2007 From: david.huard at gmail.com (David Huard) Date: Wed, 19 Sep 2007 10:05:48 -0400 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> Message-ID: <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> Hi Anne, 2007/9/19, Anne Archibald : > > On 18/09/2007, David Huard wrote: > > > For large data sets, I'm not sure I understand what you're meaning. Do > you > > intend to include netcdf or HDF5 files and provide an interface to > access > > those data sets so users don't have to bother about the underlying > engine ? > > Do we really want to distribute a package weighting > 1GB ? > > One of the points of this project, as I understand it, is to make it > convenient for people to get and use real datasets. In particular, one > possibility is to not include the data in this package, but instead > only a script to download it from (say) the HEASARC. Thus big datasets > are not outrageous, and more to the point, we need to be able to deal > with them whatever form they are in natively. My understanding was rather : " ... to make it convenient for people to get and use real datasets for use in SciPy and NumPy examples, documentation and tutorials. " This limits the scope of the dataset package, at least for starters. If some tutorial deals with larger than memory issues, then using a specialized binary format makes sense. However, I think that pretty basic datasets can illustrate the use of most SciPy and NumPy functions. Regards, David Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pepe_kawumi at yahoo.co.uk Wed Sep 19 12:44:08 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Wed, 19 Sep 2007 16:44:08 +0000 (GMT) Subject: [SciPy-user] Reading in data into a python program!! Message-ID: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Hi, I'm trying to import data from gmsh into a Python program and want to use this data in matrix form; I'm just asking for pointers as to where I can find information on how to do this. I want to be able to create a 54*4 matrix which I can manipulate. The data I want to use is attached. Thanks ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: foo1.msh Type: application/octet-stream Size: 1770 bytes Desc: not available URL: From gael.varoquaux at normalesup.org Wed Sep 19 12:49:09 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 19 Sep 2007 18:49:09 +0200 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: <446833.24101.qm@web27709.mail.ukl.yahoo.com> References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: <20070919164909.GE20959@clipper.ens.fr> On Wed, Sep 19, 2007 at 04:44:08PM +0000, Perez Kawumi wrote: > I'm trying to import data from gmsh into a Python program and want to use > this data in matrix form; I'm just asking for pointers as to where I can > find information on how to do this. > I want to be able to create a 54*4 matrix which I can manipulate. The data > I want to use is attached. Remove the first 5 lines and the last one, and you can use scipy.io.read_array; this can be done, for instance, by using the "lines" argument of this function. So, no custom code to write.
Gaël From robert.kern at gmail.com Wed Sep 19 13:22:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Sep 2007 12:22:19 -0500 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> Message-ID: <46F15ACB.5070409@gmail.com> David Huard wrote: > Hi Anne, > > 2007/9/19, Anne Archibald >: > > On 18/09/2007, David Huard > wrote: > > > For large data sets, I'm not sure I understand what you're > meaning. Do you > > intend to include netcdf or HDF5 files and provide an interface to > access > > those data sets so users don't have to bother about the underlying > engine ? > > Do we really want to distribute a package weighting > 1GB ? > > One of the points of this project, as I understand it, is to make it > convenient for people to get and use real datasets. In particular, one > possibility is to not include the data in this package, but instead > only a script to download it from (say) the HEASARC. Thus big datasets > are not outrageous, and more to the point, we need to be able to deal > with them whatever form they are in natively. > > > My understanding was rather : > " ... to make it convenient for people to get and use real datasets for > use in SciPy and NumPy examples, documentation and tutorials. " This > limits the scope of the dataset package, at least for starters. If some > tutorial deals with larger than memory issues, then using a specialized > binary format makes sense. However, I think that pretty basic datasets > can illustrate the use of most SciPy and NumPy functions. That's an important use case, certainly, but I had in mind use cases like the one Anne gave, too, when I suggested parts of the design that David implemented. The scope is still fairly broad.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Sep 19 15:00:35 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Sep 2007 14:00:35 -0500 Subject: [SciPy-user] access to 'Projects' page on scipy.org In-Reply-To: <46F11F96.5010107@ntc.zcu.cz> References: <46F11F96.5010107@ntc.zcu.cz> Message-ID: <46F171D3.3050109@gmail.com> Robert Cimrman wrote: > Hi, > > I noticed that there is no direct link to http://www.scipy.org/Projects > from the main SciPy page, nor from Topical_Software. Could it be linked > somewhere so that new users can find it? I added a link from the front page, albeit the bottom. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Wed Sep 19 17:35:03 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 17:35:03 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 Message-ID: Is anyone else seeing Python interpreter crashes using scipy.stats.t.cdf? Thank you, Alan Isaac From aisaac at american.edu Wed Sep 19 19:22:58 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 19:22:58 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: Message-ID: On Wed, 19 Sep 2007, Alan G Isaac apparently wrote: > Is anyone else seeing Python interpreter crashes > using scipy.stats.t.cdf? OK, I've traced the problem to scipy.special.stdtr but I do not know how to go further. Can anyone help? 
Thank you, Alan Isaac From robert.kern at gmail.com Wed Sep 19 19:24:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Sep 2007 18:24:24 -0500 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: Message-ID: <46F1AFA8.5000105@gmail.com> Alan G Isaac wrote: > On Wed, 19 Sep 2007, Alan G Isaac apparently wrote: >> Is anyone else seeing Python interpreter crashes >> using scipy.stats.t.cdf? > > OK, I've traced the problem to > scipy.special.stdtr > but I do not know how to go further. > Can anyone help? Where did you get your scipy binary from? Can you supply us the code that you ran that caused the crash? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Wed Sep 19 19:40:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 19:40:56 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: <46F1AFA8.5000105@gmail.com> References: <46F1AFA8.5000105@gmail.com> Message-ID: > Alan G Isaac wrote: >> OK, I've traced the problem to scipy.special.stdtr but >> I do not know how to go further. Can anyone help? On Wed, 19 Sep 2007, Robert Kern apparently wrote: > Where did you get your scipy binary from? Can you supply us the code that you > ran that caused the crash? It's the official binary: http://prdownloads.sourceforge.net/scipy/scipy-0.5.2.1.win32-py2.5.exe?download Here's a session (arbitrary numbers) up to the crash: Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__version__ '0.5.2.1' >>> from scipy import special >>> special.stdtr(2.,3) Thanks for your help! 
Alan From aisaac at american.edu Wed Sep 19 20:02:51 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 20:02:51 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: <46F1AFA8.5000105@gmail.com> Message-ID: >> Alan G Isaac wrote: >>> OK, I've traced the problem to scipy.special.stdtr but >>> I do not know how to go further. Can anyone help? > On Wed, 19 Sep 2007, Robert Kern apparently wrote: >> Where did you get your scipy binary from? Can you supply us the code that you >> ran that caused the crash? On Wed, 19 Sep 2007, Alan G Isaac apparently wrote: > It's the official binary: > http://prdownloads.sourceforge.net/scipy/scipy-0.5.2.1.win32-py2.5.exe?download > Here's a session (arbitrary numbers) up to the crash: > Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 > 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy >>>> scipy.__version__ > '0.5.2.1' >>>> from scipy import special >>>> special.stdtr(2.,3) Another data point: I just tried this under Windows XP with no problem. The very same binary. The session looks *identical* up to the last command. How odd is this?? Cheers, Alan Isaac From matthew.brett at gmail.com Wed Sep 19 20:15:57 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 19 Sep 2007 17:15:57 -0700 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: <46F1AFA8.5000105@gmail.com> Message-ID: <1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> Hi, > Another data point: > I just tried this under Windows XP with no problem. > The very same binary. The session looks *identical* > up to the last command. How odd is this?? Could they be different machines, only the XP has SSE2 instructions, the binary is expecting SSE2 instructions? 
Matthew From aisaac at american.edu Wed Sep 19 20:35:50 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 20:35:50 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: <1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> References: <46F1AFA8.5000105@gmail.com><1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> Message-ID: >> Another data point: >> I just tried this under Windows XP with no problem. >> The very same binary. The session looks identical >> up to the last command. How odd is this?? On Wed, 19 Sep 2007, Matthew Brett apparently wrote: > Could they be different machines, only the XP has SSE2 instructions, > the binary is expecting SSE2 instructions? Yes it is two different machines. The crash comes under Win 2000. Win XP is working fine. If you can suggest a way to explore this further? Thank you, Alan Isaac From robert.kern at gmail.com Wed Sep 19 21:18:57 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Sep 2007 20:18:57 -0500 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: <46F1AFA8.5000105@gmail.com><1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> Message-ID: <46F1CA81.5010203@gmail.com> Alan G Isaac wrote: >>> Another data point: >>> I just tried this under Windows XP with no problem. >>> The very same binary. The session looks identical >>> up to the last command. How odd is this?? > > On Wed, 19 Sep 2007, Matthew Brett apparently wrote: >> Could they be different machines, only the XP has SSE2 instructions, >> the binary is expecting SSE2 instructions? > > > Yes it is two different machines. > The crash comes under Win 2000. > Win XP is working fine. > If you can suggest a way to explore this further? What are the CPUs? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From aisaac at american.edu Wed Sep 19 22:12:13 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Sep 2007 22:12:13 -0400 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: <46F1CA81.5010203@gmail.com> References: <46F1AFA8.5000105@gmail.com><1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> <46F1CA81.5010203@gmail.com> Message-ID: >> On Wed, 19 Sep 2007, Matthew Brett apparently wrote: >>> Could they be different machines, only the XP has SSE2 instructions, >>> the binary is expecting SSE2 instructions? > Alan G Isaac wrote: >> Yes it is two different machines. >> The crash comes under Win 2000. >> Win XP is working fine. >> If you can suggest a way to explore this further? On Wed, 19 Sep 2007, Robert Kern apparently wrote: > What are the CPUs? I am not sure how best to specify this. Guide me if I leave out relevant detail. The machine that crashes is a "Pentium III", x86 Family 6 Model 7 Stepping 3. The OS is Win 2000 service pack 4, with all updates (I believe). The machine that does not crash is a "Pentium 4", specifically a Mobile Intel Pentium 4 - M CPU at 2.00 GHz X86 Family 15 Model 2 0 which seems to be this one: http://www.intel.com/products/processor/mobilepentium4/index.htm The OS is Win XP service pack 2 with all updates. Sorry, but I do not know how to determine the processor number. This conforms with Matthew's speculation that the binary is expecting SSE2 instructions... 
Cheers, Alan Isaac From millman at berkeley.edu Thu Sep 20 00:08:27 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 19 Sep 2007 21:08:27 -0700 Subject: [SciPy-user] scipy.stats.t.cdf crashes under Win 2000 In-Reply-To: References: <46F1AFA8.5000105@gmail.com> <1e2af89e0709191715u2b7a9bcr504483e064fe77e9@mail.gmail.com> <46F1CA81.5010203@gmail.com> Message-ID: Hey Alan, SSE2 was introduced with the Pentium IV: http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions Unfortunately, the current binaries on the sourceforge site require a processor with SSE2. I don't have a Windows box to build packages on at the moment. So the binaries are being built by Travis for the upcoming 0.6.0 release. If anyone can build a Windows exe of SciPy 0.6.0 for older machines without SSE2, please let me know and I will be happy to include them when I make the release. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From tjhnson at gmail.com Thu Sep 20 05:05:02 2007 From: tjhnson at gmail.com (Tom Johnson) Date: Thu, 20 Sep 2007 02:05:02 -0700 Subject: [SciPy-user] Recommendations for Distribution Class Message-ID: Hi, I'd like to hear thoughts on a good representation for discrete probability distributions. Currently, I am using dictionaries as they can be sparse and they give access to the probabilities via keys (this is desired). For example, >>> p = {'a':.3,'c':.7} >>> print p['a'] This is nice and fine, but I'd like to add more functionality. For example, >>> print p['b'] 0 >>> q = scipy.log2(p) # or perhaps q = p.aslog2() >>> print q['b'] -inf All this says that I should think about subclassing dict. However, I also want to be able to compute marginal distributions. With a dictionary of dictionaries, p['a']['b'], it is not convenient to sum over the second index.
With two random variables, I can store two dictionaries to solve this problem...but I need a general solution and N-dimensional scipy arrays seem like a possibility. But alas, they are not sparse...and scipy.sparse is only for matrices (?). Finally, there is the question of a good representation for conditional probabilities. Any thoughts on this would be very helpful. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Sep 20 06:05:24 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 20 Sep 2007 19:05:24 +0900 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <46F15ACB.5070409@gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> Message-ID: <46F245E4.10605@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Huard wrote: >> Hi Anne, >> >> 2007/9/19, Anne Archibald > >: >> >> On 18/09/2007, David Huard > > wrote: >> >> > For large data sets, I'm not sure I understand what you're >> meaning. Do you >> > intend to include netcdf or HDF5 files and provide an interface to >> access >> > those data sets so users don't have to bother about the underlying >> engine ? >> > Do we really want to distribute a package weighting > 1GB ? >> >> One of the points of this project, as I understand it, is to make it >> convenient for people to get and use real datasets. In particular, one >> possibility is to not include the data in this package, but instead >> only a script to download it from (say) the HEASARC. Thus big datasets >> are not outrageous, and more to the point, we need to be able to deal >> with them whatever form they are in natively. >> >> >> My understanding was rather : >> " ... 
to make it convenient for people to get and use real datasets for >> use in SciPy and NumPy examples, documentation and tutorials. " This >> limits the scope of the dataset package, at least for starters. If some >> tutorial deals with larger than memory issues, then using a specialized >> binary format makes sense. > >> However, I think that pretty basic datasets > >> can illustrate the use of most SciPy and NumPy functions. > > > > That's an important use case, certainly, but I had in mind use cases > like the > > one Anne gave, too, when I suggested parts of the design that David > implemented. > > The scope is still fairly broad. Yes, indeed, my sentence "to make it convenient for people to get and use real datasets for use in SciPy and NumPy examples, documentation and tutorials" was just a list of possible usages, not the only usages to take into account. I realized also that my proposal sounded like I was the only one involved, which was not the case. I hope people involved in previous discussion on that matter didn't take any offence. David (Huard) already highlighted one problem with my proposal (time series representation). I would really be interested in comments about using MaskedArrays to handle missing data (I've never used it myself), and the use of record arrays for the data; for example, I can see cases where record arrays may be a problem (if all your data are homogeneous, you cannot treat the data as a big numpy array), but I don't know if this is significant. cheers, David From aisaac at american.edu Thu Sep 20 10:08:37 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Sep 2007 10:08:37 -0400 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: <446833.24101.qm@web27709.mail.ukl.yahoo.com> References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: Use pylab.load, which is very configurable. (I often wish for a numpy equivalent.)
Cheers, Alan Isaac From david.huard at gmail.com Thu Sep 20 10:13:16 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 20 Sep 2007 10:13:16 -0400 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <46F245E4.10605@ar.media.kyoto-u.ac.jp> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> <46F245E4.10605@ar.media.kyoto-u.ac.jp> Message-ID: <91cf711d0709200713pa42a731t67713cdb27260a61@mail.gmail.com> 2007/9/20, David Cournapeau : > > Robert Kern wrote: > > David Huard wrote: > >> Hi Anne, > >> > >> 2007/9/19, Anne Archibald >> >: > >> > >> On 18/09/2007, David Huard >> > wrote: > >> > >> > For large data sets, I'm not sure I understand what you're > >> meaning. Do you > >> > intend to include netcdf or HDF5 files and provide an interface > to > >> access > >> > those data sets so users don't have to bother about the > underlying > >> engine ? > >> > Do we really want to distribute a package weighting > 1GB ? > >> > >> One of the points of this project, as I understand it, is to make > it > >> convenient for people to get and use real datasets. In particular, > one > >> possibility is to not include the data in this package, but instead > >> only a script to download it from (say) the HEASARC. Thus big > datasets > >> are not outrageous, and more to the point, we need to be able to > deal > >> with them whatever form they are in natively. > >> > >> > >> My understanding was rather : > >> " ... to make it convenient for people to get and use real datasets for > >> use in SciPy and NumPy examples, documentation and tutorials. " This > >> limits the scope of the dataset package, at least for starters. If some > >> tutorial deals with larger than memory issues, then using a specialized > >> binary format makes sense. 
However, I think that pretty basic datasets > >> can illustrate the use of most SciPy and NumPy functions. > > > > That's an important use case, certainly, but I had in mind uses cases > like the > > one Anne gave, too, when I suggested parts of the design that David > implemented. > > The scope is still fairly broad. > Yes, indeed, my sentence "to make it convenient for people to get and > use real datasets for use in SciPy and NumPy examples, documentation and > tutorials" was just a list of possible usages, not the only usages to > take into account. I realized also that my proposal sounded like I was > the only involved, which was not the case. I hope people involved in > previous discussion on that matter didn't take any offence. OK. So here is my understanding of what has been said so far about the scope of the package, please correct me if I'm wrong. * Provide data sets for testing, demos and tutorials of scipy and numpy functions. * Propose a standard format to store data in text/binary files. * Propose a format to represent the data internally (dictionary, record arrays, masked arrays, timeseries, etc). * Implement an API to store/retrieve the data to/from text or binary files based on the standard. * Provide utilities to import data sets from web archives and convert them to the proposed format. Regards, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Thu Sep 20 11:12:39 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 20 Sep 2007 10:12:39 -0500 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: What does pylab.load do that io.read_array can't? They look very similar to me? skiplines is slightly more intuitive than lines=(startline,-1), but other than that I think they are the same. But I could be wrong. 
I do this sort of thing alot and have my own fairly elaborate code for searching through a data file with a header of many lines to find the start of the data, but that is overkill in this case. I think a good, flexible, but easy to use function for this kind of thing is essential. io.read_array isn't perfect, but it is pretty good. If pylab.load has flexibility that io.read_array doesn't than additional features should be incorporated, IMHO. I would be interested in contributing to that (in my spare time :). My code depends on the csv module, which I think is part of the standard python distribution on all platforms. Ryan On 9/20/07, Alan G Isaac wrote: > Use pylab.load, which is very configurable. > (I often wish for a numpy equivalent.) > > Cheers, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From Joris.DeRidder at ster.kuleuven.be Thu Sep 20 11:38:52 2007 From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder) Date: Thu, 20 Sep 2007 17:38:52 +0200 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: On 20 Sep 2007, at 16:08, Alan G Isaac wrote: > Use pylab.load, which is very configurable. > (I often wish for a numpy equivalent.) You now have numpy.loadtxt() and numpy.savetxt(). Cheers, Genie in a bottle :-) Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From aisaac at american.edu Thu Sep 20 11:50:29 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Sep 2007 11:50:29 -0400 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: On Thu, 20 Sep 2007, Ryan Krauss apparently wrote: > What does pylab.load do that io.read_array can't? Well most obviously, the availability of unpack and converters. 
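The `unpack` and `converters` options Alan mentions also exist in `numpy.loadtxt`, which was modeled on `pylab.load`. A minimal sketch with made-up two-column data:

```python
import io
import numpy as np

# Two comma-separated columns of invented data; converters could be used to
# preprocess individual fields, and unpack=True returns one array per column
# instead of a single 2-D array.
text = io.StringIO("1,2.5\n2,3.5\n3,4.5\n")
x, y = np.loadtxt(text, delimiter=",", unpack=True)
print(x)  # [1. 2. 3.]
```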
Unless I am wrong, and I might well be, my reason to use pylab.load is very different. It is only **very** recently (since Jarrod's release) that it became reasonable to require my students to install SciPy, while NumPy+Matplotlib has been reasonable for some time. So call it habit. Secondarily, try this: import scipy help(scipy.io.read_array) It fails. Instead you have to from scipy import io help(io.read_array) Little unintuitive things like this add up to a major pain in the butt when teaching. > I think a good, flexible, but easy to use function for > this kind of thing is essential. io.read_array isn't > perfect, but it is pretty good. If pylab.load has > flexibility that io.read_array doesn't, then additional > features should be incorporated, IMHO. I would be > interested in contributing to that (in my spare time :). > My code depends on the csv module, which I think is part > of the standard python distribution on all platforms. Is there no way for pylab.load and scipy.io.read_array to become a single code base, now that mpl is shifting entirely to NumPy? Cheers, Alan Isaac From aisaac at american.edu Thu Sep 20 11:50:30 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Sep 2007 11:50:30 -0400 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: On Thu, 20 Sep 2007, Joris De Ridder apparently wrote: > You now have numpy.loadtxt() and numpy.savetxt(). wtf?? How long have these been around? Thanks! Alan (happy but sheepish) From gael.varoquaux at normalesup.org Thu Sep 20 12:58:41 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 20 Sep 2007 18:58:41 +0200 Subject: [SciPy-user] weave.accelerate Message-ID: <20070920165841.GB13219@clipper.ens.fr> Hi, I just discovered weave.accelerate. It looks very promising, but I cannot find any documentation on it.
Does anyone know where to get info, apart from reading the source code, which I don't find incredibly enlightening? Cheers, Gaël From millman at berkeley.edu Thu Sep 20 13:46:47 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 20 Sep 2007 10:46:47 -0700 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: On 9/20/07, Alan G Isaac wrote: > On Thu, 20 Sep 2007, Joris De Ridder apparently wrote: > > You now have numpy.loadtxt() and numpy.savetxt(). > > How long have these been around? Travis added them in April: http://projects.scipy.org/scipy/numpy/changeset/3722 Jarrod From robert.kern at gmail.com Thu Sep 20 14:00:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Sep 2007 13:00:44 -0500 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <91cf711d0709200713pa42a731t67713cdb27260a61@mail.gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> <46F245E4.10605@ar.media.kyoto-u.ac.jp> <91cf711d0709200713pa42a731t67713cdb27260a61@mail.gmail.com> Message-ID: <46F2B54C.6010008@gmail.com> David Huard wrote: > OK. So here is my understanding of what has been said so far about the > scope of the package, please correct me if I'm wrong. For my part, I would modify most of these. > * Provide data sets for testing, demos and tutorials of scipy and numpy > functions. Agree. > * Propose a standard format to store data in text/binary files. This wouldn't be on my radar at all. I think there is much less to be gained from this than having a reasonably consistent API at the Python level for accessing the data in whatever format it happens to be. > * Propose a format to represent the data internally (dictionary, record > arrays, masked arrays, timeseries, etc). Somewhat.
I think it's useful to have a consistent API at the surface: load() should probably always return a dictionary. However, I'm less concerned about standardizing what's underneath. Each dataset has different needs. Trying to force it into something inappropriate is a waste of effort. Instead of standards, I'd prefer (multiple) conventions that we simply encourage. We encourage those conventions by providing utilities that manipulate data that follows the conventions. For example, in his "Format of the data" section, David Cournapeau suggested a convention for machine learning datasets and some operations that would be useful to implement on top of that convention. For other fields, other conventions might be used. > * Implement an API to store/retrieve the data to/from text or binary > files based on the standard. Instead, I would provide some utilities for loading common formats. From the perspective of the user of the dataset, the only real API would be load() and the metadata. For the developer of the dataset, we would have a number of utilities to help them implement the load() function for their dataset. > * Provide utilities to import data sets from web archives and convert > them to the proposed format. Rather, provide utilities for importing data sets from a URL and caching them in a location established by convention. Parsing the files is dependent on the format; instead of writing format conversion code, just write the loading code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From millman at berkeley.edu Thu Sep 20 14:41:00 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 20 Sep 2007 11:41:00 -0700 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: <46F2B54C.6010008@gmail.com> References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> <46F245E4.10605@ar.media.kyoto-u.ac.jp> <91cf711d0709200713pa42a731t67713cdb27260a61@mail.gmail.com> <46F2B54C.6010008@gmail.com> Message-ID: On 9/20/07, Robert Kern wrote: > Rather, provide utilities for importing data sets from a URL and caching them in > a location established by convention. We just checked in some code from NIPY to do this: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/io/datasource.py I would love to extend the functionality of this module. So if anyone has any suggestions, please let me know. We will be adding some better documentation over the next few days as well. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From unpingco at osc.edu Thu Sep 20 14:50:50 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Thu, 20 Sep 2007 14:50:50 -0400 Subject: [SciPy-user] ipython: debug with breakpoint in subfunction? Message-ID: <46F25E93.AA84.0083.0@osc.edu> I am running the Enthought windows XP tool suite and am using ipython 0.7.2. I know I can do %run -d -b11 func.py to set a breakpoint at line 11 and run func.py in ipython. However, I call a number of sub-functions from func.py that reside in other files. How can I set a breakpoint for a sub-function in another file using ipython? Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elcorto at gmx.net Thu Sep 20 16:08:21 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 20 Sep 2007 22:08:21 +0200 Subject: [SciPy-user] ipython: debug with breakpoint in subfunction? In-Reply-To: <46F25E93.AA84.0083.0@osc.edu> References: <46F25E93.AA84.0083.0@osc.edu> Message-ID: <46F2D335.2090807@gmx.net> Jose Unpingco wrote: > I am running the Enthought windows XP tool suite and am using ipython 0.7.2. > > I know I can do %run -d -b11 func.py > > to set a breakpoint at line 11 and run func.py in ipython. However, I call a number of sub-functions from func.py that reside in other files. > > How can I set a breakpoint for a sub-function in another file using ipython? > This might work: suppose main.py, in which sub.py is imported: In [6]: %run -d main.py Breakpoint 1 at /home/elcorto/tmp/main.py:1 NOTE: Enter 'c' at the ipdb> prompt to start your script. > (1) ipdb> b sub.func Breakpoint 2 at /home/elcorto/tmp/sub.py:4 ipdb> c > /home/elcorto/tmp/main.py(1) 1---> 1 print 'hoho+++' 2 import sub 3 sub.func2() ipdb> hoho+++ lala > /home/elcorto/tmp/sub.py(5)func() 2 4 def func(): ----> 5 a = 1 6 b = 2 ipdb> 3 In [7]: -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams -------------- next part -------------- A non-text attachment was scrubbed... Name: main.py Type: text/x-python Size: 50 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: sub.py Type: text/x-python Size: 87 bytes Desc: not available URL: From stefan at sun.ac.za Thu Sep 20 16:15:52 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 20 Sep 2007 22:15:52 +0200 Subject: [SciPy-user] scoreatpercentile on 2D array gives unexpected result In-Reply-To: <1189479628.787536.107890@50g2000hsm.googlegroups.com> References: <1189479628.787536.107890@50g2000hsm.googlegroups.com> Message-ID: <20070920201552.GD712@mentat.za.net> Hi Vincent On Tue, Sep 11, 2007 at 03:00:28AM -0000, Vincent wrote: > I want to get certain percentiles of each column of an array (i.e., > 2.5th, 50th, and 97.5th). > > Testing the scipy scoreatpercentile function I get some results I > didn't expect. > > In [5]: z > Out[5]: > array([[1, 1, 1], > [1, 1, 1], > [4, 4, 3], > [1, 1, 1], > [1, 1, 1]]) > > The following works as expected: > > In [6]: N.median(z) > Out[6]: array([1, 1, 1]) > > Now using scoreatpercentile: > > In [54]: scipy.stats.scoreatpercentile(z,50) > Out[54]: array([3, 4, 4]) This should be fixed in SVN r3341. Cheers Stéfan From aisaac at american.edu Thu Sep 20 16:47:15 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Sep 2007 16:47:15 -0400 Subject: [SciPy-user] Reading in data into a python program!! In-Reply-To: References: <446833.24101.qm@web27709.mail.ukl.yahoo.com> Message-ID: On Thu, 20 Sep 2007, Jarrod Millman apparently wrote: > Travis added them in April: > http://projects.scipy.org/scipy/numpy/changeset/3722 Aha! OK, thanks. And big thanks to Travis!
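As a follow-up to the scoreatpercentile thread above: in current NumPy the per-column percentiles Vincent was after can be computed directly with `numpy.percentile` and its `axis` argument, which sidesteps the old 2-D bug entirely. A sketch using his example array:

```python
import numpy as np

z = np.array([[1, 1, 1],
              [1, 1, 1],
              [4, 4, 3],
              [1, 1, 1],
              [1, 1, 1]])

# The 2.5th, 50th and 97.5th percentile of each column, in one call.
p = np.percentile(z, [2.5, 50, 97.5], axis=0)
print(p[1])  # the 50th-percentile (median) row: [1. 1. 1.]
```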
Cheers, Alan From mattknox_ca at hotmail.com Thu Sep 20 16:31:41 2007 From: mattknox_ca at hotmail.com (Matt Knox) Date: Thu, 20 Sep 2007 20:31:41 +0000 (UTC) Subject: [SciPy-user] A first proposal for dataset organization References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> <46F245E4.10605@ar.media.kyoto-u.ac.jp> Message-ID: > David (Huard) already highlighted one problem with my proposal (time > series representation). I would really be interested in comments about > using MaskedArrays to handle missing data (I've never used it myself), > and the use of record arrays for the data; for example, I can see cases > where record arrays may be a problem (if all your data are homogenous, > you cannot treat the data as a big numpy array), but I don't know if > this is significant. Well, there are tools in the sandbox that handle all this kind of stuff. The new maskedarray implementation in the sandbox has a "MaskedRecords" class which allows for missing values in record arrays. The timeseries package handles time series of various frequencies, and is a subclass of MaskedArray so it also handles missing values too. There is also a "TimeSeriesRecords" class which is a subclass of the "MaskedRecords" class. This would probably be a really nice way to represent a lot of this data, but it is hard to say when/if this stuff will move out of the sandbox and into the core numpy/scipy distribution. If you have specific questions about the maskedarray or timeseries module, or the current numpy.ma module, start up a new thread and I'll answer what I can, and I'm sure others can fill in any gaps. 
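To make the MaskedArray question concrete, here is a minimal sketch of how `numpy.ma` represents missing data (the sentinel value and the data themselves are invented for illustration):

```python
import numpy as np
import numpy.ma as ma

# Treat -999.0 as a missing-value sentinel; reductions such as mean()
# then skip the masked entries automatically.
raw = np.array([1.0, 2.0, -999.0, 4.0])
data = ma.masked_values(raw, -999.0)
print(data.mean())  # mean of the three unmasked values, i.e. 7/3
```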
- Matt From pepe_kawumi at yahoo.co.uk Thu Sep 20 22:06:37 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 21 Sep 2007 02:06:37 +0000 (GMT) Subject: [SciPy-user] printing an entire matrix Message-ID: <154218.27856.qm@web27707.mail.ukl.yahoo.com> Hi, I need to check whether all the values in a matrix are right. However, Python only shows me the first and last three rows. Is there a way I can see the entire matrix? It's an 84*84 matrix whose contents I want to verify. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 20 22:51:13 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Sep 2007 21:51:13 -0500 Subject: [SciPy-user] printing an entire matrix In-Reply-To: <154218.27856.qm@web27707.mail.ukl.yahoo.com> References: <154218.27856.qm@web27707.mail.ukl.yahoo.com> Message-ID: <46F331A1.2060307@gmail.com> Perez Kawumi wrote: > Hi, > I need to check whether all the values in a matrix are right. However, Python > only shows me the first and last three rows. Is there a > way I can see the entire matrix? It's an 84*84 matrix whose contents I > want to verify. import sys import numpy numpy.set_printoptions(threshold=sys.maxint) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pepe_kawumi at yahoo.co.uk Thu Sep 20 22:52:50 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 21 Sep 2007 02:52:50 +0000 (GMT) Subject: [SciPy-user] generalised eigen value problem Message-ID: <665885.58461.qm@web27709.mail.ukl.yahoo.com> Hi I have got two square matrices A and B.
In matlab; [V,D] = EIG(A,B) produces a diagonal matrix D of generalized eigenvalues and a full matrix V whose columns are the corresponding eigenvectors so that A*V = B*V*D. trying to solve for V and D in python I switched D and V and used the command below [D,V] = linalg.eig(A,B) However, this sometimes gives me the right solutions and sometimes "wrong ones". Does anyone know how I can ensure that I always get the right solutions? Thanks Perez ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 20 23:00:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Sep 2007 22:00:26 -0500 Subject: [SciPy-user] generalised eigen value problem In-Reply-To: <665885.58461.qm@web27709.mail.ukl.yahoo.com> References: <665885.58461.qm@web27709.mail.ukl.yahoo.com> Message-ID: <46F333CA.7060507@gmail.com> Perez Kawumi wrote: > Hi > I have got two square matrices A and B. In matlab; > > > [V,D] = EIG(A,B) produces a diagonal matrix D of generalized > eigenvalues and a full matrix V whose columns are the > corresponding eigenvectors so that A*V = B*V*D. > > trying to solve for V and D in python > I switched D and V and used the command below > > > [D,V] = linalg.eig(A,B) > > However, this sometimes gives me the right solutions and sometimes > "wrong ones". Does anyone know how I can ensure that I always get the > right solutions? What do you mean by "right" and "wrong"? What code did you try? What results did you get? What results did you expect? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pepe_kawumi at yahoo.co.uk Thu Sep 20 23:27:11 2007 From: pepe_kawumi at yahoo.co.uk (Perez Kawumi) Date: Fri, 21 Sep 2007 03:27:11 +0000 (GMT) Subject: [SciPy-user] genaralised eigen value problem Message-ID: <259107.58100.qm@web27705.mail.ukl.yahoo.com> Just dont know why the value of my third eigen value is incorrect Matlab >> a=([1,2,3;4,5,6;7,8,9]) a = 1 2 3 4 5 6 7 8 9 >> b=([9,8,7;6,5,4;3,2,1]) b = 9 8 7 6 5 4 3 2 1 >> [c,d]=eig(a,b) c = -0.1538 - 0.0000i -0.1538 + 0.0000i -0.5000 -0.8462 - 0.0000i -0.8462 + 0.0000i 1.0000 1.0000 + 0.0000i 1.0000 - 0.0000i -0.5000 d = -1.0000 + 0.0000i 0 0 0 -1.0000 - 0.0000i 0 0 0 0.0172 Python >>> a=array(([1,2,3],[4,5,6],[7,8,9])) >>> b=array(([9,8,7],[6,5,4],[3,2,1])) >>> [d,c]= linalg.eig(a,b) >>> d array([-1. +1.50395773e-08j, -1. -1.50395773e-08j, 0.01724138 +0.00000000e+00j]) >>> c array([[-0.15384615 -1.73705175e-09j, -0.15384615 +1.73705175e-09j, -0.5 +0.00000000e+00j], [-0.84615385 -9.43749936e-12j, -0.84615385 +9.43749936e-12j, 1. +0.00000000e+00j], [ 1. +1.11534083e-11j, 1. -1.11534083e-11j, -0.5 +0.00000000e+00j]]) ___________________________________________________________ Want ideas for reducing your carbon footprint? Visit Yahoo! For Good http://uk.promotions.yahoo.com/forgood/environment.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 20 23:33:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Sep 2007 22:33:26 -0500 Subject: [SciPy-user] genaralised eigen value problem In-Reply-To: <259107.58100.qm@web27705.mail.ukl.yahoo.com> References: <259107.58100.qm@web27705.mail.ukl.yahoo.com> Message-ID: <46F33B86.9000605@gmail.com> Perez Kawumi wrote: > Just dont know why the value of my third eigen value is incorrect > d = > -1.0000 + 0.0000i 0 0 > 0 -1.0000 - 0.0000i 0 > 0 0 0.0172 > array([-1. +1.50395773e-08j, -1. -1.50395773e-08j, > 0.01724138 +0.00000000e+00j]) They look the same to me. 
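One way to check generalized eigenpairs that is independent of ordering, scaling, and sign conventions is to verify the defining relation A v = lambda B v directly. A small sketch with an invented, well-conditioned pair (the 3x3 pair in the thread has a singular B, which makes comparisons across tools noisier):

```python
import numpy as np
from scipy import linalg

a = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([[2.0, 0.0],
              [0.0, 2.0]])

# Generalized problem: a @ v[:, i] == w[i] * (b @ v[:, i]) for each column i.
w, v = linalg.eig(a, b)
for i in range(len(w)):
    residual = a @ v[:, i] - w[i] * (b @ v[:, i])
    assert np.allclose(residual, 0.0)
print(np.sort(w.real))  # [-0.5  1.5]
```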
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From markbak at gmail.com Fri Sep 21 04:24:40 2007 From: markbak at gmail.com (Mark Bakker) Date: Fri, 21 Sep 2007 10:24:40 +0200 Subject: [SciPy-user] Where do I send bug reports ? Bug in function iv Message-ID: <6946b9500709210124k430f31am9f6157b91ee2406f@mail.gmail.com> Hello - I couldn't find a place to send bug reports. I submitted this to the list a couple of weeks ago, but I haven't had a response yet, and I don't want it to get lost (I really hope somebody knows how to fix it). It seems that the modified Bessel function iv returns incorrect values for large arguments. iv(0,100) gives 5.72185663838e+041, while the equivalent jv(0,complex(0,100)) gives (1.07375170713e+042+0j). This latter result is the correct answer, as verified in Abramowitz and Stegun, Table 9.11, Page 428. Using jv is a work-around, but this should probably be fixed. The two results start to deviate for arguments over about 50, with jv giving the correct answer. On a related note, there are implementations for Bessel functions of integer order (jn, kn) but not for the modified Bessel function In. I guess this is because the function would be called 'in', but it would be nice to have a special function for integer order, and I am pretty sure they are around. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Fri Sep 21 04:27:37 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 21 Sep 2007 11:27:37 +0300 Subject: [SciPy-user] one more feature scipy website lacks Message-ID: <46F38079.9000301@scipy.org> Hi all, don't you think the scipy.org main page lacks a good feedback mechanism, such as polls? I know lots of sites provide them.
There is an empty space in the bottom-left corner of the page, so there could be polls there, such as: - what package(s)/function(s) are most missing in scipy? - what package(s)/function(s) are most useful? - what package(s)/function(s) need the most enhancements? - are you a student, a teacher, a scientist (etc)? - how many years of experience do you have with scipy? - what % of your code do you write in Python? etc., etc. I guess regular polls would enhance feedback and increase the number of site visits. There are lots of already-written scripts for voting, like this one: http://home.datacomm.ch/atair/perlscript/ Regards, Dmitrey. http://openopt.blogspot.com/ http://scipy.org/scipy/scikits/wiki/OpenOpt From millman at berkeley.edu Fri Sep 21 04:55:01 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 21 Sep 2007 01:55:01 -0700 Subject: [SciPy-user] Where do I send bug reports ? Bug in function iv In-Reply-To: <6946b9500709210124k430f31am9f6157b91ee2406f@mail.gmail.com> References: <6946b9500709210124k430f31am9f6157b91ee2406f@mail.gmail.com> Message-ID: Hey Mark, Thanks for following up with this bug report. Please register on the SciPy development wiki: http://projects.scipy.org/scipy/scipy/ There is a link to registering on the front page or you can go directly here: http://projects.scipy.org/scipy/scipy/register Once you have an account, log in and submit a new ticket. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Fri Sep 21 05:04:32 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 21 Sep 2007 02:04:32 -0700 Subject: [SciPy-user] ANN: SciPy 0.6.0 Message-ID: I'm pleased to announce the release of SciPy 0.6.0: http://scipy.org/Download SciPy is a package of tools for science and engineering for Python.
It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This release brings many bugfixes and speed improvements. Major changes since 0.5.2.1: * cluster o cleaned up kmeans code and added a kmeans2 function that adds several initialization methods * fftpack o fft speedups for complex data o major overhaul of fft source code for easier maintenance * interpolate o add Lagrange interpolating polynomial o fix interp1d so that it works for higher order splines * io o read and write basic .wav files * linalg o add Cholesky decomposition and solution of banded linear systems with Hermitian or symmetric matrices o add RQ decomposition * ndimage o port to NumPy API o fix byteswapping problem in rotate o better support for 64-bit platforms * optimize o nonlinear solvers module added o a lot of bugfixes and modernization * signal o add complex Morlet wavelet * sparse o significant performance improvements Thank you to everybody who contributed to the recent release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From Jim.Vickroy at noaa.gov Fri Sep 21 10:56:30 2007 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Fri, 21 Sep 2007 08:56:30 -0600 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux Message-ID: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> Hello, I've just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. 
The installation appeared to proceed normally, but the scipy import had the following result: python version: 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] Traceback (most recent call last): File "t-variant.py", line 4, in from scipy import * File "C:\Python25\lib\site-packages\scipy\linsolve\__init__.py", line 5, in import umfpack File "C:\Python25\lib\site-packages\scipy\linsolve\umfpack\__init__.py", line 3, in from umfpack import * File "C:\Python25\lib\site-packages\scipy\linsolve\umfpack\umfpack.py", line 11, in import scipy.sparse as sp File "C:\Python25\lib\site-packages\scipy\sparse\__init__.py", line 5, in from sparse import * File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 21, in from scipy.sparse.sparsetools import cscmux, csrmux, \ ImportError: cannot import name cscmux This did not happen with scipy 0.5.2.1. I uninstalled and reinstalled the package, but the failure continues. Apologies if this has been posted. I am a new subscriber and did not see this problem reported in the archive. -- jv -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Fri Sep 21 11:10:07 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 21 Sep 2007 08:10:07 -0700 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> References: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> Message-ID: On 9/21/07, Jim Vickroy wrote: > I've just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. The > installation appeared to proceed normally, but the scipy import had the > following result: Hey Jim, Could you try installing the exe to see if it is a problem with just the msi? Also, what version of NumPy are you running? 
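For install/import bug reports like this one, it helps to include the exact interpreter and package versions up front; a quick sketch:

```python
import sys
import numpy

# Version details that help diagnose install/import problems;
# scipy.__version__ can be printed the same way once scipy imports.
print(sys.version)
print(numpy.__version__)
```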
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fdu.xiaojf at gmail.com Fri Sep 21 11:18:15 2007 From: fdu.xiaojf at gmail.com (fdu.xiaojf at gmail.com) Date: Fri, 21 Sep 2007 23:18:15 +0800 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> References: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> Message-ID: <46F3E0B7.3070100@gmail.com> Jim Vickroy wrote: > Hello, > > > > I?ve just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. The > installation appeared to proceed normally, but the scipy import had the > following result: > > > > python version: 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 > bit (Intel)] > > Traceback (most recent call last): > > File "t-variant.py", line 4, in > > from scipy import * > > File "C:\Python25\lib\site-packages\scipy\linsolve\__init__.py", line > 5, in > > import umfpack > > File > "C:\Python25\lib\site-packages\scipy\linsolve\umfpack\__init__.py", line > 3, in > > from umfpack import * > > File > "C:\Python25\lib\site-packages\scipy\linsolve\umfpack\umfpack.py", line > 11, in > > import scipy.sparse as sp > > File "C:\Python25\lib\site-packages\scipy\sparse\__init__.py", line 5, > in > > from sparse import * > > File "C:\Python25\lib\site-packages\scipy\sparse\sparse.py", line 21, > in > > from scipy.sparse.sparsetools import cscmux, csrmux, \ > > ImportError: cannot import name cscmux > > > > This did not happen with scipy 0.5.2.1. > > > > I uninstalled and reinstalled the package, but the failure continues. > > > > Apologies if this has been posted. I am a new subscriber and did not > see this problem reported in the archive. > > > It works for me. I'm running python2.5.1, and I installed scipy using scipy-0.6.0.win32-py2.5.exe. 
From koepsell at gmail.com Fri Sep 21 12:05:04 2007 From: koepsell at gmail.com (killian koepsell) Date: Fri, 21 Sep 2007 09:05:04 -0700 Subject: [SciPy-user] scipy 0.6.0 on OSX: src/fblaswrap_veclib_c.c: No such file or directory Message-ID: hi, scipy 0.6.0 doesn't compile on my macbook pro, OSX 10.4.10. scipy 0.5.2.1 compiled without problems. output is attached below. kilian creating build/temp.macosx-10.4-i386-2.5/src creating build/temp.macosx-10.4-i386-2.5/build/src.macosx-10.4-i386-2.5/build/src.macosx-10.4-i386-2.5/scipy/linalg compile options: '-DNO_ATLAS_INFO=3 -Ibuild/src.macosx-10.4-i386-2.5 -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' extra options: '-msse3' gcc: build/src.macosx-10.4-i386-2.5/build/src.macosx-10.4-i386-2.5/scipy/linalg/fblasmodule.c gcc: src/fblaswrap_veclib_c.c powerpc-apple-darwin8-gcc-4.0.1: src/fblaswrap_veclib_c.c: No such file or directory i686-apple-darwin8-gcc-4.0.1: src/fblaswrap_veclib_c.c: No such file or directory powerpc-apple-darwin8-gcc-4.0.1: no input files i686-apple-darwin8-gcc-4.0.1: no input files lipo: can't figure out the architecture type of: /var/tmp//ccCUD6Oz.out powerpc-apple-darwin8-gcc-4.0.1: src/fblaswrap_veclib_c.c: No such file or directory i686-apple-darwin8-gcc-4.0.1: src/fblaswrap_veclib_c.c: No such file or directory powerpc-apple-darwin8-gcc-4.0.1: no input files i686-apple-darwin8-gcc-4.0.1: no input files lipo: can't figure out the architecture type of: /var/tmp//ccCUD6Oz.out error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -DNO_ATLAS_INFO=3 -Ibuild/src.macosx-10.4-i386-2.5 -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include 
-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c src/fblaswrap_veclib_c.c -o build/temp.macosx-10.4-i386-2.5/src/fblaswrap_veclib_c.o -msse3" failed with exit status 1 From chanley at stsci.edu Fri Sep 21 12:55:39 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 21 Sep 2007 12:55:39 -0400 Subject: [SciPy-user] build problem on 64bit RHE4 Message-ID: <46F3F78B.2060200@stsci.edu> Hi, We are having a problem building scipy on one of our RHE4 systems. This was the initial error. We installed the full blas libraries from netlib, defined the BLAS_SRC env variable and got an error about missing LAPACK libraries. This used to be a problem on 32bit machines but was fixed long ago. Any help would be appreciated. Thanks, Chris ----------- Build Log: This is on an RH4 AMD Opteron machine. lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/stsci/pyssg/Python-2.5.1/lib libraries lapack_atlas not found in /usr/stsci/pyssg/Python-2.5.1/lib libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /usr/stsci/pyssg/Python-2.5.1/lib libraries lapack_atlas not found in /usr/stsci/pyssg/Python-2.5.1/lib libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /usr/stsci/source/install-scipy/numpy/distutils/system_info.py:1221: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/stsci/pyssg/Python-2.5.1/lib libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /usr/stsci/source/install-scipy/numpy/distutils/system_info.py:1232: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /usr/stsci/source/install-scipy/numpy/distutils/system_info.py:1235: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
warnings.warn(LapackSrcNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 53, in setup_package() File "setup.py", line 45, in setup_package configuration=configuration ) File "../install-scipy/numpy/distutils/core.py", line 142, in setup config = configuration() File "setup.py", line 19, in configuration config.add_subpackage('scipy') File "../install-scipy/numpy/distutils/misc_util.py", line 772, in add_subpackage caller_level = 2) File "../install-scipy/numpy/distutils/misc_util.py", line 755, in get_subpackage caller_level = caller_level + 1) File "../install-scipy/numpy/distutils/misc_util.py", line 702, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "/tmp_mnt/stsciEWS4_x64/source/scipy-0.6.0/scipy/setup.py", line 10, in configuration config.add_subpackage('lib') File "../install-scipy/numpy/distutils/misc_util.py", line 772, in add_subpackage caller_level = 2) File "../install-scipy/numpy/distutils/misc_util.py", line 755, in get_subpackage caller_level = caller_level + 1) File "../install-scipy/numpy/distutils/misc_util.py", line 702, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/lib/setup.py", line 8, in configuration config.add_subpackage('lapack') File "../install-scipy/numpy/distutils/misc_util.py", line 772, in add_subpackage caller_level = 2) File "../install-scipy/numpy/distutils/misc_util.py", line 755, in get_subpackage caller_level = caller_level + 1) File "../install-scipy/numpy/distutils/misc_util.py", line 702, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/lib/lapack/setup.py", line 32, in configuration lapack_opt = get_info('lapack_opt',notfound_action=2) File "../install-scipy/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "../install-scipy/numpy/distutils/system_info.py", line 405, in get_info raise 
self.notfounderror,self.notfounderror.__doc__ numpy.distutils.system_info.LapackNotFoundError: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From robert.kern at gmail.com Fri Sep 21 13:29:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Sep 2007 12:29:29 -0500 Subject: [SciPy-user] build problem on 64bit RHE4 In-Reply-To: <46F3F78B.2060200@stsci.edu> References: <46F3F78B.2060200@stsci.edu> Message-ID: <46F3FF79.4090700@gmail.com> Christopher Hanley wrote: > Hi, > > We are having a problem building scipy on one of our RHE4 systems. > > This was the initial error. We installed the full blas libraries from > netlib, > defined the BLAS_SRC env variable and got an error about missing LAPACK > libraries. > > This use to be a problem on 32bit machines but was fixed long ago. > > Any help would be appreciated. Install LAPACK? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From Jim.Vickroy at noaa.gov Fri Sep 21 14:01:20 2007 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Fri, 21 Sep 2007 12:01:20 -0600 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: References: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> Message-ID: <016801c7fc79$6a04f2a0$41e3a8c0@sec.noaa.gov> -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Jarrod Millman Sent: Friday, September 21, 2007 9:10 AM To: SciPy Users List Subject: Re: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux On 9/21/07, Jim Vickroy wrote: > I've just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. The > installation appeared to proceed normally, but the scipy import had the > following result: Hey Jim, Could you try installing the exe to see if it is a problem with just the msi? OK, I just uninstalled the "msi" version of scipy 0.6.0 and reinstalled using scipy-0.6.0.win32-py2.5.exe. The exact same import exception is raised with the "exe" version. Also, what version of NumPy are you running? Here are particulars about my system: >>> import numpy >>> numpy.__version__ '1.0.3.1' >>> import sys >>> sys.version '2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)]' >>> import scipy >>> scipy.__version__ '0.6.0' >>> Don't know if this is important but notice that "import scipy" does not generate an exception while "from scipy import *" does. Thanks for your interest. 
-- jv Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From Jim.Vickroy at noaa.gov Fri Sep 21 14:52:16 2007 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Fri, 21 Sep 2007 12:52:16 -0600 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux -- my mistake ! In-Reply-To: <016801c7fc79$6a04f2a0$41e3a8c0@sec.noaa.gov> References: <015601c7fc5f$98105b90$41e3a8c0@sec.noaa.gov> <016801c7fc79$6a04f2a0$41e3a8c0@sec.noaa.gov> Message-ID: <016e01c7fc80$879152d0$41e3a8c0@sec.noaa.gov> The import error did not occur when I restarted my IDE -- which is where I was working when it occurred. I apologize for raising this false issue. -- jv -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Jim Vickroy Sent: Friday, September 21, 2007 12:01 PM To: 'SciPy Users List' Subject: Re: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Jarrod Millman Sent: Friday, September 21, 2007 9:10 AM To: SciPy Users List Subject: Re: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux On 9/21/07, Jim Vickroy wrote: > I've just installed scipy 0.6.0 via scipy-0.6.0.win32-py2.5.msi. The > installation appeared to proceed normally, but the scipy import had the > following result: Hey Jim, Could you try installing the exe to see if it is a problem with just the msi? OK, I just uninstalled the "msi" version of scipy 0.6.0 and reinstalled using scipy-0.6.0.win32-py2.5.exe. The exact same import exception is raised with the "exe" version. Also, what version of NumPy are you running? 
Here are particulars about my system: >>> import numpy >>> numpy.__version__ '1.0.3.1' >>> import sys >>> sys.version '2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)]' >>> import scipy >>> scipy.__version__ '0.6.0' >>> Don't know if this is important but notice that "import scipy" does not generate an exception while "from scipy import *" does. Thanks for your interest. -- jv Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From millman at berkeley.edu Fri Sep 21 15:49:19 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 21 Sep 2007 12:49:19 -0700 Subject: [SciPy-user] scipy 0.6.0 on OSX: src/fblaswrap_veclib_c.c: No such file or directory In-Reply-To: References: Message-ID: It looks like a problem with the tarball I generated. I will look into it and upload a new one later today. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cookedm at physics.mcmaster.ca Fri Sep 21 16:00:25 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 21 Sep 2007 16:00:25 -0400 Subject: [SciPy-user] ANN: SciPy 0.6.0 In-Reply-To: (Jarrod Millman's message of "Fri\, 21 Sep 2007 02\:04\:32 -0700") References: Message-ID: "Jarrod Millman" writes: > I'm pleased to announce the release of SciPy 0.6.0: > http://scipy.org/Download Big thanks for getting this out! -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From dencheva at stsci.edu Fri Sep 21 16:07:15 2007 From: dencheva at stsci.edu (Nadia Dencheva) Date: Fri, 21 Sep 2007 20:07:15 +0000 (UTC) Subject: [SciPy-user] build problem on 64bit RHE4 References: <46F3F78B.2060200@stsci.edu> <46F3FF79.4090700@gmail.com> Message-ID: Robert Kern gmail.com> writes: > > Christopher Hanley wrote: > > Hi, > > > > We are having a problem building scipy on one of our RHE4 systems. > > > > This was the initial error. We installed the full blas libraries from > > netlib, > > defined the BLAS_SRC env variable and got an error about missing LAPACK > > libraries. > > > > This use to be a problem on 32bit machines but was fixed long ago. > > > > Any help would be appreciated. > > Install LAPACK? My impression is (I may be wrong) that there's no web page which clearly states what the requirements are. For example, BLAS, LAPACK - hard requirements ATLAS - optional May be I just didn't find it but after running a search for LAPACK about 4 times I got: ============= Warning: You triggered the wiki's surge protection by doing too many requests in a short time. Please make a short break reading the stuff you already got. When you restart doing requests AFTER that, slow down or you might get locked out for a longer time! ================== Four is not too many and it was a short time because there were no results. If I was new to this I would probably not come back. Nadia Dencheva From cburns at berkeley.edu Fri Sep 21 18:06:04 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Fri, 21 Sep 2007 15:06:04 -0700 Subject: [SciPy-user] build problem on 64bit RHE4 In-Reply-To: References: <46F3F78B.2060200@stsci.edu> <46F3FF79.4090700@gmail.com> Message-ID: <764e38540709211506l7df56e74l9a87f9c60661feb6@mail.gmail.com> Is LAPACK installed in your /usr/lib64 directory? If so you may need to tell the config where to look. 
On my 64-bit FC7 box I added a site.cfg file in the scipy-trunk directory with the following: [DEFAULT] library_dirs = /usr/lib64 Chris On 9/21/07, Nadia Dencheva wrote: > > Robert Kern gmail.com> writes: > > > > > Christopher Hanley wrote: > > > Hi, > > > > > > We are having a problem building scipy on one of our RHE4 systems. > > > > > > This was the initial error. We installed the full blas libraries from > > > netlib, > > > defined the BLAS_SRC env variable and got an error about missing > LAPACK > > > libraries. > > > > > > This use to be a problem on 32bit machines but was fixed long ago. > > > > > > Any help would be appreciated. > > > > Install LAPACK? > > My impression is (I may be wrong) that there's no web page which clearly > states > what the requirements are. For example, > > BLAS, LAPACK - hard requirements > ATLAS - optional > > May be I just didn't find it but after running a search for LAPACK about 4 > times > I got: > > ============= > Warning: > You triggered the wiki's surge protection by doing too many requests in a > short > time. > Please make a short break reading the stuff you already got. > When you restart doing requests AFTER that, slow down or you might get > locked > out for a longer time! > ================== > > Four is not too many and it was a short time because there were no > results. > If I was new to this I would probably not come back. > > Nadia Dencheva > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Christopher Burns Software Engineer CIRL - UC Berkeley http://cirl.berkeley.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pearu at cens.ioc.ee Sat Sep 22 03:47:31 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 22 Sep 2007 10:47:31 +0300 (EEST) Subject: [SciPy-user] scipy 0.6.0 on OSX: src/fblaswrap_veclib_c.c: No such file or directory In-Reply-To: References: Message-ID: <60226.85.166.14.172.1190447251.squirrel@cens.ioc.ee> On Fri, September 21, 2007 10:49 pm, Jarrod Millman wrote: > It looks like a problem with the tarball I generated. I will look > into it and upload a new one later today. I have fixed the bug related to this issue in scipy trunk. Pearu From millman at berkeley.edu Sat Sep 22 04:31:01 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 22 Sep 2007 01:31:01 -0700 Subject: [SciPy-user] New scipy 0.6.0 uploads Message-ID: There is a new scipy-0.6.0.tar.gz on the sourceforge page, which contains the missing scipy/linalg/src/fblaswrap_veclib_c.c. There is also now a scipy-0.6.0-py2.4-win32.egg and scipy-0.6.0-py2.5-win32.egg. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Sun Sep 23 10:01:46 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Sep 2007 07:01:46 -0700 Subject: [SciPy-user] scipy.misc.limits deprecated Message-ID: Hello, As of r3362, scipy.misc.limits is officially deprecated: http://projects.scipy.org/scipy/scipy/changeset/3362 If you need to work with the machine limits, please use numpy.finfo instead: http://scipy.org/scipy/numpy/browser/trunk/numpy/lib/getlimits.py For example, from numpy import finfo, single single_epsilon = finfo(single).eps -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From jturner at gemini.edu Sun Sep 23 16:37:05 2007 From: jturner at gemini.edu (James Turner) Date: Sun, 23 Sep 2007 16:37:05 -0400 Subject: [SciPy-user] JOB: Data Process Developers at Gemini 
Observatory Message-ID: <46F6CE71.1080806@gemini.edu> Hi everyone, Gemini Observatory is advertising again for developers to work on Python-based data reduction software. If you're interested in applying, please see the instructions at the links below. It would be great to see applications from some developers here ;-). Apologies for the late notice (applications are due on 1 October). http://www.gemini.edu/jobs/#41 http://members.aas.org/JobReg/JobDetailPage.cfm?JobID=23796 Thanks! James. From david at ar.media.kyoto-u.ac.jp Mon Sep 24 01:23:25 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Sep 2007 14:23:25 +0900 Subject: [SciPy-user] A first proposal for dataset organization In-Reply-To: References: <46EE2ADE.2050602@ar.media.kyoto-u.ac.jp> <91cf711d0709181812v62726a9du284c5effb8513389@mail.gmail.com> <91cf711d0709190705j63db2c78jcbb4415463433361@mail.gmail.com> <46F15ACB.5070409@gmail.com> <46F245E4.10605@ar.media.kyoto-u.ac.jp> Message-ID: <46F749CD.3060701@ar.media.kyoto-u.ac.jp> Matt Knox wrote: >> David (Huard) already highlighted one problem with my proposal (time >> series representation). I would really be interested in comments about >> using MaskedArrays to handle missing data (I've never used it myself), >> and the use of record arrays for the data; for example, I can see cases >> where record arrays may be a problem (if all your data are homogenous, >> you cannot treat the data as a big numpy array), but I don't know if >> this is significant. >> > > Well, there are tools in the sandbox that handle all this kind of stuff. The > new maskedarray implementation in the sandbox has a "MaskedRecords" class > which allows for missing values in record arrays. The timeseries package > handles time series of various frequencies, and is a subclass of MaskedArray > so it also handles missing values too. There is also a "TimeSeriesRecords" > class which is a subclass of the "MaskedRecords" class. 
This would probably be > a really nice way to represent a lot of this data, but it is hard to say > when/if this stuff will move out of the sandbox and into the core numpy/scipy > distribution. > This sounds great. I am a bit worried about depending on sandboxed packages, though. My understanding, but I did not follow the discussion in detail, was that MaskedArrays would replace the current implementation in numpy, right ? > If you have specific questions about the maskedarray or timeseries module, or > the current numpy.ma module, start up a new thread and I'll answer what I can, > and I'm sure others can fill in any gaps. > Ok, I will take a look at them, because I am totally unfamiliar with them, cheers, David From aisaac at american.edu Mon Sep 24 14:20:07 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 24 Sep 2007 14:20:07 -0400 Subject: [SciPy-user] NIPY Message-ID: What is the relationship between NIPY and SciPy? It seems a classic SciKit, but it is not in SciKits. Anyway, do the old NIPY models and algorithms modules still exist somewhere? Thank you, Alan Isaac From millman at berkeley.edu Mon Sep 24 19:38:59 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Sep 2007 16:38:59 -0700 Subject: [SciPy-user] NIPY In-Reply-To: References: Message-ID: On 9/24/07, Alan G Isaac wrote: > What is the relationship between NIPY and SciPy? > It seems a classic SciKit, but it is not in SciKits. NIPY stands for Neuroimaging in Python and was started prior to the creation of Scikits. NIPY currently has its own repository and trac instance like some of the other old projects. Here is the NIPY user's page: http://neuroimaging.scipy.org/ I haven't spent much time trying to keep the front page updated because the project is still fairly young and needs more work before it will be generally useful. > Anyway, do the old NIPY models and algorithms modules > still exist somewhere?
Take a look at the developer's site: http://projects.scipy.org/neuroimaging/ni/wiki We just got funding for the project and are starting by focusing on getting the more generic stuff out of NIPY and into SciPy. For example, Jonathan Taylor's statistical models code is available in scipy.stats.models in svn. We are also working on integrating our io and image processing code into scipy.io and scipy.ndimage. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From aisaac at american.edu Tue Sep 25 10:45:09 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 25 Sep 2007 10:45:09 -0400 Subject: [SciPy-user] Dickey-Fuller test Message-ID: Nobody responded to my appeal for SciPy based or Python based unit root tests, so in case anyone cares, I cleaned up this Dickey-Fuller test (tested against EViews output): http://econpy.googlecode.com/svn/trunk/pytrix/unitroot.py Cheers, Alan Isaac From matthieu.brucher at gmail.com Tue Sep 25 16:43:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 25 Sep 2007 22:43:05 +0200 Subject: [SciPy-user] Signification of parameters Message-ID: Hi, Does someone know the meaning of the parameters in stats. f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b), especially a and b ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan2057 at gmx.de Wed Sep 26 07:11:09 2007 From: ryan2057 at gmx.de (J. K.) Date: Wed, 26 Sep 2007 13:11:09 +0200 Subject: [SciPy-user] array.tofile stops writing to file Message-ID: Hi, I am using SciPy 0.6.0 with NumPy 1:1.03-1ubuntu2 on Python 2.5.1.5ubuntu3. I am using arrays (about 120 rows with 7 columns) to store data and calculate stuff with the stored values. To be able to cross check the calculations, I wanted to store the arrays in files. 
I tried the different approaches: - Array.tofile(FileName, sep='\t', format = "%s") - Iteration over array k = 0 for row in Koeff: Filename.write(array[k, 0], array[k, 1], array[k, 3]) k = k + 1 and similar ways. Now the problem is, the write process just stops in the middle of a number. In the same loop I use to write the files I can print all those lines on screen, the arrays are complete. When using array.tofile, the same thing happens. I can print them on screen, but the file just ends in the middle of a value (always the same). On the other hand, I can write to file (not using arrays) without problems, the files can be as large as I want them to be. Is there something terribly wrong with the way I use it or is it a bug in numpy? If you need more detail, please tell me Thank you for your time and help, Jack Kinch From unpingco at osc.edu Wed Sep 26 08:31:22 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Wed, 26 Sep 2007 08:31:22 -0400 Subject: [SciPy-user] Stack two matrices on top of each other? Message-ID: <46F9EEA9.AA84.0083.0@osc.edu> a=randn(2,3) bmat([[a],[a]]) # stack them side-by-side But, this doesn't work with the semicolon: bmat([[a];[a]]) # not work, although it would be natural Is there another way to stack two matrices on top of each other? I know I can do bmat('a ; a') but I have a list of matrices I would like to stack and doing it this way means that I would have to assign a variable to each one of matrices like a0=A[0] a1=A[1] .. aN=A[N] which is what I'm doing now, but would prefer not to. Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From unpingco at osc.edu Wed Sep 26 08:35:54 2007 From: unpingco at osc.edu (Jose Unpingco) Date: Wed, 26 Sep 2007 08:35:54 -0400 Subject: [SciPy-user] Ipython history and editing Message-ID: <46F9EFB9.AA84.0083.0@osc.edu> One of the greatest things about Ipython is the ability to use one's favorite editor to edit previous entries as in > %edit _i18 which would edit the 18th input in the history. The annoying thing is that the so-edited command is not placed on the history, instead, the unhelpful 22: _ip.magic("edit _i18") is put there. I would like the output of the post-edited command to show in the %hist list. By the way, I know I can get the post-edited output using Out[22], but I would really like to see it on the %hist list. Is this possible? Please contact me if you have questions or need more information. Thanks! Jose Unpingco, Ph.D. (619)553-2922 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jr at sun.ac.za Wed Sep 26 08:40:00 2007 From: jr at sun.ac.za (Johann Rohwer) Date: Wed, 26 Sep 2007 14:40:00 +0200 Subject: [SciPy-user] Stack two matrices on top of each other? In-Reply-To: <46F9EEA9.AA84.0083.0@osc.edu> References: <46F9EEA9.AA84.0083.0@osc.edu> Message-ID: <46FA5320.4000704@sun.ac.za> Use numpy.hstack or numpy.vstack Johann Jose Unpingco wrote: > > a=randn(2,3) > bmat([[a],[a]]) # stack them side-by-side > > But, this doesn't work with the semicolon: > > bmat([[a];[a]]) # not work, although it would be natural > > Is there another way to stack two matrices on top of each other? I know > I can do > > bmat('a ; a') > > but I have a list of matrices I would like to stack and doing it this > way means that I would have to > assign a variable to each one of matrices like > > a0=A[0] > a1=A[1] > .. > aN=A[N] > > which is what I'm doing now, but would prefer not to. > > > Please contact me if you have questions or need more information. > > Thanks!
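Since numpy.vstack and numpy.hstack accept any sequence of arrays, Johann's suggestion also handles the list-of-matrices case directly, with no intermediate a0, a1, ... variables. A small sketch (the shapes are assumed for illustration):

```python
import numpy as np

# A list of (2, 3) matrices, standing in for Jose's list A.
A = [np.full((2, 3), i, dtype=float) for i in range(4)]

on_top = np.vstack(A)        # stacked on top of each other
side_by_side = np.hstack(A)  # stacked side by side

print(on_top.shape)        # (8, 3)
print(side_by_side.shape)  # (2, 12)
```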
From lbolla at gmail.com Wed Sep 26 08:58:55 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 26 Sep 2007 14:58:55 +0200 Subject: [SciPy-user] Stack two matrices on top of each other? In-Reply-To: <46F9EEA9.AA84.0083.0@osc.edu> References: <46F9EEA9.AA84.0083.0@osc.edu> Message-ID: <80c99e790709260558w530d49ft5f8b85e2d5f72d5a@mail.gmail.com> In [5]: a.shape Out[5]: (2, 3) In [6]: numpy.bmat([a,a]).shape Out[6]: (2, 6) In [7]: numpy.bmat([[a],[a]]).shape Out[7]: (4, 3) L. On 9/26/07, Jose Unpingco wrote: > > > a=randn(2,3) > bmat([[a],[a]]) # stack them side-by-side > > But, this doesn't work with the semicolon: > > bmat([[a];[a]]) # not work, although it would be natural > > Is there another way to stack two matrices on top of each other? I know I > can do > > bmat('a ; a') > > but I have a list of matrices I would like to stack and doing it this way > means that I would have to > assign a variable to each one of matrices like > > a0=A[0] > a1=A[1] > .. > aN=A[N] > > which is what I'm doing now, but would prefer not to. > > > > Please contact me if you have questions or need more information. > > Thanks! > > Jose Unpingco, Ph.D. > (619)553-2922 > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Wed Sep 26 13:57:01 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 26 Sep 2007 11:57:01 -0600 Subject: [SciPy-user] Ipython history and editing In-Reply-To: <46F9EFB9.AA84.0083.0@osc.edu> References: <46F9EFB9.AA84.0083.0@osc.edu> Message-ID: [ Please note that these ipython-specific questions should be posted on the ipython lists, not here ] On 9/26/07, Jose Unpingco wrote: > > > > One of the greatest things about Ipython is the ability to use one's > favorite editor to edit previous > entries as in > > > %edit _i18 > > which would edit the 18th input in the history. The annoying thing is that > the so-edited command is not placed on the history, instead, the unhelpful > > 22: _ip.magic("edit _i18") > > is put there. I would like the output of the post-edited command to show in > the %hist list. By the way, I know I can get the post-edited output using > Out[22], but I would really like to see it on the %hist list. > > Is this possible? Use the '-r' switch to %hist: In [1]: cd tmp/ /scratch/local/home/fperez/tmp In [2]: ls lih-wfns/ local/ matplotlib_tex.cache/ src@ In [3]: %hist 1: _ip.magic("cd tmp/") 2: _ip.system("ls -F ") 3: _ip.magic("hist ") In [4]: %hist -r 1: cd tmp/ 2: ls 3: %hist 4: %hist -r Typing '%history?' will give you all the details on the use of %history (%hist is just a shorthand for %history). Cheers, f From stefan at sun.ac.za Wed Sep 26 14:44:24 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 26 Sep 2007 20:44:24 +0200 Subject: [SciPy-user] array.tofile stops writing to file In-Reply-To: References: Message-ID: <20070926184424.GC32704@mentat.za.net> Hi Joe On Wed, Sep 26, 2007 at 01:11:09PM +0200, J. K. wrote: > I am using SciPy 0.6.0 with NumPy 1:1.03-1ubuntu2 on Python 2.5.1.5ubuntu3. > > I am using arrays (about 120 rows with 7 columns) to store data and > calculate stuff with the stored values.
To be able to cross check the > calculations, I wanted to store the arrays in files. > I tried the different approaches: > > - Array.tofile(FileName, sep='\t', format = "%s") > > - Iteration over array > k = 0 > for row in Koeff: > Filename.write(array[k, 0], array[k, 1], array[k, 3]) > k = k + 1 Is "FileName" above a file descriptor? If so, did you remember to close the file? I tested writing a random array to file using x = N.random.random((120,7)) x.tofile('/tmp/data.txt',sep='\t',format="%s") as well as f = open('/tmp/data.txt','w') x.tofile(f,sep='\t',format="%s") f.close() without a problem. Under Python 2.5 you can conveniently do with open('/tmp/data.txt') as f: x.tofile(f,sep='\t',format="%s") Cheers St?fan From ryan2057 at gmx.de Thu Sep 27 04:35:38 2007 From: ryan2057 at gmx.de (J. K.) Date: Thu, 27 Sep 2007 10:35:38 +0200 Subject: [SciPy-user] array.tofile stops writing to file In-Reply-To: <20070926184424.GC32704@mentat.za.net> References: <20070926184424.GC32704@mentat.za.net> Message-ID: > Is "FileName" above a file descriptor? If so, did you remember to > close the file? Yes, FileName was meant as a file descriptor. You were absolutely correct, I forgot to close the file. :) Thanks a lot for your help, it is interesting what kind of problems one can create by forgetting something like that. (And yes, I am a python/programming newb) Thanks again, Jack Kinch From berthe.loic at gmail.com Thu Sep 27 06:09:17 2007 From: berthe.loic at gmail.com (LB) Date: Thu, 27 Sep 2007 03:09:17 -0700 Subject: [SciPy-user] Pb with numpy.histogram Message-ID: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> Hi, I've got strange results with numpy.histogram : Here is its doc strings : """ Help on function histogram in module numpy.lib.function_base: histogram(a, bins=10, range=None, normed=False) Compute the histogram from a set of data. :Parameters: - `a` : array The data to histogram. n-D arrays will be flattened. 
- `bins` : int or sequence of floats, optional If an int, then the number of equal-width bins in the given range. Otherwise, a sequence of the lower bound of each bin. - `range` : (float, float), optional The lower and upper range of the bins. If not provided, then (a.min(), a.max()) is used. Values outside of this range are allocated to the closest bin. - `normed` : bool, optional If False, the result array will contain the number of samples in each bin. If True, the result array is the value of the probability *density* function at the bin normalized such that the *integral* over the range is 1. Note that the sum of all of the histogram values will not usually be 1; it is not a probability *mass* function. :Returns: - `hist` : array (n,) The values of the histogram. See `normed` for a description of the possible semantics. - `lower_edges` : float array (n,) The lower edges of each bin. """ and here is a snipplet of code : >>> r = random.normal(8, 2, 500) >>> r.min(), r.max() (1.164117097856284, 13.069426390055149) >>> ra (3, 12) >>> pdf, xpdf = histogram(r, nbins, range=ra, normed=False) >>> pdf array([ 1, 6, 5, 8, 30, 39, 53, 55, 61, 50, 45, 42, 32, 26, 17, 27]) >>> pdf.sum() 497 It seems I've lost 3 of my 500 random numbers ! >>> r[ r>= ra[1]] array([ 12.00676288, 12.8381615 , 12.48380931, 12.55392835, 12.26153469, 12.92869504, 12.58290343, 12.03782311, 13.06942639, 12.06375346, 12.02970414, 12.53556779, 12.54203654, 12.02611864, 12.85113934, 12.64692817]) >>> r[ r<= ra[0]] array([ 1.1641171 , 2.85873306, 2.92046745]) So this number match the number of experiments below the range given to histogram. This smells like a bug to me. Is there something I've misunderstood in the utilisation of numpy.histogram ? 
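[Editor's note] One way to be certain no samples are dropped by the range handling described above is to let the bin edges span the full data range. A sketch (note: this uses the explicit-edges form of `histogram`; behaviour of the old `range=` handling discussed in this thread differs across numpy versions):

```python
import numpy as np

r = np.random.normal(8, 2, 500)
nbins = 16

# Bin edges that cover the full data range, so every sample lands in a bin
# (with explicit edges, the rightmost edge is inclusive).
edges = np.linspace(r.min(), r.max(), nbins + 1)
pdf, xpdf = np.histogram(r, bins=edges)
assert pdf.sum() == 500  # no samples lost
```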
For information >>> numpy.__version__ '1.0.2' Regards, -- LB From aisaac at american.edu Thu Sep 27 11:00:35 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 27 Sep 2007 11:00:35 -0400 Subject: [SciPy-user] stats module accessibility Message-ID: Why not allow direct use of subpackages of scipy? That is, if I want to ``describe`` a series, why not allow:: scipy.stats.describe(myseries) instead of:: from scipy import stats stats.describe(myseries) or:: from scipy.stats import describe describe(myseries) I am sure this is a naive question about package structure, but I'd like to understand. Thank you, Alan Isaac From bnuttall at uky.edu Thu Sep 27 10:58:16 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Thu, 27 Sep 2007 10:58:16 -0400 Subject: [SciPy-user] stats module accessibility In-Reply-To: References: Message-ID: Alan, Why not: >>> from scipy.stats import describe >>> describe(series) It's all in the way you specify the import Brandon Powered by CardScan -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Alan G Isaac Sent: Thursday, September 27, 2007 11:01 AM To: SciPy Users List Subject: [SciPy-user] stats module accessibility Why not allow direct use of subpackages of scipy? That is, if I want to ``describe`` a series, why not allow:: scipy.stats.describe(myseries) instead of:: from scipy import stats stats.describe(myseries) or:: from scipy.stats import describe describe(myseries) I am sure this is a naive question about package structure, but I'd like to understand. 
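[Editor's note] The package-structure behaviour Alan is asking about is not specific to scipy; any Python package with subpackages works this way. A minimal demonstration using the standard library's `email` package (chosen only because it needs no third-party install):

```python
import sys

# A bare import of a package does not import its subpackages...
import email
assert "email.mime" not in sys.modules

# ...but importing the submodule explicitly makes the dotted access work.
import email.mime.text
assert "email.mime.text" in sys.modules
msg = email.mime.text.MIMEText("hello")
```

The same logic explains why `import scipy` alone does not make `scipy.stats.describe` reachable, while `import scipy.stats` does.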
Thank you, Alan Isaac _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Thu Sep 27 11:29:34 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 27 Sep 2007 11:29:34 -0400 Subject: [SciPy-user] stats module accessibility In-Reply-To: References: Message-ID: On 27/09/2007, Alan G Isaac wrote: > Why not allow direct use of subpackages of scipy? > > That is, if I want to ``describe`` a series, why not > allow:: > > scipy.stats.describe(myseries) In [1]: import scipy.stats In [2]: scipy.stats.describe? Type: function Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.5/site-packages/scipy/stats/stats.py Definition: scipy.stats.describe(a, axis=0) Docstring: [...] You can't just do "import scipy" and access the subpackages, because that would mean that a bare "import scipy" had to recursively load all subpackages, which can be expensive - many shared libraries to load, for example. This is standard behaviour for python packages. Anne From pgmdevlist at gmail.com Thu Sep 27 11:45:11 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 27 Sep 2007 11:45:11 -0400 Subject: [SciPy-user] ANN: maskedarray Message-ID: <200709271145.11842.pgmdevlist@gmail.com> All, The latest version of maskedarray has just been released on the scipy SVN sandbox. This version fixes the inconsistencies in filling (see below) and introduces some minor modifications for optimization purposes (see below as well). Many thanks to Eric Firing and Matt Knox for the fruitful discussions at the origin of this release! In addition, a bench.py file has been introduced, to compare the speed of numpy.ma and maskedarray. Once again, thanks to Eric for his first draft. Please feel free to try it and send me some feedback. Modifications: * Consistent filling ! 
In numpy.ma, the division of array A by array B works in several steps: - A is filled w/ 0 - B is filled w/ 1 - A/B is computed - the output mask is updated as the combination of A.mask, B.mask and the domain mask (B==0) The problems with this approach are that (i) it's not useful to fill A and B beforehand if the values will be masked anyway; (ii) nothing prevents infs to show up, as the domain is taken into account at the end only. In this latest version of maskedarray, the same division is decomposed as: - a copy of B._data is filled with 1 with the domain (B==0) - the division of A._data by this copy is computed - the output mask is updated as the combination of A.mask, B.mask and the domain mask (B==0). Prefilling on the domain avoids the presence of nans/infs. However, this comes with the price of making some functions and methods slower than their numpy.ma counterparts, as you'll be able to observe for sqrt and log with the bench.py file. An alternative would be to avoid filling at all, at the risk of leaving nans and infs. * masked_invalid / fix_invalid Two new functions are introduced. masked_invalid(x) masks x where x is nan or inf. fix_invalid(x) returns (a copy of) x, where invalid values (nans & infs) are replaced by fill_value. * No mask shrinking Following Paul Dubois and Sasha's example, I eventually had to get rid of the semi-automatic shrinking of the mask in __getitem__, which appeared to be a major bottleneck. In other words, one can end up with an array full of False instead of nomask, which may slow things down a bit. You can force a mask back to nomask with the new shrink_mask method. *_sharedmask Here again, I followed Paul and Sasha's ideas and reintroduce the _sharedmask flag to prevent inadequate propagation of the mask. When creating a new array with x=masked_array(data, mask=m), x._mask is initially a reference to m and x._sharedmask is True. When x is modified, x._mask is copied to prevent a propagation back to m. 
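[Editor's note] The "consistent filling" scheme Pierre describes — pre-fill the divisor on its domain so no infs/nans are ever produced, then combine the masks — can be sketched in plain numpy. `masked_divide` below is a hypothetical helper for illustration, not the maskedarray API:

```python
import numpy as np

def masked_divide(a, b, amask, bmask):
    # Sketch of the scheme above: where b == 0 (the domain), divide by a
    # safe placeholder instead, and mask that position in the output.
    domain = (b == 0)
    b_safe = np.where(domain, 1.0, b)
    result = a / b_safe
    mask = amask | bmask | domain
    return result, mask

res, mask = masked_divide(np.array([1.0, 2.0, 3.0]),
                          np.array([1.0, 0.0, 3.0]),
                          np.zeros(3, bool), np.zeros(3, bool))
# mask -> [False, True, False]; no inf/nan is ever produced
```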
From aisaac at american.edu Thu Sep 27 12:08:46 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 27 Sep 2007 12:08:46 -0400 Subject: [SciPy-user] stats module accessibility In-Reply-To: References: Message-ID: On Thu, 27 Sep 2007, Anne Archibald apparently wrote: > You can't just do "import scipy" and access the subpackages, because > that would mean that a bare "import scipy" had to recursively load all > subpackages, which can be expensive many shared libraries > to load, for example. OK. I guessed that would be the answer. > This is standard behaviour for python packages. As I warned, it was a naive question. As a user, what I notice of course is that some subpackages are immediately available (e.g., random and fft) and that others are not. Thanks, Alan From robert.kern at gmail.com Thu Sep 27 12:44:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Sep 2007 11:44:17 -0500 Subject: [SciPy-user] stats module accessibility In-Reply-To: References: Message-ID: <46FBDDE1.7060304@gmail.com> Alan G Isaac wrote: > On Thu, 27 Sep 2007, Anne Archibald apparently wrote: >> You can't just do "import scipy" and access the subpackages, because >> that would mean that a bare "import scipy" had to recursively load all >> subpackages, which can be expensive many shared libraries >> to load, for example. > > OK. I guessed that would be the answer. > >> This is standard behaviour for python packages. > > As I warned, it was a naive question. > As a user, what I notice of course is that some > subpackages are immediately available (e.g., > random and fft) and that others are not. That's numpy, not scipy. numpy is just small enough for that to be feasible. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david.huard at gmail.com Thu Sep 27 13:28:21 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 27 Sep 2007 13:28:21 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> Message-ID: <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> Hi LB, I think histogram has had this weird behavior since the numeric era and a lot of code may break if we fix it. Basically, histogram discards the lower than range values as outliers but puts the higher than range values into the last bin. I'm generally using my own histograming routines, I could send them your way if you're interested. David 2007/9/27, LB : > > Hi, > > I've got strange results with numpy.histogram : > > Here is its doc strings : > """ > Help on function histogram in module numpy.lib.function_base: > > histogram(a, bins=10, range=None, normed=False) > Compute the histogram from a set of data. > > :Parameters: > - `a` : array > The data to histogram. n-D arrays will be flattened. > - `bins` : int or sequence of floats, optional > If an int, then the number of equal-width bins in the given > range. > Otherwise, a sequence of the lower bound of each bin. > - `range` : (float, float), optional > The lower and upper range of the bins. If not provided, then > (a.min(), > a.max()) is used. Values outside of this range are allocated > to the > closest bin. > - `normed` : bool, optional > If False, the result array will contain the number of samples > in each bin. > If True, the result array is the value of the probability > *density* > function at the bin normalized such that the *integral* over > the range > is 1. Note that the sum of all of the histogram values will > not usually > be 1; it is not a probability *mass* function. > > :Returns: > - `hist` : array (n,) > The values of the histogram. 
See `normed` for a description of > the > possible semantics. > - `lower_edges` : float array (n,) > The lower edges of each bin. > """ > > and here is a snipplet of code : > >>> r = random.normal(8, 2, 500) > >>> r.min(), r.max() > (1.164117097856284, 13.069426390055149) > >>> ra > (3, 12) > >>> pdf, xpdf = histogram(r, nbins, range=ra, normed=False) > >>> pdf > array([ 1, 6, 5, 8, 30, 39, 53, 55, 61, 50, 45, 42, 32, 26, 17, > 27]) > >>> pdf.sum() > 497 > > It seems I've lost 3 of my 500 random numbers ! > > >>> r[ r>= ra[1]] > array([ 12.00676288, 12.8381615 , 12.48380931, 12.55392835, > 12.26153469, 12.92869504, 12.58290343, 12.03782311, > 13.06942639, 12.06375346, 12.02970414, 12.53556779, > 12.54203654, 12.02611864, 12.85113934, 12.64692817]) > > >>> r[ r<= ra[0]] > array([ 1.1641171 , 2.85873306, 2.92046745]) > > So this number match the number of experiments below the range given > to histogram. > This smells like a bug to me. > Is there something I've misunderstood in the utilisation of > numpy.histogram ? > > For information > >>> numpy.__version__ > '1.0.2' > > Regards, > > -- > LB > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu Sep 27 14:12:38 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 27 Sep 2007 14:12:38 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com><91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> Message-ID: On Thu, 27 Sep 2007, David Huard apparently wrote: > I'm generally using my own histograming routines, I could > send them your way if you're interested. 
If they share the SciPy license, please consider posting them or adding them to your sandbox directory. Cheers, Alan Isaac From loredo at astro.cornell.edu Thu Sep 27 14:24:18 2007 From: loredo at astro.cornell.edu (Tom Loredo) Date: Thu, 27 Sep 2007 14:24:18 -0400 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs Message-ID: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> Hi folks- I just reported some strange scipy crashes I've experienced on my MacBook running OS X Tiger & Python 2.5.1. I'd like to find out more about the conditions producing the crash. If OS X users could do the following simple test and report the results, it would be very helpful. Please also report details about your platform (machine, OS version, Python, numpy & scipy versions). All you have to do is start Python, import scipy, and run scipy.test() *multiple times*: >>> import scipy >>> scipy.test() [snip] >>> scipy.test() and so on, up to 10 times. For me, with Python-2.5.1, numpy-1.0.3.1, and scipy-0.6.0, right after install I get a crash at the 10th time. Subsequent attempts (with new Python invocations) give crashes earlier and earlier. Eventually the crash is fully reproducible at the 3rd scipy.test(), always crashing here: Found 42 tests for scipy.lib.lapack Found 41 tests for scipy.linalg.basic Segmentation fault I get similar crashes with scipy-0.5.2.1, but the conditions are not reproducible. On a G4 (PPC) machine running Py-2.4.4 and scipy-0.5.2.1, I don't get any crashes (I ran scipy.test() 12 times in a row). Your help would be appreciated in sorting out the conditions under which such crashes happen. Thanks, Tom PS: For the curious, I was led to this somewhat accidentally, due to problems with linalg I'm experiencing under RHEL 5 and another user is experiencing with FC4. I was trying to see if the problems were platform-specific. 
------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From david.huard at gmail.com Thu Sep 27 14:39:06 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 27 Sep 2007 14:39:06 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com> <91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com> Message-ID: <91cf711d0709271139w1cb99633u600abfdd9c199ec2@mail.gmail.com> 2007/9/27, Alan G Isaac : > > On Thu, 27 Sep 2007, David Huard apparently wrote: > > I'm generally using my own histograming routines, I could > > send them your way if you're interested. > > If they share the SciPy license, please consider posting > them or adding them to your sandbox directory. Alan, You'll find the histogram.py module in scipy/sandbox/dhuard/ Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From bblais at bryant.edu Thu Sep 27 14:51:57 2007 From: bblais at bryant.edu (Brian Blais) Date: Thu, 27 Sep 2007 14:51:57 -0400 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> Message-ID: <4EC79C96-654A-49D0-A7A7-5AB3AE0FD499@bryant.edu> On Sep 27, 2007, at Sep 27:2:24 PM, Tom Loredo wrote: > All you have to do is start Python, import scipy, and run > scipy.test() *multiple times*: > OS X 10.4.10 Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) >>> scipy.__version__ '0.5.3.dev3105' I ran the scipy.test() 20 times, with no crash. 
I did get two fails: ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1.6723473439110595e-36j DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 8.2300675769387774e-37j DESIRED: (-9+2j) bb -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dominique.orban at gmail.com Thu Sep 27 14:52:08 2007 From: dominique.orban at gmail.com (Dominique Orban) Date: Thu, 27 Sep 2007 14:52:08 -0400 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> Message-ID: <8793ae6e0709271152t5c1d93eblf75e1f1e4e84f2c7@mail.gmail.com> On 9/27/07, Tom Loredo wrote: > > Hi folks- > > I just reported some strange scipy crashes I've experienced on my > MacBook running OS X Tiger & Python 2.5.1. I'd like to find out > more about the conditions producing the crash. If OS X users > could do the following simple test and report the results, it > would be very helpful. Please also report details about your > platform (machine, OS version, Python, numpy & scipy versions). > > All you have to do is start Python, import scipy, and run > scipy.test() *multiple times*: > > >>> import scipy > >>> scipy.test() > > [snip] > > >>> scipy.test() > > and so on, up to 10 times. For me, with Python-2.5.1, numpy-1.0.3.1, > and scipy-0.6.0, right after install I get a crash at the 10th time. > Subsequent attempts (with new Python invocations) give crashes earlier > and earlier. Eventually the crash is fully reproducible at the > 3rd scipy.test(), always crashing here: > > Found 42 tests for scipy.lib.lapack > Found 41 tests for scipy.linalg.basic > '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/fblas.so'> > Segmentation fault > > I get similar crashes with scipy-0.5.2.1, but the conditions are not > reproducible. > > On a G4 (PPC) machine running Py-2.4.4 and scipy-0.5.2.1, I don't > get any crashes (I ran scipy.test() 12 times in a row). > > Your help would be appreciated in sorting out the conditions under > which such crashes happen. 
> > Thanks, > Tom > > PS: For the curious, I was led to this somewhat accidentally, > due to problems with linalg I'm experiencing under RHEL 5 and > another user is experiencing with FC4. I was trying to see if > the problems were platform-specific. I don't have any crash after 20 calls to scipy.test() from within Ipython. My config is: In [21]: scipy.__version__ Out[21]: '0.7.0.dev3369' In [22]: import numpy In [23]: numpy.__version__ Out[23]: '1.0.4.dev3964' In [24]: scipy.__config__.show() amd_info: libraries = ['amd'] library_dirs = ['/Users/dpo/local/lib'] define_macros = [('SCIPY_AMD_H', None)] swig_opts = ['-I/Users/dpo/local/LinearAlgebra/UMFPACK/AMD/Include'] include_dirs = ['/Users/dpo/local/LinearAlgebra/UMFPACK/AMD/Include'] umfpack_info: libraries = ['umfpack', 'amd'] library_dirs = ['/Users/dpo/local/lib'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] swig_opts = ['-I/Users/dpo/local/LinearAlgebra/UMFPACK/UMFPACK/Include', '-I/Users/dpo/local/LinearAlgebra/UMFPACK/AMD/Include'] include_dirs = ['/Users/dpo/local/LinearAlgebra/UMFPACK/UMFPACK/Include', '/Users/dpo/local/LinearAlgebra/UMFPACK/AMD/Include'] lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] djbfft_info: NOT AVAILABLE fftw3_info: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] mkl_info: NOT AVAILABLE I compiled SciPy from svn with gcc 4.3.0 and gfortran 4.2.0. The machine is an Intel MacBook Pro with OSX 10.4.10. In scipy.test() I always get an error in the test module for the single-precision complex dot products in the fblas wrapper. 
The return value is meaningless: Traceback (most recent call last): File "/Users/dpo/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1.6841453353081064e-36j DESIRED: (-9+2j) Given the return value 1.6841453353081064e-36j, I am not surprised that segfaults occur in that module. Reports on that error are rampant in the mailing list, but it seems nobody has been bothered enough to look into it. The double precision complex dot product functions return the correct result, though, so it might just be a casting issue. Dominique From borgulya at gyer2.sote.hu Thu Sep 27 15:46:23 2007 From: borgulya at gyer2.sote.hu (BORGULYA =?iso-8859-2?q?G=E1bor?=) Date: Thu, 27 Sep 2007 21:46:23 +0200 Subject: [SciPy-user] array TypeError: expected a readable buffer object Message-ID: <200709272146.25345.borgulya@gyer2.sote.hu> Hi List! Could someone explain me why I get an error message for the first expression and why no error for the second? In [1]: import scipy In [2]: a = scipy.array(['1','2','3',None,'4','5']) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/gab/ TypeError: expected a readable buffer object In [3]: a = scipy.array([1,2,3,None,4,5]) In [4]: a Out[4]: array([1, 2, 3, None, 4, 5], dtype=object) I expected that the first expression would evaluate as array(['1', '2', '3', None, '4', '5'], dtype=object) but it did not. Please Cc my email address when replying to the list, thank you! 
G?bor From lbolla at gmail.com Thu Sep 27 15:57:02 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 27 Sep 2007 21:57:02 +0200 Subject: [SciPy-user] EMpy Message-ID: <80c99e790709271257w4ff4064bl752d4fab94f3853b@mail.gmail.com> I'm very pleased to announce the release of EMpy (Electromagnetic Python), a suite of numerical algorithms widely used in electromagnetism. The package, in its very-alpha stage by now, only contains the transfer matrix and the rigorous coupled wave analysis algorithms, and some handy functions and classes (to model materials, for examples). The idea is to expand it in the near future with new algorithms and interfaces to existent software. Package's homepage: http://empy.sourceforge.net Anyone interested in contributing, please contact me: lbolla at users.sourceforge.net. The package is based on Numpy/Scipy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 27 15:59:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Sep 2007 14:59:02 -0500 Subject: [SciPy-user] array TypeError: expected a readable buffer object In-Reply-To: <200709272146.25345.borgulya@gyer2.sote.hu> References: <200709272146.25345.borgulya@gyer2.sote.hu> Message-ID: <46FC0B86.101@gmail.com> BORGULYA G?bor wrote: > Hi List! > > Could someone explain me why I get an error message for the first > expression and why no error for the second? > > In [1]: import scipy > > In [2]: a = scipy.array(['1','2','3',None,'4','5']) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most recent > call last) > > /home/gab/ > > TypeError: expected a readable buffer object > > In [3]: a = scipy.array([1,2,3,None,4,5]) > > In [4]: a > Out[4]: array([1, 2, 3, None, 4, 5], dtype=object) > > I expected that the first expression would evaluate as > array(['1', '2', '3', None, '4', '5'], dtype=object) > but it did not. 
What version of numpy are you using? With numpy 1.0.3 and a recent SVN checkout, it works for me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cburns at berkeley.edu Thu Sep 27 16:27:02 2007 From: cburns at berkeley.edu (Christopher Burns) Date: Thu, 27 Sep 2007 13:27:02 -0700 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <8793ae6e0709271152t5c1d93eblf75e1f1e4e84f2c7@mail.gmail.com> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> <8793ae6e0709271152t5c1d93eblf75e1f1e4e84f2c7@mail.gmail.com> Message-ID: <764e38540709271327u555efebeq9191a46fd4160f7@mail.gmail.com> > Traceback (most recent call last): > File "/Users/dpo/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > line 76, in check_dot > AssertionError: > > Given the return value 1.6841453353081064e-36j, I am not surprised > that segfaults occur in that module. Reports on that error are rampant > in the mailing list, but it seems nobody has been bothered enough to > look into it. The double precision complex dot product functions > return the correct result, though, so it might just be a casting > issue. The blas, check_dot AssertionError is a known issue and is an active ticket to be resolved in the 0.7 release of SciPy, scheduled for 12/20/2007. A patch was applied in the 0.6 release that resolved one of the two errors in 0.5.x. The errors appear to be an issue with a handful of fortran subroutines in Apple's Accelerate Framework. Work continues on a fix. 
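[Editor's note] Returning to Gábor's mixed string/`None` array question above: on numpy versions where the direct call raises `TypeError`, a hedged workaround is to request the object dtype explicitly rather than letting numpy infer it:

```python
import numpy as np

# Forcing dtype=object sidesteps the dtype inference that tripped on
# mixing strings with None in older numpy versions.
a = np.array(['1', '2', '3', None, '4', '5'], dtype=object)
```

This yields the `array(['1', '2', '3', None, '4', '5'], dtype=object)` result the original poster expected.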
-- Christopher Burns Software Engineer CIRL - UC Berkeley http://cirl.berkeley.edu From matthieu.brucher at gmail.com Thu Sep 27 17:08:39 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 27 Sep 2007 23:08:39 +0200 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: Can someone help ? Someone has a clue ? 2007/9/25, Matthieu Brucher : > > Hi, > > Does someone know the meaning of the parameters in stats. > f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b), especially a and b ? > > Matthieu > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Thu Sep 27 17:15:53 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Thu, 27 Sep 2007 17:15:53 -0400 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: Matthieu I've never used the function, but... >>> from scipy.stats import f_value_wilks_lambda >>> help(f_value_wilks_lambda) Help on function f_value_wilks_lambda in module scipy.stats.stats: f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b) Calculation of Wilks lambda F-statistic for multivarite data, per Maxwell & Delaney p.657. >>> I suggest you try the reference. You might also find additional info by looking at the Python source code. Brandon [cid:image001.jpg at 01C8012A.0F2498D0] Powered by CardScan ________________________________ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Matthieu Brucher Sent: Thursday, September 27, 2007 5:09 PM To: SciPy Users List Subject: Re: [SciPy-user] Signification of parameters Can someone help ? Someone has a clue ? 2007/9/25, Matthieu Brucher >: Hi, Does someone know the meaning of the parameters in stats.f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b), especially a and b ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 17011 bytes Desc: image001.jpg URL: From matthieu.brucher at gmail.com Thu Sep 27 17:35:57 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 27 Sep 2007 23:35:57 +0200 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: 2007/9/27, Nuttall, Brandon C : > > Matthieu > > > > I've never used the function, but? > > > > >>> from scipy.stats import f_value_wilks_lambda > > >>> help(f_value_wilks_lambda) > > Help on function f_value_wilks_lambda in module scipy.stats.stats: > > > > f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b) > > Calculation of Wilks lambda F-statistic for multivarite data, per > > Maxwell & Delaney p.657. > > > > >>> > > > > I suggest you try the reference. You might also find additional info by > looking at the Python source code. > Thank you for the answer but I already tried the code, but no explanation, and looking on the web didn't help me find an explanation of the algorithm. I don't know if I can access somewhere the book, I'll try, but if someone has it, please let us know the meaning of the special parameters ;) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu Sep 27 18:03:04 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 27 Sep 2007 18:03:04 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <91cf711d0709271139w1cb99633u600abfdd9c199ec2@mail.gmail.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com><91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com><91cf711d0709271139w1cb99633u600abfdd9c199ec2@mail.gmail.com> Message-ID: On Thu, 27 Sep 2007, David Huard apparently wrote: > You'll find the histogram.py module in scipy/sandbox/dhuard/ Thanks! I notice you do not specify a license. Is everything in the sandbox considered to be under the SciPy license? 
Cheers, Alan Isaac From gnurser at googlemail.com Thu Sep 27 18:00:01 2007 From: gnurser at googlemail.com (George Nurser) Date: Thu, 27 Sep 2007 23:00:01 +0100 Subject: [SciPy-user] 4 test failures in test_odr for scipy svn 3371 Message-ID: <1d1e6ea70709271500n2dac8eeej4510656f811b3929@mail.gmail.com> I compiled latest svn version, using gcc 3.4.4 on RH Linux Opteron 64-bit, and intel fortran v9.1: python setup.py config --fcompiler=intelem build_clib --fcompiler=intelem build_ext --fcompiler=intelem install >& inst.log & I had modified numpy/distutils/intel.py so as to include -xW flag for ifort, as recomended by Intel for the Opteron. scipy.test(level=1) gave four failures, in: test_explicit (scipy.tests.test_odr.test_odr, test_lorentz (scipy.tests.test_odr.test_odr) test_multi (scipy.tests.test_odr.test_odr) test_pearson (scipy.tests.test_odr.test_odr) --George Nurser. ====================================================================== FAIL: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 50, in test_explicit -8.7849712165253724e-02]), File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.26462971e+03, -5.42545890e+01, -8.64250389e-02]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) ====================================================================== FAIL: test_lorentz (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 295, in test_lorentz 3.7798193600109009e+00]), File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.00000000e+03, 1.00000000e-01, 3.80000000e+00]) y: array([ 1.43067808e+03, 1.33905090e-01, 3.77981936e+00]) ====================================================================== FAIL: test_multi (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 191, in test_multi 0.5101147161764654, 0.5173902330489161]), File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4. , 2. , 7. 
, 0.4, 0.5]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ====================================================================== FAIL: test_pearson (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 238, in test_pearson np.array([ 5.4767400299231674, -0.4796082367610305]), File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/AMD64/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1., 1.]) y: array([ 5.47674003, -0.47960824]) ---------------------------------------------------------------------- Ran 1771 tests in 4.373s FAILED (failures=4) From borgulya at gyer2.sote.hu Thu Sep 27 18:00:10 2007 From: borgulya at gyer2.sote.hu (BORGULYA Gábor) Date: Fri, 28 Sep 2007 00:00:10 +0200 Subject: [SciPy-user] array TypeError: expected a readable buffer object In-Reply-To: <46FC0B86.101@gmail.com> References: <200709272146.25345.borgulya@gyer2.sote.hu> <46FC0B86.101@gmail.com> Message-ID: <200709280000.11778.borgulya@gyer2.sote.hu> > BORGULYA Gábor wrote: > > In [1]: import scipy > > > > In [2]: a = scipy.array(['1','2','3',None,'4','5']) > > ---------------------------------------------------------------------- > >----- exceptions.TypeError Traceback > > (most recent call last) > > > > /home/gab/ > > > > TypeError: expected a readable buffer object On Thursday 27 September 2007, Robert Kern wrote: > What version of numpy are you using? With numpy 1.0.3 and a recent SVN > checkout, it works for me. Thank you Robert for testing it on your system.
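The failing call quoted above can be sidestepped in any numpy version by requesting an object dtype explicitly. A small sketch (the workaround is my illustration, not something posted in the thread):

```python
# Sketch: building an array that mixes strings and None by asking for
# dtype=object explicitly, which avoids the dtype-inference failure
# seen above in older numpy versions.
import numpy as np

a = np.array(['1', '2', '3', None, '4', '5'], dtype=object)

# The missing entry can then be masked out before numeric conversion.
mask = np.array([v is not None for v in a])    # True where a value exists
nums = np.array([float(v) for v in a[mask]])   # convert the rest to floats
```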
I am using Gentoo Linux with the following packages: szitakoto ~ # emerge --search python * dev-lang/python Latest version available: 2.5.1-r2 Latest version installed: 2.4.4 Size of files: 9,169 kB Homepage: http://www.python.org/ Description: Python is an interpreted, interactive, object-oriented programming language. License: PSF-2.2 szitakoto ~ # emerge --search scipy * sci-libs/scipy Latest version available: 0.5.2.1 Latest version installed: 0.5.1 Size of files: 5,859 kB Homepage: http://www.scipy.org/ Description: Scientific algorithms library for Python License: BSD szitakoto ~ # emerge --search numpy * dev-python/numpy Latest version available: 1.0.3.1 Latest version installed: 1.0.1-r1 Size of files: 1,465 kB Homepage: http://numeric.scipy.org/ Description: Python array processing for numbers, strings, records, and objects License: BSD True, I could try upgrading the packages. I will report if it solves the problem or not. Gábor From brian.clowers at pnl.gov Thu Sep 27 18:04:57 2007 From: brian.clowers at pnl.gov (Clowers, Brian H) Date: Thu, 27 Sep 2007 15:04:57 -0700 Subject: [SciPy-user] Multi-peak fitting Message-ID: After perusing the archives I'm still a bit stumped. I'm looking to create a script in python that will fit multiple peaks to an array of spectroscopic data. It appears as though there are some C++ (fityk) and python (peak-o-mat) programs designed to do this, however, I'd like to incorporate such a function into automated batch-type processing. Ideally, it would be nice to access those functions directly but my level of proficiency with python is not quite there yet. Does anyone have an example of how this might be done using the libraries currently available or be willing to share a custom script using the built in functionality of scipy and numpy? My data can be quite simple (i.e. having just one Gaussian peak) or other cases in which many individual and overlapping Gaussian shapes may exist.
My current solution is to export the array to a text file and perform the multi-peak fitting using either IGOR Pro or MATLAB. It is my understanding that in order to fit multiple peaks a script designed to identify peaks and provide the initial estimates for the peak fitting is required. I'm not sure how to include such a script with a fitting routine that takes into account all of the potential features in a spectrum. Any help is greatly appreciated. Cheers, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From borgulya at gyer2.sote.hu Thu Sep 27 18:13:49 2007 From: borgulya at gyer2.sote.hu (BORGULYA Gábor) Date: Fri, 28 Sep 2007 00:13:49 +0200 Subject: [SciPy-user] array TypeError: expected a readable buffer object In-Reply-To: <200709280000.11778.borgulya@gyer2.sote.hu> References: <200709272146.25345.borgulya@gyer2.sote.hu> <46FC0B86.101@gmail.com> <200709280000.11778.borgulya@gyer2.sote.hu> Message-ID: <200709280013.51293.borgulya@gyer2.sote.hu> > > BORGULYA Gábor wrote: > > > In [1]: import scipy > > > > > > In [2]: a = scipy.array(['1','2','3',None,'4','5']) > > > -------------------------------------------------------------------- > > >-- ----- exceptions.TypeError > > > Traceback (most recent call last) > > > > > > /home/gab/ > > > > > > TypeError: expected a readable buffer object > On Thursday 27 September 2007, Robert Kern wrote: > > What version of numpy are you using? With numpy 1.0.3 and a recent SVN > > checkout, it works for me. On Friday 28 September 2007, BORGULYA Gábor wrote: > * dev-python/numpy > Latest version available: 1.0.3.1 > Latest version installed: 1.0.1-r1 Robert, thank you very much for the solution! Upgrading numpy has solved the problem. This may have been a bug.
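The batch peak-fitting Brian describes earlier in this digest can be sketched with `scipy.optimize.leastsq`, which was already in the SciPy of this era. The sum-of-Gaussians model, the synthetic data, and the hand-written starting guesses below are illustrative assumptions, not anyone's posted code:

```python
# Sketch: fitting a sum of Gaussians to a 1-D "spectrum" with
# scipy.optimize.leastsq. Peak count and initial guesses are assumed
# known here; a real pipeline would estimate them first.
import numpy as np
from scipy.optimize import leastsq

def gaussians(p, x):
    """Sum of Gaussians; p = [amp1, cen1, wid1, amp2, cen2, wid2, ...]."""
    y = np.zeros_like(x)
    for amp, cen, wid in zip(p[0::3], p[1::3], p[2::3]):
        y += amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)
    return y

def residuals(p, x, y):
    return y - gaussians(p, x)

# Synthetic two-peak spectrum with a little noise (seeded, reproducible).
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 500)
true = [1.0, 3.0, 0.5, 0.6, 6.5, 0.8]
y = gaussians(true, x) + 0.01 * rng.randn(x.size)

p0 = [0.8, 2.5, 1.0, 0.5, 7.0, 1.0]   # rough hand-written initial estimates
popt, ier = leastsq(residuals, p0, args=(x, y))
```

In practice the initial estimates in `p0` would come from a peak-picking pass (e.g. derivative sign changes, as discussed further down the thread), since least-squares fits of overlapping peaks are sensitive to the starting point.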
It works now: In [1]: import scipy /usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test In [2]: a = scipy.array(['1','2','3',None,'4','5']) In [3]: a Out[3]: array([1, 2, 3, None, 4, 5], dtype=object) G?bor From rhc28 at cornell.edu Thu Sep 27 18:20:54 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Thu, 27 Sep 2007 18:20:54 -0400 Subject: [SciPy-user] Multi-peak fitting In-Reply-To: References: Message-ID: Brian, I'm just in the middle of developing some classes for feature extraction in data, particularly for peaks in the kind of data you have. The same classes store quantitative information (like peak positions, heights...) that can be extracted by objective functions in a fitting algorithm. The code will become part of PyDSTool (and as such is somewhat integrated with scipy and numpy) but I am happy to share with you what I have so far. It works well for the fitting problems I am working on in my research and it's relatively easy to use. It hasn't been tried on a wider range of problems yet. I warn you that it comes with a little extra baggage because I'm setting this whole thing up as part of a "bigger idea" involving model estimation for dynamical systems and scientific data. But email me if you want to try it out! -Rob From robert.kern at gmail.com Thu Sep 27 18:21:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Sep 2007 17:21:23 -0500 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com><91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com><91cf711d0709271139w1cb99633u600abfdd9c199ec2@mail.gmail.com> Message-ID: <46FC2CE3.2080903@gmail.com> Alan G Isaac wrote: > On Thu, 27 Sep 2007, David Huard apparently wrote: >> You'll find the histogram.py module in scipy/sandbox/dhuard/ > > Thanks! > I notice you do not specify a license. 
> Is everything in the sandbox considered to be > under the SciPy license? As a matter of policy, no one should check something in there that isn't. Assuming mistakes aren't made and committers understand what they are doing, then yes, everything there is under the SciPy license. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Sep 27 18:41:51 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 27 Sep 2007 18:41:51 -0400 Subject: [SciPy-user] Pb with numpy.histogram In-Reply-To: <46FC2CE3.2080903@gmail.com> References: <1190887757.463306.305620@d55g2000hsg.googlegroups.com><91cf711d0709271028l7eab63c4sb6f1919b95efd697@mail.gmail.com><91cf711d0709271139w1cb99633u600abfdd9c199ec2@mail.gmail.com> <46FC2CE3.2080903@gmail.com> Message-ID: >> Is everything in the sandbox considered to be >> under the SciPy license? On Thu, 27 Sep 2007, Robert Kern apparently wrote: > As a matter of policy, no one should check something in there that isn't. > Assuming mistakes aren't made and committers understand what they are doing, > then yes, everything there is under the SciPy license. Thanks! Alan From lists.steve at arachnedesign.net Thu Sep 27 19:33:41 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Thu, 27 Sep 2007 19:33:41 -0400 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> Message-ID: On Sep 27, 2007, at 2:24 PM, Tom Loredo wrote: > All you have to do is start Python, import scipy, and run > scipy.test() *multiple times*: Ran the test 20 times ... no segfault, only one error in the check_dot function (as mentioned by Christopher) Python version 2.4.4 GCC 4.0.1 (Apple Computer, Inc. 
build 5367) gfortran from the http://r.research.att.com/tools/ (version 4.2.1) MacBook Pro OS X.4.10 In [4]: scipy.__version__ Out[4]: '0.7.0.dev3351' In [6]: numpy.__version__ Out[6]: '1.0.4.dev4074' -steve From manouchk at gmail.com Thu Sep 27 20:10:44 2007 From: manouchk at gmail.com (Emmanuel) Date: Thu, 27 Sep 2007 21:10:44 -0300 Subject: [SciPy-user] Multi-peak fitting In-Reply-To: References: Message-ID: I think I may be quite interested in this kind of thing, if you want to share with me too. Emmanuel Favre-Nicolin On 9/27/07, Rob Clewley wrote: > > Brian, I'm just in the middle of developing some classes for feature > extraction in data, particularly for peaks in the kind of data you > have. The same classes store quantitative information (like peak > positions, heights...) that can be extracted by objective functions in > a fitting algorithm. The code will become part of PyDSTool (and as > such is somewhat integrated with scipy and numpy) but I am happy to > share with you what I have so far. It works well for the fitting > problems I am working on in my research and it's relatively > easy to use. It hasn't been tried on a wider range of problems yet. I > warn you that it comes with a little extra baggage because I'm setting > this whole thing up as part of a "bigger idea" involving model > estimation for dynamical systems and scientific data. But email me if > you want to try it out! > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From william.ratcliff at gmail.com Thu Sep 27 20:44:23 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 27 Sep 2007 20:44:23 -0400 Subject: [SciPy-user] Multi-peak fitting In-Reply-To: References: Message-ID: <827183970709271744p65c28b7ak9869ac753ea4b2af@mail.gmail.com> Are you using a point and click method of choosing initial parameters? I've used that in the past and have code in matlab that does that. Alternatively, for the case where I know the number of peaks, I use the first and second derivative to look for initial parameters before using a least squares method for the actual fitting. I've also implemented this in MATLAB (still needs to be converted to python). For well separated peaks, it works rather nicely. Now, I'm interested in trying something using Bayesian methods to try to also determine how many peaks are present and their positions in the case where I don't know the widths--I'd like to be able to safely automate this procedure for when there are larger numbers of files to be fitted and the point and click method becomes tedious, but where the peaks are sufficiently overlapped that my current automated method gives poor initial predictions. This would be along the lines of Sivia et. al. Has anyone attempted an implementation of this yet? Cheers, William (btw. for my current automated procedure, I haven't converted it over to python yet because I didn't notice a savitsky-golay filter already implemented in scipy--is there one now?) On 9/27/07, Emmanuel wrote: > > I think I may be quite interested in this kind of thing, if you want to > share with me too. > > Emmanuel Favre-Nicolin > > > On 9/27/07, Rob Clewley < rhc28 at cornell.edu> wrote: > > > > Brian, I'm just in the middle of developing some classes for feature > > extraction in data, particularly for peaks in the kind of data you > > have. The same classes store quantitative information (like peak > > positions, heights...) 
that can be extracted by objective functions in > > a fitting algorithm. The code will become part of PyDSTool (and as > > such is somewhat integrated with scipy and numpy) but I am happy to > > share with you what I have so far. It works well for the fitting > > problems I am working on in my research and it's relatively > > easy to use. It hasn't been tried on a wider range of problems yet. I > > warn you that it comes with a little extra baggage because I'm setting > > this whole thing up as part of a "bigger idea" involving model > > estimation for dynamical systems and scientific data. But email me if > > you want to try it out! > > > > -Rob > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Sep 27 20:50:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Sep 2007 19:50:31 -0500 Subject: [SciPy-user] Multi-peak fitting In-Reply-To: <827183970709271744p65c28b7ak9869ac753ea4b2af@mail.gmail.com> References: <827183970709271744p65c28b7ak9869ac753ea4b2af@mail.gmail.com> Message-ID: <46FC4FD7.8020703@gmail.com> william ratcliff wrote: > (btw. for my current automated procedure, I haven't converted it over to > python yet because I didn't notice a savitsky-golay filter already > implemented in scipy--is there one now?) Not in scipy itself, no, but there is an implementation: http://new.scipy.org/Cookbook/SavitzkyGolay -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu Sep 27 22:46:17 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 28 Sep 2007 11:46:17 +0900 Subject: [SciPy-user] OS X users - Please try multiple scipy.test() runs In-Reply-To: References: <1190917458.46fbf552976b4@astrosun2.astro.cornell.edu> Message-ID: <46FC6AF9.3030008@ar.media.kyoto-u.ac.jp> Steve Lianoglou wrote: > On Sep 27, 2007, at 2:24 PM, Tom Loredo wrote: > > >> All you have to do is start Python, import scipy, and run >> scipy.test() *multiple times*: >> > > > Ran the test 20 times ... no segfault, only one error in the > check_dot function (as mentioned by Christopher) > The error causes bad memory access, so the error, while reproducible sometimes, does not crash all the time: it is really configuration dependent. Before the problem is fixed, a temporary workaround would be to disable scipy.lib, since you are not likely to need it: just comment out the line which imports lib in scipy/setup.py inside scipy sources: config.add_subpackage('lib') -> #config.add_subpackage('lib') cheers, David From rhc28 at cornell.edu Fri Sep 28 00:25:08 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Fri, 28 Sep 2007 00:25:08 -0400 Subject: [SciPy-user] Multi-peak fitting In-Reply-To: References: Message-ID: I'm not familiar with the properties of the Savitzky-Golay filter but one thing to be concerned about in general is introducing a phase bias into the data by filtering out noise before finding features. (Some fancy filters supposedly get around this by basically filtering twice -- once going "forwards" then going "backwards"). I've posted my small amount of code and some sample data at my website: www.mathstat.gsu.edu/~matrhc/context.zip The test example for my abstract feature detection classes gets right on with fitting a local quadratic to a 'spike' over an appropriate number of points of the raw noisy data.
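The Savitzky-Golay filter that comes up in this thread was only a Cookbook recipe at the time; later SciPy releases ship it as `scipy.signal.savgol_filter` (added around SciPy 0.14, long after this exchange). A sketch of the property that makes it attractive for peak work, namely that it smooths noise without flattening polynomial features up to the chosen order:

```python
# Sketch: Savitzky-Golay smoothing with scipy.signal.savgol_filter
# (a modern replacement for the Cookbook recipe linked above).
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(-1, 1, 101)
cubic = 1 - 2 * x + 3 * x ** 3          # a polynomial "peak-like" signal

# An 11-point window fitting degree-3 polynomials reproduces any cubic
# exactly: smoothing that does not distort the underlying feature.
smoothed = savgol_filter(cubic, window_length=11, polyorder=3)

# On noisy data the same filter attenuates the noise while keeping shape.
rng = np.random.RandomState(1)
noisy = cubic + 0.05 * rng.randn(x.size)
denoised = savgol_filter(noisy, window_length=11, polyorder=3)
```

This shape-preserving behaviour is why Savitzky-Golay is often preferred over a plain moving average before derivative-based peak picking.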
That's done by making concrete feature sub-classes in neuro_data.py specific to my problem. Of course there's always some statistical bias introduced, and it requires some other assumptions, but part of the point is that I've written more general purpose classes where it's up to you to put in what assumptions are appropriate for your problem. It contains a readme with more detailed information, including about how to make it standalone, and a working example test script (but only if you are running PyDSTool :). The underlying feature detection classes (in context.py) are essentially standalone so you can decouple those from PyDSTool easily at least. Constructive feedback is welcome, as ever... HTH! -Rob From manuhack at gmail.com Fri Sep 28 03:03:22 2007 From: manuhack at gmail.com (Manu Hack) Date: Fri, 28 Sep 2007 03:03:22 -0400 Subject: [SciPy-user] scipy.stats.lognorm Message-ID: <50af02ed0709280003y56d7c47amde2d495ce523792d@mail.gmail.com> Hi all, I have a quick question on scipy.stats.lognorm. From the manual it said: Lognormal distribution lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) for x > 0, s > 0. At the same time there are loc and scale to control the shape of the distribution. So it's like there are 3 parameters to control the shape of the distribution (but 2 should be enough to specify one). In fact, if we look at R, the function dlnorm, plnorm, etc. are simple and just did the job. And after searching and trying different parameters and compare with R's function, I still couldn't figure out how to use scipy.stats.lognorm. So am I missing anything there?
Manu From matthieu.brucher at gmail.com Fri Sep 28 03:16:24 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Sep 2007 09:16:24 +0200 Subject: [SciPy-user] scipy.stats.lognorm In-Reply-To: <50af02ed0709280003y56d7c47amde2d495ce523792d@mail.gmail.com> References: <50af02ed0709280003y56d7c47amde2d495ce523792d@mail.gmail.com> Message-ID: Hi, Do not worry about loc and scale. The first thing to know is that you modify the shape of your distribution with s. Then, with loc and scale, you can move the center and change the scale. Don't think that scale is another way of defining the shape, it's only a way to change the scale of the distribution. Matthieu 2007/9/28, Manu Hack : > > Hi all, > > I have a quick question on scipy.stats.lognorm. From the manual it said: > > Lognormal distribution > > lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) > for x > 0, s > 0. > > At the same time there are loc and scale to control the shape of the > distribution. So it's like there are 3 parameters to control the > shape of the distribution (but 2 should be enough to specify one). > In fact, if we look at R, the function dlnorm, plnorm, etc. are > simple and just did the job. And after searching and trying different > parameters and compare with R's function, I still couldn't figure out > how to use scipy.stats.lognorm. So am I missing anything there? > > Manu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Fri Sep 28 03:22:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Sep 2007 02:22:24 -0500 Subject: [SciPy-user] scipy.stats.lognorm In-Reply-To: <50af02ed0709280003y56d7c47amde2d495ce523792d@mail.gmail.com> References: <50af02ed0709280003y56d7c47amde2d495ce523792d@mail.gmail.com> Message-ID: <46FCABB0.7040305@gmail.com> Manu Hack wrote: > Hi all, > > I have a quick question on scipy.stats.lognorm. From the manual it said: > > Lognormal distribution > > lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) > for x > 0, s > 0. > > At the same time there are loc and scale to control the shape of the > distribution. Yes, they are omitted from the docstring because they behave in exactly the same way for all distributions. Replace x with (x-loc)/scale if you want to be explicit. > So it's like there are 3 parameters to control the > shape of the distribution (but 2 should be enough to specify one). > In fact, if we look at R, the function dlnorm, plnorm, etc. are > simple and just did the job. Ours is more general than R's, albeit in a somewhat unconventional way. Conventionally, log-normal distributions are only defined on the positive real line, and there is no location parameter. We extend this to the full real line and add a location parameter to shift it. If you don't want it, don't use it. > And after searching and trying different > parameters and compare with R's function, I still couldn't figure out > how to use scipy.stats.lognorm. So am I missing anything there? The scale parameter here corresponds to the exponential of the mean of the Gaussian in log-space (exp(meanlog) in the terms of the dlnorm documentation). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
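Robert's mapping between the SciPy and R parameterizations can be checked numerically: `s` plays the role of R's `sdlog`, `scale = exp(meanlog)`, and `loc` stays at its default of 0. A sketch:

```python
# Sketch: scipy.stats.lognorm with s = sdlog and scale = exp(meanlog)
# matches the textbook (and R dlnorm) log-normal density, loc = 0.
import numpy as np
from scipy.stats import lognorm

meanlog, sdlog = 0.5, 0.75            # R's dlnorm parameters
x = np.linspace(0.1, 5.0, 50)

scipy_pdf = lognorm.pdf(x, sdlog, loc=0, scale=np.exp(meanlog))

# Closed-form density in the meanlog/sdlog convention.
manual_pdf = (1.0 / (x * sdlog * np.sqrt(2 * np.pi))
              * np.exp(-(np.log(x) - meanlog) ** 2 / (2 * sdlog ** 2)))
```

The same substitution gives the usual moments, e.g. the mean comes out as `exp(meanlog + sdlog**2 / 2)`.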
-- Umberto Eco From timmichelsen at gmx-topmail.de Thu Sep 27 10:28:10 2007 From: timmichelsen at gmx-topmail.de (Tim) Date: Thu, 27 Sep 2007 14:28:10 +0000 (UTC) Subject: [SciPy-user] What happened to python-idl? Message-ID: Hello, I stumbled upon the announcement of Python-IDL: http://mail.python.org/pipermail/python-announce-list/2003-January/001987.html That all linked to http://www.astro.uio.no/~mcmurry/python-idl/ which is a dead link. Does anyone know what has happened to this package? What would the members of the list do if you have some good and useful code in IDL but rather would like to use it or similar routines in Python? Thanks in advance, Tim From bruno.chazelas at ias.u-psud.fr Fri Sep 28 04:39:30 2007 From: bruno.chazelas at ias.u-psud.fr (bruno) Date: Fri, 28 Sep 2007 10:39:30 +0200 Subject: [SciPy-user] What happened to python-idl? In-Reply-To: References: Message-ID: <46FCBDC2.7030800@ias.u-psud.fr> Hello, what about: http://gnudatalanguage.sourceforge.net/ Bruno Tim wrote: > Hello, > I stumbled upon the announcement of Python-IDL: > http://mail.python.org/pipermail/python-announce-list/2003-January/001987.html > That all linked to > http://www.astro.uio.no/~mcmurry/python-idl/ > which is a dead link. > > Does anyone know what has happened to this package? > > What would the members of the list do if you have some good and useful code in > IDL but rather would like to use it or similar routines in Python?
> > Thanks in advance, > Tim > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bnuttall at uky.edu Fri Sep 28 07:38:50 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Fri, 28 Sep 2007 07:38:50 -0400 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: A search for authors Maxwell and Delaney at the University of Kentucky library online catalog reveals: Maxwell, S.E., and Delaney, H.D., 1999, Designing Experiments and Analyzing Data: A Model Comparison Perspective: Lawrence Erlbaum, 902 p. ISBN 9780805837063 (ebook ISBN 9780585282459). The volume is available electronically and I note there is a discussion of Wilks-lambda on page 657. Not being a statistician, I have reached the limit of what help I can provide. Brandon [cid:image001.jpg at 01C801A2.9C71C2D0] Powered by CardScan ________________________________ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Matthieu Brucher Sent: Thursday, September 27, 2007 5:36 PM To: SciPy Users List Subject: Re: [SciPy-user] Signification of parameters 2007/9/27, Nuttall, Brandon C >: Matthieu I've never used the function, but... >>> from scipy.stats import f_value_wilks_lambda >>> help(f_value_wilks_lambda) Help on function f_value_wilks_lambda in module scipy.stats.stats: f_value_wilks_lambda(ER, EF, dfnum, dfden, a, b) Calculation of Wilks lambda F-statistic for multivarite data, per Maxwell & Delaney p.657. >>> I suggest you try the reference. You might also find additional info by looking at the Python source code. Thank you for the answer but I already tried the code, but no explanation, and looking on the web didn't help me find an explanation of the algorithm. 
I don't know if I can access somewhere the book, I'll try, but if someone has it, please let us know the meaning of the special parameters ;) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 17011 bytes Desc: image001.jpg URL: From matthieu.brucher at gmail.com Fri Sep 28 08:01:22 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Sep 2007 14:01:22 +0200 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: 2007/9/28, Nuttall, Brandon C : > > A search for authors Maxwell and Delaney at the University of Kentucky > library online catalog reveals: > > > > Maxwell, S.E., and Delaney, H.D., 1999, Designing Experiments and > Analyzing Data: A Model Comparison Perspective: Lawrence Erlbaum, 902 p. > ISBN 9780805837063 (ebook ISBN 9780585282459). > > > > The volume is available electronically and I note there is a discussion of > Wilks-lambda on page 657. > Unfortunately, this is only available for people at the Kentucky university. My university (in France) does not provide it :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Fri Sep 28 08:10:03 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Fri, 28 Sep 2007 08:10:03 -0400 Subject: [SciPy-user] Signification of parameters In-Reply-To: References: Message-ID: Excerpts from the book are available online from Google, see http://books.google.com/books?id=h-bMhmQMifsC&pg=RA2-PA134&lpg=RA2-PA134&dq=maxwell+delaney+wilks&source=web&ots=mLCliWT-cZ&sig=g3u2ZKxn0JESJCAGBwD2f7n5fSc. There is a link on that page to find the book in a library. Search results indicate there is a copy in Groupe Essec Bibliotheque Rcon Cergy Pontoise, 95021 France, see also www.essec.fr Check with your University. 
I would assume they have an interlibrary loan program for academic research. Brandon [cid:image001.jpg at 01C801A6.F946E130] Powered by CardScan ________________________________ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Matthieu Brucher Sent: Friday, September 28, 2007 8:01 AM To: SciPy Users List Subject: Re: [SciPy-user] Signification of parameters 2007/9/28, Nuttall, Brandon C >: A search for authors Maxwell and Delaney at the University of Kentucky library online catalog reveals: Maxwell, S.E., and Delaney, H.D., 1999, Designing Experiments and Analyzing Data: A Model Comparison Perspective: Lawrence Erlbaum, 902 p. ISBN 9780805837063 (ebook ISBN 9780585282459). The volume is available electronically and I note there is a discussion of Wilks-lambda on page 657. Unfortunately, this is only available for people at the Kentucky university. My university (in France) does not provide it :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 17011 bytes Desc: image001.jpg URL: From issa at aims.ac.za Fri Sep 28 09:02:34 2007 From: issa at aims.ac.za (Issa Karambal) Date: Fri, 28 Sep 2007 15:02:34 +0200 (SAST) Subject: [SciPy-user] first order pde Message-ID: <43075.146.230.224.85.1190984554.squirrel@webmail.aims.ac.za> Hi all, I am having problem to solve numerically a hyperbolic equation like u_x=-pi*u_t, u(x,0)=exp(sin(x)), u(0,t)=u(2pi,t)=1. I used the method of lines; so I discretized the spatial domain using the centered finite difference method and thereafter I used the implicit Euler method, but I am still having problem after long time integration. If anyone has an idea on how I can solve numerically my problem? 
THANKS, issa From gnata at obs.univ-lyon1.fr Fri Sep 28 09:07:08 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Fri, 28 Sep 2007 15:07:08 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 Message-ID: <46FCFC7C.2090501@obs.univ-lyon1.fr> Hi, The scipy testsuite fails this way : lots of test look ok and then : ...Use minimum degree ordering on A'+A. .............................................................................................................../usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' .............................................................................................Illegal instruction I get no failed/success summary. Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From zunzun at zunzun.com Fri Sep 28 09:12:38 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Fri, 28 Sep 2007 09:12:38 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version Message-ID: <20070928131238.GB3932@zunzun.com> Python Equations 2.0 is available for download at http://sf.net/project/showfiles.php?group_id=187105 This version does not require Weave and only requires python, numpy and scipy. BSD license. About Python Equations The middleware for http://zunzun.com as a collection of Python equations that can fit themselves to both 2D and 3D data sets (curve fitting), output source code in several computing languages, and run a genetic algorithm for initial parameter estimation. 
James Phillips From marcos.capistran at gmail.com Fri Sep 28 09:23:17 2007 From: marcos.capistran at gmail.com (Marcos Capistran) Date: Fri, 28 Sep 2007 07:23:17 -0600 Subject: [SciPy-user] first order pde In-Reply-To: <43075.146.230.224.85.1190984554.squirrel@webmail.aims.ac.za> References: <43075.146.230.224.85.1190984554.squirrel@webmail.aims.ac.za> Message-ID: Hi Issa, You must make sure that your numerical method satisfies the Courant-Friedrichs-Lewy condition. On the other hand, (implicit) Euler is a low-order method; it may not be a good choice if you need a reliable solution after long time integration. good luck, On 9/28/07, Issa Karambal wrote: > Hi all, > > I am having problem to solve numerically a hyperbolic equation like > u_x=-pi*u_t, u(x,0)=exp(sin(x)), u(0,t)=u(2pi,t)=1. I used the method of > lines; so I discretized the spatial domain using the centered finite > difference method and thereafter I used the implicit Euler method, but I > am still having problem after long time integration. > If anyone has an idea on how I can solve numerically my problem? > > THANKS, > > issa > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Marcos Aurelio Capistrán Ocampo CIMAT A. P. 402 Jalisco S/N, Valenciana Guanajuato, GTO 36240 Tel: (473) 73 2 71 55 Ext. 
49640 From aisaac at american.edu Fri Sep 28 10:38:25 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 28 Sep 2007 10:38:25 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: <20070928131238.GB3932@zunzun.com> References: <20070928131238.GB3932@zunzun.com> Message-ID: On Fri, 28 Sep 2007, zunzun at zunzun.com apparently wrote: > About Python Equations > The middleware for http://zunzun.com as a > collection of Python equations that can fit > themselves to both 2D and 3D data sets > (curve fitting), output source code in > several computing languages, and run a genetic > algorithm for initial parameter estimation. This is a pretty sparse description. Could you elaborate a bit, and maybe offer an example of the usage? Thank you, Alan Isaac From jh at physics.ucf.edu Fri Sep 28 10:41:49 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 28 Sep 2007 10:41:49 -0400 Subject: [SciPy-user] What happened to python-idl? Message-ID: <1190990509.6370.437.camel@glup.physics.ucf.edu> > http://gnudatalanguage.sourceforge.net/ To get out of IDL's horrible syntax completely, see (and contribute to): http://software.pseudogreen.org/i2py/ This package appears to produce numarray code, but I imagine that making it produce numpy code would be straightforward. --jh-- From stefan at sun.ac.za Fri Sep 28 10:47:00 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 28 Sep 2007 16:47:00 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FCFC7C.2090501@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> Message-ID: <20070928144700.GV32704@mentat.za.net> Hi Xavier Which platform are you running on? Looks like http://projects.scipy.org/scipy/scipy/ticket/404 again. Please run the test suite with scipy.test(verbosity=9999) so we can see which test failed. 
Cheers St?fan On Fri, Sep 28, 2007 at 03:07:08PM +0200, Xavier Gnata wrote: > Hi, > > The scipy testsuite fails this way : > > lots of test look ok and then : > > ...Use minimum degree ordering on A'+A. > .............................................................................................................../usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: > UserWarning: Mode "reflect" may yield incorrect results on boundaries. > Please use "mirror" instead. > warnings.warn('Mode "reflect" may yield incorrect results on ' > .............................................................................................Illegal > instruction > > I get no failed/success summary. > > > Xavier From zunzun at zunzun.com Fri Sep 28 11:01:26 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Fri, 28 Sep 2007 11:01:26 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: References: <20070928131238.GB3932@zunzun.com> Message-ID: <20070928150126.GA6932@zunzun.com> On Fri, Sep 28, 2007 at 10:38:25AM -0400, Alan G Isaac wrote: > > This is a pretty sparse description. I really didn't think too many people would be interested, and did not want to blather on and on to the mailing list. > Could you elaborate a bit, and maybe offer > an example of the usage? Sure. I have a curve and surface fitting web site http://zunzun.com and have released the actual fitting parts as a separate independant module. If someone wants to run the code on their own equipment, for example because their computers are faster than the web site's shared server or to implement some equation I do not yet have on the site, then they can. Originally I used weave and C++ for performance, but thanks to Robert Kern's suggestions I've been able to remove the requirement for a C++ compiler and use numpy natively - hence the new version, which requires only python, numpy and scipy. 
The package comes with quite a few examples, below is a simple linear 3D surface fit with Java source code output - this is included in the package under the Examples directory. For readability here I have removed the imports and comments that are in the example. James equation = PythonEquations.Equations3D.Polynomial.Linear3D() equation.fittingTarget = 'SSQABS' equation.ConvertTextToData(equation.exampleData) equation.Initialize() equation.FitToCacheData() # perform the fit print equation.CodeJAVA() # output Java source code From gnata at obs.univ-lyon1.fr Fri Sep 28 11:25:35 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Fri, 28 Sep 2007 17:25:35 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <20070928144700.GV32704@mentat.za.net> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> Message-ID: <46FD1CEF.5030901@obs.univ-lyon1.fr> Hi, I'm running a simple debian sid i368. scipy.test(verbosity=9999) provides me with this output : gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1Illegal instruction Hope this helps. Xavier > Hi Xavier > > Which platform are you running on? Looks like > > http://projects.scipy.org/scipy/scipy/ticket/404 > > again. > > Please run the test suite with > > scipy.test(verbosity=9999) > > so we can see which test failed. 
> > Cheers > St?fan > > On Fri, Sep 28, 2007 at 03:07:08PM +0200, Xavier Gnata wrote: > >> Hi, >> >> The scipy testsuite fails this way : >> >> lots of test look ok and then : >> >> ...Use minimum degree ordering on A'+A. >> .............................................................................................................../usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: >> UserWarning: Mode "reflect" may yield incorrect results on boundaries. >> Please use "mirror" instead. >> warnings.warn('Mode "reflect" may yield incorrect results on ' >> .............................................................................................Illegal >> instruction >> >> I get no failed/success summary. >> >> >> Xavier >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From lists.steve at arachnedesign.net Fri Sep 28 13:18:55 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Fri, 28 Sep 2007 13:18:55 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: <20070928150126.GA6932@zunzun.com> References: <20070928131238.GB3932@zunzun.com> <20070928150126.GA6932@zunzun.com> Message-ID: Hi James, > Originally I used weave and C++ for performance, > but thanks to Robert Kern's suggestions I've > been able to remove the requirement for a C++ > compiler and use numpy natively - hence the > new version, which requires only python, numpy > and scipy. > Just out of curiosity. Have you noticed any significant speed differences between the weave/C++ version vs. 
your new all-python version, or were you able to pull off some serious scipy-fu to mitigate that? Thanks, -steve From zunzun at zunzun.com Fri Sep 28 13:37:54 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Fri, 28 Sep 2007 13:37:54 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: References: <20070928131238.GB3932@zunzun.com> <20070928150126.GA6932@zunzun.com> Message-ID: <20070928173754.GA11088@zunzun.com> On Fri, Sep 28, 2007 at 01:18:55PM -0400, Steve Lianoglou wrote: > > Just out of curiosity. Have you noticed any significant speed > differences between the weave/C++ version vs. your new all-python > version, or were you able to pull off some serious scipy-fu to > mitigate that? Interesting point. Anything that speeds numpy now speeds the new code, for instance ATLAS compilation targeted to a specific computer platform. I mention this as the vanilla Ubuntu Linux ATLAS was not as fast as the Ubuntu SSE2 ATLAS on my Acer 9500 development laptop; that was well worth the 30-40 seconds it took to download and install :) Tuned numpy appears approximately as fast as my weave code used to be, especially regarding the genetic algorithm I use to guess initial coefficients for the nonlinear solver. I'm quite pleased with it in terms of performance and *especially* ease of maintenance (Python, blessed Python!). I only *wish* I knew some serious scipy-fu... James From dmitrey.kroshko at scipy.org Fri Sep 28 14:08:35 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 28 Sep 2007 21:08:35 +0300 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: <20070928131238.GB3932@zunzun.com> References: <20070928131238.GB3932@zunzun.com> Message-ID: <46FD4323.5080604@scipy.org> Hello James, Do you have a function that is equivalent to scipy's fsolve or leastsq? I'm connecting fsolve to OpenOpt and I'm looking for other routines with the same functionality. 
I intended to connect NOX from Trilinos project but found it to be very complicated, + lack of documentation. Regards, D. zunzun at zunzun.com wrote: > Python Equations 2.0 is available for download > at http://sf.net/project/showfiles.php?group_id=187105 > > This version does not require Weave and only > requires python, numpy and scipy. BSD license. > > > About Python Equations > > The middleware for http://zunzun.com as a > collection of Python equations that can fit > themselves to both 2D and 3D data sets > (curve fitting), output source code in > several computing languages, and run a genetic > algorithm for initial parameter estimation. > > > James Phillips > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From zunzun at zunzun.com Fri Sep 28 14:23:36 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Fri, 28 Sep 2007 14:23:36 -0400 Subject: [SciPy-user] ANN: Python Equations 2.0 - weaveless version In-Reply-To: <46FD4323.5080604@scipy.org> References: <20070928131238.GB3932@zunzun.com> <46FD4323.5080604@scipy.org> Message-ID: <20070928182336.GA12958@zunzun.com> On Fri, Sep 28, 2007 at 09:08:35PM +0300, dmitrey wrote: > Hallo James, > Do you have func that is scipy fsolve or leastsq equivalent? If you look at the method FitToCacheData() on line 379 here: http://pythonequations.cvs.sourceforge.net/pythonequations/PythonEquations/EquationBaseClasses.py?view=markup you can see I use numpy.linalg.lstsq() and scipy.optimize.fmin(). > I intended to connect NOX from Trilinos project but found it to be very > complicated, + lack of documentation. I had great hopes for Trilinos as well some months ago, and returned to scipy/numpy/Differential Evolution as my core routines for the reasons you note. 
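For the linear case, the core of the numpy.linalg.lstsq() path that James mentions is small. Here is a self-contained sketch with synthetic data (the variable names are illustrative, not the PythonEquations API) of fitting a linear 3D surface z = a + b*x + c*y by ordinary least squares:

```python
import numpy as np

# Fit the linear surface z = a + b*x + c*y to scattered (x, y, z) data.
# Synthetic data with known coefficients and a little noise, so the
# recovered values can be checked against the truth.
rng = np.random.RandomState(0)
x = rng.uniform(0.0, 10.0, 50)
y = rng.uniform(0.0, 10.0, 50)
z = 1.5 + 2.0 * x - 0.5 * y + rng.normal(scale=0.01, size=50)

# Design matrix: one column per coefficient (intercept, x, y).
A = np.column_stack([np.ones_like(x), x, y])
coeffs, residuals, rank, sv = np.linalg.lstsq(A, z, rcond=None)
a, b, c = coeffs  # should be close to 1.5, 2.0, -0.5
```

Nonlinear models need an iterative solver instead (e.g. scipy.optimize.fmin or leastsq), which is where initial-parameter guessing such as a genetic algorithm becomes useful.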
James From gnata at obs.univ-lyon1.fr Fri Sep 28 15:18:44 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Fri, 28 Sep 2007 21:18:44 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FD1CEF.5030901@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> Message-ID: <46FD5394.9070406@obs.univ-lyon1.fr> ok. I'm compiling scipy with gcc version 3.4.6 (Debian 3.4.6-6) (g77 -v). It is quite strange beacause I do have gfortran installed but I get this : Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found Is gfortran a Fortran 90 compiler or not???? Ok : lapack, atlas and blas are compiled using g77 (3.4.6) so it could be part of problem. Can we compile lapack, atlas and blas with gfortran 4.2 ?? Xavier > Hi, > > I'm running a simple debian sid i368. > > scipy.test(verbosity=9999) provides me with this output : > > > gaussian filter 1 ... ok > gaussian filter 2 ... ok > gaussian filter 3 ... ok > gaussian filter 4 ... ok > gaussian filter 5 ... ok > gaussian filter 6 ... ok > gaussian gradient magnitude filter 1 ... ok > gaussian gradient magnitude filter 2 ... ok > gaussian laplace filter 1 ... ok > gaussian laplace filter 2 ... ok > generation of a binary structure 1 ... ok > generation of a binary structure 2 ... ok > generation of a binary structure 3 ... ok > generation of a binary structure 4 ... ok > generic filter 1Illegal instruction > > Hope this helps. > > Xavier > > >> Hi Xavier >> >> Which platform are you running on? Looks like >> >> http://projects.scipy.org/scipy/scipy/ticket/404 >> >> again. >> >> Please run the test suite with >> >> scipy.test(verbosity=9999) >> >> so we can see which test failed. 
>> >> Cheers >> St?fan >> >> On Fri, Sep 28, 2007 at 03:07:08PM +0200, Xavier Gnata wrote: >> >> >>> Hi, >>> >>> The scipy testsuite fails this way : >>> >>> lots of test look ok and then : >>> >>> ...Use minimum degree ordering on A'+A. >>> .............................................................................................................../usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: >>> UserWarning: Mode "reflect" may yield incorrect results on boundaries. >>> Please use "mirror" instead. >>> warnings.warn('Mode "reflect" may yield incorrect results on ' >>> .............................................................................................Illegal >>> instruction >>> >>> I get no failed/success summary. >>> >>> >>> Xavier >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From robert.kern at gmail.com Fri Sep 28 16:12:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Sep 2007 15:12:16 -0500 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FD5394.9070406@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> Message-ID: <46FD6020.3050601@gmail.com> Xavier Gnata wrote: > ok. > I'm compiling scipy with gcc version 3.4.6 (Debian 3.4.6-6) (g77 -v). 
> It is quite strange beacause I do have gfortran installed but I get this : > Found executable /usr/bin/g77 > gnu: no Fortran 90 compiler found > gnu: no Fortran 90 compiler found > customize GnuFCompiler > gnu: no Fortran 90 compiler found > gnu: no Fortran 90 compiler found > > Is gfortran a Fortran 90 compiler or not???? gfortran is. g77 is not. Use --fcompiler=gnu95 to tell the setup.py script to use gfortran if you want it. > Ok : lapack, atlas and blas are compiled using g77 (3.4.6) so it could > be part of problem. > Can we compile lapack, atlas and blas with gfortran 4.2 ?? I believe so. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gnata at obs.univ-lyon1.fr Fri Sep 28 17:31:09 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Fri, 28 Sep 2007 23:31:09 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FD6020.3050601@gmail.com> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> Message-ID: <46FD729D.3060005@obs.univ-lyon1.fr> Robert Kern wrote: > Xavier Gnata wrote: > >> ok. >> I'm compiling scipy with gcc version 3.4.6 (Debian 3.4.6-6) (g77 -v). >> It is quite strange beacause I do have gfortran installed but I get this : >> Found executable /usr/bin/g77 >> gnu: no Fortran 90 compiler found >> gnu: no Fortran 90 compiler found >> customize GnuFCompiler >> gnu: no Fortran 90 compiler found >> gnu: no Fortran 90 compiler found >> >> Is gfortran a Fortran 90 compiler or not???? >> > > gfortran is. g77 is not. Use --fcompiler=gnu95 to tell the setup.py script to > use gfortran if you want it. > > >> Ok : lapack, atlas and blas are compiled using g77 (3.4.6) so it could >> be part of problem. 
>> Can we compile lapack, atlas and blas with gfortran 4.2 ?? >> > > I believe so. > > I still have the same problem using python setup.py build --fcompiler=gnu95 python setup.py install --fcompiler=gnu95 works well. gfortan has been used but the bug is still present. Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From david at ar.media.kyoto-u.ac.jp Sat Sep 29 00:53:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 29 Sep 2007 13:53:14 +0900 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FD729D.3060005@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> Message-ID: <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > Robert Kern wrote: > >> Xavier Gnata wrote: >> >> >>> ok. >>> I'm compiling scipy with gcc version 3.4.6 (Debian 3.4.6-6) (g77 -v). >>> It is quite strange beacause I do have gfortran installed but I get this : >>> Found executable /usr/bin/g77 >>> gnu: no Fortran 90 compiler found >>> gnu: no Fortran 90 compiler found >>> customize GnuFCompiler >>> gnu: no Fortran 90 compiler found >>> gnu: no Fortran 90 compiler found >>> >>> Is gfortran a Fortran 90 compiler or not???? >>> >>> >> gfortran is. g77 is not. Use --fcompiler=gnu95 to tell the setup.py script to >> use gfortran if you want it. >> >> >> >>> Ok : lapack, atlas and blas are compiled using g77 (3.4.6) so it could >>> be part of problem. >>> Can we compile lapack, atlas and blas with gfortran 4.2 ?? >>> >>> >> I believe so. 
>> >> >> > I still have the same problem using > > python setup.py build --fcompiler=gnu95 > python setup.py install > > --fcompiler=gnu95 works well. gfortan has been used but the bug is > still present. > It is a bad idea to use gfortran on sid, because the default fortran compiler is g77, and g77 and gfortran have different ABI, making libraries compiled by them difficult to coexist. My advice would be either stick with g77, or compile everything using gfortran (eg blas, lapack, atlas and numpy/scipy). cheers, David From gnata at obs.univ-lyon1.fr Sat Sep 29 08:33:15 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sat, 29 Sep 2007 14:33:15 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> Message-ID: <46FE460B.3020503@obs.univ-lyon1.fr> David Cournapeau wrote: > Xavier Gnata wrote: > >> Robert Kern wrote: >> >> >>> Xavier Gnata wrote: >>> >>> >>> >>>> ok. >>>> I'm compiling scipy with gcc version 3.4.6 (Debian 3.4.6-6) (g77 -v). >>>> It is quite strange beacause I do have gfortran installed but I get this : >>>> Found executable /usr/bin/g77 >>>> gnu: no Fortran 90 compiler found >>>> gnu: no Fortran 90 compiler found >>>> customize GnuFCompiler >>>> gnu: no Fortran 90 compiler found >>>> gnu: no Fortran 90 compiler found >>>> >>>> Is gfortran a Fortran 90 compiler or not???? >>>> >>>> >>>> >>> gfortran is. g77 is not. Use --fcompiler=gnu95 to tell the setup.py script to >>> use gfortran if you want it. >>> >>> >>> >>> >>>> Ok : lapack, atlas and blas are compiled using g77 (3.4.6) so it could >>>> be part of problem. >>>> Can we compile lapack, atlas and blas with gfortran 4.2 ?? 
>>>> >>>> >>>> >>> I believe so. >>> >>> >>> >>> >> I still have the same problem using >> >> python setup.py build --fcompiler=gnu95 >> python setup.py install >> >> --fcompiler=gnu95 works well. gfortan has been used but the bug is >> still present. >> >> > It is a bad idea to use gfortran on sid, because the default fortran > compiler is g77, and g77 and gfortran have different ABI, making > libraries compiled by them difficult to coexist. My advice would be > either stick with g77, or compile everything using gfortran (eg blas, > lapack, atlas and numpy/scipy). > > cheers, > > David > yes sure! It was only a test to see if the bug is stil there or not. The result is clear : It is still here. Could someting else help you to fix that? Can anyone reproduce that? Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From david at ar.media.kyoto-u.ac.jp Sat Sep 29 08:52:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 29 Sep 2007 21:52:51 +0900 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FE460B.3020503@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> Message-ID: <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > yes sure! It was only a test to see if the bug is stil there or not. > The result is clear : It is still here. > Could someting else help you to fix that? > Can anyone reproduce that? 
Well, the problem in #404 looks like the worst ones: the ones which depend on compiler/interpreter versions. The fact that the problem does not appear under valgrind is quite intriguing. cheers, David From gnata at obs.univ-lyon1.fr Sat Sep 29 12:55:38 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sat, 29 Sep 2007 18:55:38 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> Message-ID: <46FE838A.20907@obs.univ-lyon1.fr> David Cournapeau wrote: > Xavier Gnata wrote: > >> yes sure! It was only a test to see if the bug is stil there or not. >> The result is clear : It is still here. >> Could someting else help you to fix that? >> Can anyone reproduce that? >> > Well, the problem in #404 looks like the worst ones: the ones which > depend on compiler/interpreter versions. The fact that the problem does > not appear under valgrind is quite intriguing. > > cheers, > > David > > It *does* on my box. ./valgrind_py.sh /usr/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py ==8678== LEAK SUMMARY: ==8678== definitely lost: 192 bytes in 2 blocks. ==8678== possibly lost: 33,027 bytes in 45 blocks. ==8678== still reachable: 14,427,776 bytes in 3,975 blocks. ==8678== suppressed: 0 bytes in 0 blocks. ==8678== Reachable blocks (those to which a pointer was found) are not shown. 
==8678== To see them, rerun with: --leak-check=full --show-reachable=yes --8678-- memcheck: sanity checks: 2666 cheap, 107 expensive --8678-- memcheck: auxmaps: 0 auxmap entries (0k, 0M) in use --8678-- memcheck: auxmaps: 0 searches, 0 comparisons --8678-- memcheck: SMs: n_issued = 454 (7264k, 7M) --8678-- memcheck: SMs: n_deissued = 14 (224k, 0M) --8678-- memcheck: SMs: max_noaccess = 65535 (1048560k, 1023M) --8678-- memcheck: SMs: max_undefined = 5 (80k, 0M) --8678-- memcheck: SMs: max_defined = 548 (8768k, 8M) --8678-- memcheck: SMs: max_non_DSM = 442 (7072k, 6M) --8678-- memcheck: max sec V bit nodes: 0 (0k, 0M) --8678-- memcheck: set_sec_vbits8 calls: 0 (new: 0, updates: 0) --8678-- memcheck: max shadow mem size: 7376k, 7M --8678-- translate: fast SP updates identified: 29,307 ( 85.4%) --8678-- translate: generic_known SP updates identified: 3,821 ( 11.1%) --8678-- translate: generic_unknown SP updates identified: 1,168 ( 3.4%) --8678-- tt/tc: 2,527,821 tt lookups requiring 3,090,497 probes --8678-- tt/tc: 2,527,821 fast-cache updates, 2 flushes --8678-- transtab: new 28,855 (653,474 -> 10,476,946; ratio 160:10) [0 scs] --8678-- transtab: dumped 0 (0 -> ??) --8678-- transtab: discarded 0 (0 -> ??) --8678-- scheduler: 266,643,629 jumps (bb entries). --8678-- scheduler: 2,666/3,334,044 major/minor sched events. --8678-- sanity: 2667 cheap, 107 expensive checks. --8678-- exectx: 30,011 lists, 7,141 contexts (avg 0 per list) --8678-- exectx: 1,014,663 searches, 1,036,327 full compares (1,021 per 1000) --8678-- exectx: 1,115,754 cmp2, 6,184 cmp4, 0 cmpAll Illegal instruction and then valgrind crashes. 
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx est tm2 g77 -v Reading specs from /usr/lib/gcc/i486-linux-gnu/3.4.6/specs Configured with: ../src/configure -v --enable-languages=c,c++,f77,pascal --prefix=/usr --libexecdir=/usr/lib --with-gxx-include-dir=/usr/include/c++/3.4 --enable-shared --with-system-zlib --enable-nls --without-included-gettext --program-suffix=-3.4 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --with-tune=i686 i486-linux-gnu Thread model: posix Looks like compilation flags mismatch. Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From gnata at obs.univ-lyon1.fr Sat Sep 29 13:11:00 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Sat, 29 Sep 2007 19:11:00 +0200 Subject: [SciPy-user] Illegal instruction in testsuite of revision 3378 In-Reply-To: <46FE838A.20907@obs.univ-lyon1.fr> References: <46FCFC7C.2090501@obs.univ-lyon1.fr> <20070928144700.GV32704@mentat.za.net> <46FD1CEF.5030901@obs.univ-lyon1.fr> <46FD5394.9070406@obs.univ-lyon1.fr> <46FD6020.3050601@gmail.com> <46FD729D.3060005@obs.univ-lyon1.fr> <46FDDA3A.6060606@ar.media.kyoto-u.ac.jp> <46FE460B.3020503@obs.univ-lyon1.fr> <46FE4AA3.1040707@ar.media.kyoto-u.ac.jp> <46FE838A.20907@obs.univ-lyon1.fr> Message-ID: <46FE8724.80500@obs.univ-lyon1.fr> Xavier Gnata wrote: > David Cournapeau wrote: > >> Xavier Gnata wrote: >> >> >>> yes sure! It was only a test to see if the bug is stil there or not. >>> The result is clear : It is still here. >>> Could someting else help you to fix that? >>> Can anyone reproduce that? >>> >>> >> Well, the problem in #404 looks like the worse ones: the ones which >> depend on compiler/interpreter versions. 
The fact that the problem does >> not appear under valgrind is quite intriguing. >> >> cheers, >> >> David >> >> >> > It *does* on my box. > > ./valgrind_py.sh > /usr/lib/python2.4/site-packages/scipy/ndimage/tests/test_ndimage.py > > > ==8678== LEAK SUMMARY: > ==8678== definitely lost: 192 bytes in 2 blocks. > ==8678== possibly lost: 33,027 bytes in 45 blocks. > ==8678== still reachable: 14,427,776 bytes in 3,975 blocks. > ==8678== suppressed: 0 bytes in 0 blocks. > ==8678== Reachable blocks (those to which a pointer was found) are not > shown. > ==8678== To see them, rerun with: --leak-check=full --show-reachable=yes > --8678-- memcheck: sanity checks: 2666 cheap, 107 expensive > --8678-- memcheck: auxmaps: 0 auxmap entries (0k, 0M) in use > --8678-- memcheck: auxmaps: 0 searches, 0 comparisons > --8678-- memcheck: SMs: n_issued = 454 (7264k, 7M) > --8678-- memcheck: SMs: n_deissued = 14 (224k, 0M) > --8678-- memcheck: SMs: max_noaccess = 65535 (1048560k, 1023M) > --8678-- memcheck: SMs: max_undefined = 5 (80k, 0M) > --8678-- memcheck: SMs: max_defined = 548 (8768k, 8M) > --8678-- memcheck: SMs: max_non_DSM = 442 (7072k, 6M) > --8678-- memcheck: max sec V bit nodes: 0 (0k, 0M) > --8678-- memcheck: set_sec_vbits8 calls: 0 (new: 0, updates: 0) > --8678-- memcheck: max shadow mem size: 7376k, 7M > --8678-- translate: fast SP updates identified: 29,307 ( 85.4%) > --8678-- translate: generic_known SP updates identified: 3,821 ( 11.1%) > --8678-- translate: generic_unknown SP updates identified: 1,168 ( 3.4%) > --8678-- tt/tc: 2,527,821 tt lookups requiring 3,090,497 probes > --8678-- tt/tc: 2,527,821 fast-cache updates, 2 flushes > --8678-- transtab: new 28,855 (653,474 -> 10,476,946; ratio > 160:10) [0 scs] > --8678-- transtab: dumped 0 (0 -> ??) > --8678-- transtab: discarded 0 (0 -> ??) > --8678-- scheduler: 266,643,629 jumps (bb entries). > --8678-- scheduler: 2,666/3,334,044 major/minor sched events. > --8678-- sanity: 2667 cheap, 107 expensive checks. 
> --8678-- exectx: 30,011 lists, 7,141 contexts (avg 0 per list) > --8678-- exectx: 1,014,663 searches, 1,036,327 full compares (1,021 > per 1000) > --8678-- exectx: 1,115,754 cmp2, 6,184 cmp4, 0 cmpAll > Illegal instruction > > and then valgrind crashes. > > > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge > mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx est tm2 > > g77 -v > Reading specs from /usr/lib/gcc/i486-linux-gnu/3.4.6/specs > Configured with: ../src/configure -v --enable-languages=c,c++,f77,pascal > --prefix=/usr --libexecdir=/usr/lib > --with-gxx-include-dir=/usr/include/c++/3.4 --enable-shared > --with-system-zlib --enable-nls --without-included-gettext > --program-suffix=-3.4 --enable-__cxa_atexit --enable-clocale=gnu > --enable-libstdcxx-debug --with-tune=i686 i486-linux-gnu > Thread model: posix > > Looks like compilation flags mismatch. > > Xavier > > > Using gdb : Program received signal SIGILL, Illegal instruction. [Switching to Thread 0xb7d908c0 (LWP 9176)] 0xb4ffad43 in Py_FilterFunc (buffer=0x833a088, filter_size=2, output=0xbf8ee518, data=0xbf8ee58c) at scipy/ndimage/src/nd_image.c:346 backtrace : (gdb) backtrace #0 0xb4ffad43 in Py_FilterFunc (buffer=0x833a088, filter_size=2, output=0xbf8ee518, data=0xbf8ee58c) at scipy/ndimage/src/nd_image.c:346 #1 0xb4ffe211 in NI_GenericFilter (input=0x833a258, function=0xb4ffad40 , data=0xbf8ee58c, footprint=0x833a2b8, output=0x8339fe8, mode=NI_EXTEND_REFLECT, cvalue=0, origins=0x81d9548) at scipy/ndimage/src/ni_filters.c:858 #2 0xb4ffc5ed in Py_GenericFilter (obj=0x0, args=0xb7a21dac) at scipy/ndimage/src/nd_image.c:411 #3 0x080b9f67 in PyEval_EvalFrame () #4 0x080bb125 in PyEval_EvalCodeEx () #5 0x080b9492 in PyEval_EvalFrame () #6 0x080bb125 in PyEval_EvalCodeEx () #7 0x080b9492 in PyEval_EvalFrame () #8 0x080bb125 in PyEval_EvalCodeEx () #9 0x08101ae6 in ?? () #10 0xb70922a0 in ?? () #11 0xb7030dfc in ?? () #12 0x00000000 in ?? 
() So it is not the backtrace of #404...but it is close and most likely related. Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles André 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From gael.varoquaux at normalesup.org Sat Sep 29 13:38:06 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 29 Sep 2007 19:38:06 +0200 Subject: [SciPy-user] All the roots of a function in an interval Message-ID: <20070929173805.GB4826@clipper.ens.fr> Hi, Is there a tool in scipy that will give me all the roots of a function in an interval? I can always make one using optimize.minpack.fsolve and a first scan to define different starting points, but if there is one, it would be sweet. Cheers, Gaël From rob.clewley at gmail.com Sat Sep 29 14:23:03 2007 From: rob.clewley at gmail.com (Rob Clewley) Date: Sat, 29 Sep 2007 14:23:03 -0400 Subject: [SciPy-user] All the roots of a function in an interval In-Reply-To: <20070929173805.GB4826@clipper.ens.fr> References: <20070929173805.GB4826@clipper.ens.fr> Message-ID: You can't ever know that you've found all the roots of a general function in an interval unless you do a lot of analysis on it. I don't think you can do better than letting the user set what they think is the appropriate number of starting points for the types of function they are using on the intervals that they're providing. Short answer is no, I'm pretty sure scipy has no such thing. On 29/09/2007, Gael Varoquaux wrote: > Hi, > > Is there a tool in scipy that will give me all the roots of a function in > an interval? I can always make one using optimize.minpack.fsolve and a > first scan to define different starting points, but if there is one, it > would be sweet.
> > Cheers, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Sat Sep 29 14:36:23 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 29 Sep 2007 20:36:23 +0200 Subject: [SciPy-user] All the roots of a function in an interval In-Reply-To: References: <20070929173805.GB4826@clipper.ens.fr> Message-ID: <20070929183623.GD4826@clipper.ens.fr> On Sat, Sep 29, 2007 at 02:23:03PM -0400, Rob Clewley wrote: > You can't ever know that you've found all the roots of a general > function in an interval unless you do a lot of analysis on it. Fully agreed. I have done the analysis. I can give a minimal distance between the roots, which means I can scan the function over an interval and find all the roots. > I don't think you can do better than letting the user set what they > think is the appropriate number of starting points for the types of > function they are using on the intervals that they're providing. I am the user, in this case. Actually, I am a bit ashamed of this problem. I need to find all the x for which: m_1 * sin(k_2*x) = m_2 * sin(k_1*(x + delta_x)) with k_2 ~ 780, k_1 ~ 767, m_1 and m_2 between 0 and 2, and delta_x taking any value. I need to find all the x over a finite interval (a few times 2pi/k_2). I gave up on clever analytical solutions (couldn't find one) and decided to go for the brute force approach. I'd much rather have an analytical solution, as I have done all the other calculations of the problem (a Bayesian estimator) analytically, but... So if anyone has advice on the options for a proper numerical approach (other than finding a root, then looking for the next one pi/(2*k_1) further away) I'd love to hear it.
Thanks for your help, Gaël From peridot.faceted at gmail.com Sat Sep 29 15:51:30 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 29 Sep 2007 15:51:30 -0400 Subject: [SciPy-user] All the roots of a function in an interval In-Reply-To: <20070929183623.GD4826@clipper.ens.fr> References: <20070929173805.GB4826@clipper.ens.fr> <20070929183623.GD4826@clipper.ens.fr> Message-ID: On 29/09/2007, Gael Varoquaux wrote: > Fully agreed. I have done the analysis. I can give a minimal distance > between the roots, which means I can scan the function over an interval > and find all the roots. The easiest and most reliable method here is to simply sample the function at equispaced points no farther apart than the minimal distance between roots; then every sign change gives you an interval bracketing a root. More generally, it should be possible to take advantage of analytic bounds on the first and second derivatives to reliably find all the roots of a function in an interval (for example, if you know f(x0)=y0 and in the interval I |f'(x)|
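The sample-then-bracket method described in the thread can be sketched in pure Python. This is a minimal illustration, not code from the thread: `find_all_roots` and `min_sep` are made-up names, and the hand-rolled bisection merely stands in for a library refiner such as scipy.optimize.brentq, which is what one would use in practice on each bracketing interval.

```python
import math

def find_all_roots(f, a, b, min_sep, tol=1e-12):
    """Find all roots of f on [a, b], assuming no two roots lie
    closer together than min_sep (illustrative sketch).

    Sample on a grid finer than min_sep, so each bracketing
    interval contains exactly one sign change; refine each
    bracket with plain bisection.
    """
    # At least three samples per min_sep so no root pair can hide
    # between consecutive grid points.
    n = max(2, int(math.ceil(3 * (b - a) / min_sep)))
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    roots = []
    for lo, hi in zip(xs, xs[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:          # grid point happens to be a root
            roots.append(lo)
            continue
        if flo * fhi < 0:       # sign change: exactly one root inside
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                fmid = f(mid)
                if flo * fmid <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fmid
            roots.append(0.5 * (lo + hi))
    if f(b) == 0.0:             # right endpoint, not covered above
        roots.append(b)
    return roots

# Example: sin has exactly two roots (pi and 2*pi) on this interval.
roots = find_all_roots(math.sin, 0.1, 2 * math.pi + 0.1, min_sep=1.0)
```

For the beat-note problem above, one would pass f(x) = m_1*sin(k_2*x) - m_2*sin(k_1*(x + delta_x)) together with a min_sep derived from the pi/(2*k_1) spacing estimate mentioned in the thread.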