From dahl.joachim at gmail.com Thu Mar 1 02:38:29 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 1 Mar 2007 08:38:29 +0100 Subject: [SciPy-user] scipy and cvxopt In-Reply-To: <45E6080D.8090103@ee.byu.edu> References: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com> <45E6080D.8090103@ee.byu.edu> Message-ID: <47347f490702282338r54d5ce6cx1e9719f0e2f90c66@mail.gmail.com> On 2/28/07, Travis Oliphant wrote: > > Emin.shopper Martinian.shopper wrote: > > > Dear Experts, > > > > I need to solve some quadratic programs (and potentially other > > nonlinear programs). While scipy.optimize.fmin_cobyla seems like it > > can do this, it seems orders of magnitude slower than cvxopt. Are > > there plans to merge/include cvxopt in scipy or otherwise improve > > scipy's quadratic/nonlinear constrained optimization routines? > > > Yes, eventually. I have talked to the author of CVXOPT at NIPS 2006. > The plan is to move NumPy's matrix object into C and move CVXOPTs > implementation over to use it, possibly integrating the cvxopt > algorithms into at least a scikits library (of GPL code). That would be great! We're considering different a license also. Porting CVXOPT to Numpy/Scipy would require sparse matrices also - preferably implemented so that there's not much difference between dense and sparse matrices from the user's perspective. The sparse matrix class in SciPy looks nice, but it appears to "behave" different from a dense matrix wrt. to indexing, assignments, creation etc; although it's not essential, it would make a CVXOPT port easier if some of the differences between dense and sparse matrices were also ironed out when Travis revamps the Numpy matrix object. Please let me know if there's anything I can do to help in process. Joachim But, I won't have time for that until April or May. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From williams at astro.ox.ac.uk Thu Mar 1 05:45:12 2007 From: williams at astro.ox.ac.uk (Michael Williams) Date: Thu, 1 Mar 2007 10:45:12 +0000 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: <45E4FFA6.9010408@shrogers.com> References: <45E4FFA6.9010408@shrogers.com> Message-ID: <20070301104511.GA23386@astro.ox.ac.uk> On Tue, Feb 27, 2007 at 09:05:58PM -0700, Steven H. Rogers wrote: > I'm doing an informal survey on the use of Array Programming Languages > for teaching. If you're using NumPy in this manner I'd like to hear > from you. What subject was/is taught, academic level, results, lessons > learned, etc. If Numeric counts, I used that back in 2002 as part of an introductory programming course I wrote for the Department of Physics at Oxford. We really only used to to provide an element-wise array method. Brief introduction: http://pentangle.net/python/pyzine.php The course (aka "Handbook") and report on the course's successes and failures: http://pentangle.net/python/ -- Mike From nwagner at iam.uni-stuttgart.de Thu Mar 1 06:59:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 01 Mar 2007 12:59:01 +0100 Subject: [SciPy-user] fmin_cobyla Message-ID: <45E6C005.5050507@iam.uni-stuttgart.de> Hi all, Is it possible to obtain more return values wrt to fmin_cobyla ? The output is currently limited to the minimum (if any exists). I am interested in fopt -- the value of f(xopt). 
func_calls -- the number of function_calls. allvecs -- if retall then this vector of the iterates is returned Nils From gruben at bigpond.net.au Thu Mar 1 08:07:22 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 02 Mar 2007 00:07:22 +1100 Subject: [SciPy-user] NumPy in Teaching In-Reply-To: <45E4FFA6.9010408@shrogers.com> References: <45E4FFA6.9010408@shrogers.com> Message-ID: <45E6D00A.5030706@bigpond.net.au> Hi Steven, Last year I helped out in teaching some basic programming with no prerequisites to 3rd year undergrad physics students at Monash University. It was really a 1st or 2nd year level course, but we had a wide spectrum of background experience levels - from no programming experience to proficient in C++. To deal with this variation in experience, we created some basic and some more advanced teaching labs. We divided the subject in half, giving a single C lecture first, followed by a few C labs, then a single Python lecture followed by a few Python labs; a deliberately chosen order and obviously very ambitious. Our department is fairly IDL-centric, but Python's advantage of being free and its greater general applicability/flexibility was accepted by the course coordinator as sufficient reason to teach it. The hope is to get continuing 4th year honours students comfortable with a language for their 4th year physics projects. We took the view - shared by colleagues in the Computer Science dept. - that getting the students to struggle with pointers and see the C-syntax would be good for their character :-) and that numpy would allow much higher level tasks to be attempted at an early stage and would get them used to using an array-processing mindset. I think that, since some of the students had prior C experience, they were able to help each other a bit more in the C labs. We found that we were busier answering questions in the Python labs as a result. We had them create arrays in C, populating them with sinc functions to get them to deal with division by zero etc. and repeat the exercise in Python with numpy. We had them do some file i/o in both languages - I used scipy.io read_array and write_array to read data printf'ed by their C code. We did some fft-based filtering with Python and used pylab to view the results. We used Enthought Python with "ipython -pylab" shells and Idle as the editor. One lesson learned is that I tried to be a bit too ambitious with Python - they struggled with trying to figure out how to use functions. The labs were written as a mixture of "modify this example" code and "find the function which does this" - they found the latter too hard because the number of functions in numpy/scipy is a bit overwhelming and not easily navigable for the uninitiated. We'll be re-running this course in a few weeks and perhaps introducing a physics modelling/numerical methods subject (perhaps language-neutral) later in the year. It may also be a prerequisite for some of the 3rd year astronomical data processing labs currently being written. Gary R. Steven H. Rogers wrote: > I'm doing an informal survey on the use of Array Programming Languages > for teaching. If you're using NumPy in this manner I'd like to hear > from you. What subject was/is taught, academic level, results, lessons > learned, etc. > > Regards, > Steve From lxander.m at gmail.com Thu Mar 1 08:19:45 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Thu, 1 Mar 2007 08:19:45 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization THANKS! 
In-Reply-To: <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu> References: <6.0.1.1.2.20070227131721.0287bfe0@pop.uky.edu> <45E493D7.2070703@gmail.com> <6.0.1.1.2.20070227170539.0283f608@pop.uky.edu> <6.0.1.1.2.20070228093647.02798d38@pop.uky.edu> <45E5BF5E.6020601@gmail.com> <6.0.1.1.2.20070228135020.026f3008@pop.uky.edu> Message-ID: <525f23e80703010519y6cc4952bifd9b8a728324cfbf@mail.gmail.com> On 2/28/07, Brandon Nuttall wrote: > Folks, > > Thanks to Alok Singhal and Robert Kern I have not only learned a great deal > about SciPy and NumPy, but I have code that works. Thanks for the tip on > not looping; it does make cleaner code. I have two issues: 1) there must be > a better way to convert a list of data pairs to two arrays, 2) I'm not sure > of a graceful way to transition from one plot to the next and then close. > To add to the cacophony of coding and style suggestions. The <> operator is likely to be removed in the future, so you should use '!='. My personal preference would be to move the plotting functionality to a method so that initializing the data is separate from acting on the data. I find this to be a helpful distinction as I usually don't want to plot at the time of construction, but your mileage may vary. Lastly, you can wrap the non-plotting portion of the test function into a doctest () which would then both serve as an example in the code and as a correctness test which is easy to check when other things change, like upgrades to numpy and scipy. I find this to be immensely helpful. Have fun! Alex From steve at shrogers.com Thu Mar 1 09:09:01 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 01 Mar 2007 07:09:01 -0700 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: <20070301104511.GA23386@astro.ox.ac.uk> References: <45E4FFA6.9010408@shrogers.com> <20070301104511.GA23386@astro.ox.ac.uk> Message-ID: <45E6DE7D.9020307@shrogers.com> Thanks Mike: Michael Williams wrote: > On Tue, Feb 27, 2007 at 09:05:58PM -0700, Steven H. Rogers wrote: > >> I'm doing an informal survey on the use of Array Programming Languages >> for teaching. If you're using NumPy in this manner I'd like to hear >> from you. What subject was/is taught, academic level, results, lessons >> learned, etc. >> > > If Numeric counts, I used that back in 2002 as part of an introductory > programming course I wrote for the Department of Physics at Oxford. We > really only used to to provide an element-wise array method. > Yes, Numeric and Numarray certainly count. The comments I've received about Matlab and IDL are also welcome. > Brief introduction: http://pentangle.net/python/pyzine.php > > The course (aka "Handbook") and report on the course's successes and > failures: http://pentangle.net/python/ > > -- Mike > ________ # Steve From perry at stsci.edu Thu Mar 1 11:03:34 2007 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 1 Mar 2007 11:03:34 -0500 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: <200703010032.l210WfhI005995@glup.physics.ucf.edu> References: <200703010032.l210WfhI005995@glup.physics.ucf.edu> Message-ID: On Feb 28, 2007, at 7:32 PM, Joe Harrington wrote: > Hi Steve, > > I have taught Astronomical Data Analysis twice at Cornell using IDL, > and I will be teaching it next Fall at UCF using NumPy. Though I've > been active here in the recent past, I'm actually not a regular NumPy > user myself yet (I used Numeric experimentally for about 6 months in > 1997), so I'm a bit nervous. 
There isn't the kind of documentation > and how-to support for Numpy that there is for IDL, though our web > site is a start in that direction. One thought I've had in making the > transition easier is to put up a syntax and function concordance, > similar to that available for MATLAB. I thought this existed. Maybe > Perry can point me to it. Just adding a column to the MATLAB one > would be fine. I made one for IDL, but I don't recall one for matlab. If anyone has done one, I sure would like to incorporate it into the tutorial I'm revising if possible. Perry y From strawman at astraw.com Thu Mar 1 11:57:46 2007 From: strawman at astraw.com (Andrew Straw) Date: Thu, 01 Mar 2007 08:57:46 -0800 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: References: <200703010032.l210WfhI005995@glup.physics.ucf.edu> Message-ID: <45E7060A.9040105@astraw.com> Perry Greenfield wrote: > On Feb 28, 2007, at 7:32 PM, Joe Harrington wrote: > > >> Hi Steve, >> >> I have taught Astronomical Data Analysis twice at Cornell using IDL, >> and I will be teaching it next Fall at UCF using NumPy. Though I've >> been active here in the recent past, I'm actually not a regular NumPy >> user myself yet (I used Numeric experimentally for about 6 months in >> 1997), so I'm a bit nervous. There isn't the kind of documentation >> and how-to support for Numpy that there is for IDL, though our web >> site is a start in that direction. One thought I've had in making the >> transition easier is to put up a syntax and function concordance, >> similar to that available for MATLAB. I thought this existed. Maybe >> Perry can point me to it. Just adding a column to the MATLAB one >> would be fine. >> > > I made one for IDL, but I don't recall one for matlab. If anyone has > done one, I sure would like to incorporate it into the tutorial I'm > revising if possible. > > I believe the reference is to this: http://scipy.org/NumPy_for_Matlab_Users From oliphant at ee.byu.edu Thu Mar 1 12:34:01 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 01 Mar 2007 10:34:01 -0700 Subject: [SciPy-user] scipy and cvxopt In-Reply-To: <47347f490702282338r54d5ce6cx1e9719f0e2f90c66@mail.gmail.com> References: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com> <45E6080D.8090103@ee.byu.edu> <47347f490702282338r54d5ce6cx1e9719f0e2f90c66@mail.gmail.com> Message-ID: <45E70E89.4060501@ee.byu.edu> Joachim Dahl wrote: > > > On 2/28/07, *Travis Oliphant* > wrote: > > Emin.shopper Martinian.shopper wrote: > > > Dear Experts, > > > > I need to solve some quadratic programs (and potentially other > > nonlinear programs). While scipy.optimize.fmin_cobyla seems like it > > can do this, it seems orders of magnitude slower than cvxopt. Are > > there plans to merge/include cvxopt in scipy or otherwise improve > > scipy's quadratic/nonlinear constrained optimization routines? > > > Yes, eventually. I have talked to the author of CVXOPT at NIPS 2006. > The plan is to move NumPy's matrix object into C and move CVXOPTs > implementation over to use it, possibly integrating the cvxopt > algorithms into at least a scikits library (of GPL code). > > > That would be great! We're considering different a license also. > > Porting CVXOPT to Numpy/Scipy would require sparse matrices also - > preferably > implemented so that there's not much difference between dense and > sparse matrices from the user's perspective. The sparse matrix class > in SciPy looks nice, but it appears to > "behave" different from a dense matrix wrt. 
to indexing, assignments, > creation etc; > although it's not essential, it would make a CVXOPT port easier if > some of the differences between dense and sparse matrices were also > ironed out when Travis revamps the > Numpy matrix object. I'd like to move a the Sparse matrix representation into NumPy as well at that time. We will need some discussion on what exactly to move over and how that is to be done. There are some differences in indexing that CVXOPT uses for its matrix classes that we will need to discuss how to handle. For example, we could construct a separate matrix subclass with the CVXOPT-style indexing. I think it would be easier to just have one style of indexing, though and just make sure that sparse matrices also handle that style. The most important thing to me, though, is to expose more users to the great work that CVXOPT has done and integrate that work into the larger NumPy/SciPy world. -Travis From dahl.joachim at gmail.com Thu Mar 1 14:41:46 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 1 Mar 2007 20:41:46 +0100 Subject: [SciPy-user] scipy and cvxopt In-Reply-To: <45E70E89.4060501@ee.byu.edu> References: <32e43bb70702281340r39d69ea5t2e1287ed340eee9b@mail.gmail.com> <45E6080D.8090103@ee.byu.edu> <47347f490702282338r54d5ce6cx1e9719f0e2f90c66@mail.gmail.com> <45E70E89.4060501@ee.byu.edu> Message-ID: <47347f490703011141geea3bfdx4e7c3f3e155a0489@mail.gmail.com> > > I'd like to move a the Sparse matrix representation into NumPy as well > at that time. We will need some discussion on what exactly to move over > and how that is to be done. There are some differences in indexing > that CVXOPT uses for its matrix classes that we will need to discuss how > to handle. For example, we could construct a separate matrix subclass > with the CVXOPT-style indexing. I think it would be easier to just have > one style of indexing, though and just make sure that sparse matrices > also handle that style. > > The most important thing to me, though, is to expose more users to the > great work that CVXOPT has done and integrate that work into the larger > NumPy/SciPy world. that sounds great - I look forward to helping any way I can! - Joachim -------------- next part -------------- An HTML attachment was scrubbed... URL: From asreeve at maine.edu Thu Mar 1 17:03:32 2007 From: asreeve at maine.edu (ASReeve) Date: Thu, 1 Mar 2007 17:03:32 -0500 (EST) Subject: [SciPy-user] PhD Opportunity Message-ID: My apologies if this if off topic. I am seeking a Ph.D. Student interested in using python for hydrologic simulation. Funding for a student is available for 4 years through an NSF grant. The research focus of this project will be the creation of finite-volume models, likely using FiPy, to simulate ground-water flow, solute transport, and carbon accumulation in a wetland system. Prospective Ph.D. students interested in working on this project should contact Andy Reeve (e-mail:asreeve at maine dot edu) for additional details. Please e-mail me directly and do not reply to this message through the mailing list. Thanks, Andy ---------------------- Andrew Reeve Dept. of Earth Science University of Maine Orono, ME 04469 From fperez.net at gmail.com Thu Mar 1 17:06:37 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 1 Mar 2007 15:06:37 -0700 Subject: [SciPy-user] PhD Opportunity In-Reply-To: References: Message-ID: On 3/1/07, ASReeve wrote: > My apologies if this if off topic. Personally, I don't think this is off topic /at all/. 
I hope one day will come when python is so widely accepted as a research tool that we'll have to ask people not to post such announcements here, but I think that while we are in the 'early evangelism' phase, messages such as yours are actually very valuable. They show students that this isn't necessarily a professional dead end. Just my opinion... Best, f From drfredkin at ucsd.edu Thu Mar 1 16:06:42 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Thu, 1 Mar 2007 21:06:42 +0000 (UTC) Subject: [SciPy-user] Any Books on SciPy? References: <45E567A2.3050406@shrogers.com> <200702280821.35981.dd55@cornell.edu> Message-ID: Darren Dale wrote: > I haven't seen anyone mention "Numerical Methods in Engineering with > Python" by Jaan Kiusalaas. It was published in 2005, and uses > numarray, so it is somewhat dated considering all the impressive > developments with NumPy and SciPy. I mention it for the sake of > completeness. This book has a first chapter on the basics of Python. The remainder is a light weight introduction to numerical methods and does not teach anything about Python. -- From david at ar.media.kyoto-u.ac.jp Thu Mar 1 22:56:33 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 02 Mar 2007 12:56:33 +0900 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: <45E7060A.9040105@astraw.com> References: <200703010032.l210WfhI005995@glup.physics.ucf.edu> <45E7060A.9040105@astraw.com> Message-ID: <45E7A071.5020303@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > Perry Greenfield wrote: >> On Feb 28, 2007, at 7:32 PM, Joe Harrington wrote: >> >> >>> Hi Steve, >>> >>> I have taught Astronomical Data Analysis twice at Cornell using IDL, >>> and I will be teaching it next Fall at UCF using NumPy. Though I've >>> been active here in the recent past, I'm actually not a regular NumPy >>> user myself yet (I used Numeric experimentally for about 6 months in >>> 1997), so I'm a bit nervous. There isn't the kind of documentation >>> and how-to support for Numpy that there is for IDL, though our web >>> site is a start in that direction. One thought I've had in making the >>> transition easier is to put up a syntax and function concordance, >>> similar to that available for MATLAB. I thought this existed. Maybe >>> Perry can point me to it. Just adding a column to the MATLAB one >>> would be fine. >>> >> I made one for IDL, but I don't recall one for matlab. If anyone has >> done one, I sure would like to incorporate it into the tutorial I'm >> revising if possible. >> >> > I believe the reference is to this: http://scipy.org/NumPy_for_Matlab_Users Another reference which was really useful for me was this one http://37mm.no/matlab-python-xref.html I actually found it more useful than the wiki (at the time I made the step towards numpy for all my numerical needs, that is around one year ago). Note that this site also have reference wrt IDL and R, the latter having been useful for me when I thought about ditching matlab for R David From david.warde.farley at utoronto.ca Thu Mar 1 23:13:25 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Thu, 01 Mar 2007 23:13:25 -0500 Subject: [SciPy-user] Any Books on SciPy? In-Reply-To: References: <45E567A2.3050406@shrogers.com> <200702280821.35981.dd55@cornell.edu> Message-ID: <1172808805.14827.23.camel@rodimus> On Thu, 2007-03-01 at 21:06 +0000, Donald Fredkin wrote: > > I haven't seen anyone mention "Numerical Methods in Engineering with > > Python" by Jaan Kiusalaas. 
It was published in 2005, and uses > > numarray, so it is somewhat dated considering all the impressive > > developments with NumPy and SciPy. I mention it for the sake of > > completeness. > > This book has a first chapter on the basics of Python. The remainder > is > a light weight introduction to numerical methods and does not teach > anything about Python. A lab mate of mine showed me this today. It does seem useful in that it contains lots of little recipes. Unfortunately, I think that the recipe I was interested in, namely the conjugate gradient solver linear systems, was incorrect or at least incompatible with NumPy (the residuals were *diverging*, somehow) and I ended up rewriting it from scratch. David From robert.vergnes at yahoo.fr Fri Mar 2 01:33:16 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 2 Mar 2007 07:33:16 +0100 (CET) Subject: [SciPy-user] QME-Dev Workbench (wxSciPy) Alpha 1.6 - RELEASED TODAY on sourceforge Message-ID: <20070302063316.92779.qmail@web27411.mail.ukl.yahoo.com> The QME-DEV Data analysis Workbench Alpha 1.6 has been released: https://sourceforge.net/project/showfiles.php?group_id=181979 Looking for courageous testers..for the workbench. Solved issues: - window resizing, load/save, graph/plotting, lost of focus, change of the editor. Lib-Needed : Nympy/Scipy/Pylab, Python 2.4 , wxpython 2.8 Best Regards, Robert --------------------------------- D?couvrez une nouvelle fa?on d'obtenir des r?ponses ? toutes vos questions ! Profitez des connaissances, des opinions et des exp?riences des internautes sur Yahoo! Questions/R?ponses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From franckm at aims.ac.za Fri Mar 2 05:15:50 2007 From: franckm at aims.ac.za (Franck Kalala Mutombo) Date: Fri, 02 Mar 2007 12:15:50 +0200 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> Message-ID: <45E7F956.9050400@aims.ac.za> James Vincent wrote: > David, > > Thanks for the reply. I think I should have been clearer about the > problem. I have a surface patch of data points, not an actual whole > sphere. It will probably be a very small section of total sphere > surface. I would like to fit the sphere that best fits just those points > that I have. The center of the sphere will be highly dependent on the > curvature of the points. > > I think the leastsq routine is right, I just can't figure out how to > pass the data in yet (it's my first work with scipy). > > Jim > > > On Jan 25, 2007, at 9:31 AM, David Huard wrote: > >> Hi James, >> >> As a first guess, I'd say the center of the sphere is simply the mean >> of your data points, if they're all weighted equally. With only one >> parameter left to fit, it should be easy enough. However, you may want >> to look at the paper: >> >> Werman, Michael and Keren, Daniel >> A Bayesian method for fitting parametric and nonparametric models to >> noisy data >> Ieee Transactions on Pattern Analysis and Machine Intelligence, 23, 2001. >> >> They write that the Mean Square Error approach overestimates the >> radius in the case of circles. They don't talk about the 3D case, but >> I'd guess similar problems arise. They provide a method to fit >> parametric shapes with some robustness to data errors. 
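As for passing the points in: a minimal sketch of the leastsq-based sphere fit being discussed, with the x, y, z arrays handed to the residual function through the args keyword. The synthetic surface patch and the starting guess below are invented for illustration, not taken from the thread.

import numpy as np
from scipy.optimize import leastsq

def sphere_residuals(p, x, y, z):
    # residual = distance of each point from the candidate centre, minus the candidate radius
    a, b, c, r = p
    return np.sqrt((x - a)**2 + (y - b)**2 + (z - c)**2) - r

# synthetic surface patch near the pole of a sphere of radius 2 centred at (1, -1, 3)
theta = np.random.uniform(0.0, 0.3, 500)
phi = np.random.uniform(0.0, 2.0*np.pi, 500)
x = 1.0 + 2.0*np.sin(theta)*np.cos(phi)
y = -1.0 + 2.0*np.sin(theta)*np.sin(phi)
z = 3.0 + 2.0*np.cos(theta) + np.random.normal(0.0, 0.01, 500)

p0 = [x.mean(), y.mean(), z.mean(), 1.0]   # rough starting guess for (a, b, c, r)
p_fit, ier = leastsq(sphere_residuals, p0, args=(x, y, z))
print "fitted centre and radius:", p_fit

For a shallow patch the radius is only weakly constrained by the data, so a sensible starting radius (and possibly the robust methods mentioned above) matters more than it would for points covering a full sphere.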
>> >> Cheers, >> >> David >> >> >> >> 2007/1/25, James Vincent >: >> >> Hello, >> >> Is it possible to fit a sphere to 3D data points >> using scipy.optimize.leastsq? I'd like to minimize the residual >> for the distance from the actual x,y,z point and the fitted sphere >> surface. I can see how to minimize for z, but that's not really >> what I'm looking for. Is there a better way to do this? Thanks for >> any help. >> >> params = a,b,c and r >> a,b,c are the fitted center point of the sphere, r is the radius >> >> err = distance-to-center - radius >> err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r >> >> >> >> ---- >> James J. Vincent, Ph.D. >> National Cancer Institute >> National Institutes of Health >> Laboratory of Molecular Biology >> Building 37, Room 5120 >> 37 Convent Drive, MSC 4264 >> Bethesda, MD 20892 USA >> >> 301-451-8755 >> jjv5 at nih.gov >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > ---- > James J. Vincent, Ph.D. > National Cancer Institute > National Institutes of Health > Laboratory of Molecular Biology > Building 37, Room 5120 > 37 Convent Drive, MSC 4264 > Bethesda, MD 20892 USA > > 301-451-8755 > jjv5 at nih.gov > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi, I have a sequence of decimal number (1,2,...16) for example, I want if there is any function which can convert each one in binary. Thanks -- Franck African Institute for Mathematical Sciences -- www.aims.ac.za From jelle.feringa at ezct.net Fri Mar 2 07:26:21 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 2 Mar 2007 13:26:21 +0100 Subject: [SciPy-user] writing string from array Message-ID: <001301c75cc5$fcf0f130$c000a8c0@JELLE> Hi, I'm trying to send a grid of points to a subprocess.Popen object, via the .communicate(input=some_array) method. To do so I need to write a string that looks like this: xA, yA, zA, xB, yB, zB To do so I intend to use the mgrid function: sample_grid = sp.mgrid[min_x:max_x:spacing, min_y:max_y:spacing, min_z:max_z:spacing ] To obtain the xA, yA, zA values, and to add an array((0,0,0.1)) to these values to obtain xB, yB, zB Unfortuantely I have no idea on how I can do so (using the .tostring method?) Any pointers would be appreciated for sure! Thanks, -jelle Jelle Feringa EZCT ARCHITECTURE & DESIGN RESEARCH Office: +33 (0) 1 42 40 19 81 Fax: +33 (0) 1 43 14 94 61 Cell Nl: +31 (0) 6 44 02 10 15 Cell Fr: +33 (0) 6 63 51 27 46 www.ezct.net jelle.feringa at ezct.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Fri Mar 2 08:25:29 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Fri, 2 Mar 2007 14:25:29 +0100 Subject: [SciPy-user] Optimization Working only with a Specific Expression of the input Parameters Message-ID: Dear All, I was trying to fit some data using the leastsq package in scipy.optimize. 
The function I would like to use to fit my data is: log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))) where A1, mu1 and myvar1 are fitting parameters. For some reason, I used to get an error message from scipy.optimize telling me that I was not working with an array of floats. I suppose that this is due to the fact that the optimizer also tries solving for negative values of mu1 and myvar1, for which the log function (x is always positive) does not exist. In fact, if I use the fitting function: log(10.0)*A1/sqrt(2.0*pi)/log(myvar1**2.0)*exp(-((log(x/mu1**2.0))**2.0)/2.0/log(myvar1**2.0)/log(myvar1**2.0))) Where mu1 and myvar1 appear squared, then the problem does not exist any longer and the results are absolutely ok. Can anyone enlighten me here and confirm this is what is really going on? Kind Regards Lorenzo From nmarais at sun.ac.za Fri Mar 2 08:37:49 2007 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 02 Mar 2007 15:37:49 +0200 Subject: [SciPy-user] Preconditioned iterative matrix solution Message-ID: Hi, Is there any practical examples/documentation of how to do preconditioned iterative matrix solution with scipy? If it turns out there isn't any I could put my experiences up on the wiki. Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From nwagner at iam.uni-stuttgart.de Fri Mar 2 08:50:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 02 Mar 2007 14:50:54 +0100 Subject: [SciPy-user] Preconditioned iterative matrix solution In-Reply-To: References: Message-ID: <45E82BBE.8040901@iam.uni-stuttgart.de> Neilen Marais wrote: > Hi, > > Is there any practical examples/documentation of how to do preconditioned > iterative matrix solution with scipy? If it turns out there isn't any I > could put my experiences up on the wiki. > > Thanks > Neilen > > Hi Neilen, I am not aware of any scipy example. Thus it would be great if you could add it. Thanks in advance ! Cheers Nils From bnuttall at uky.edu Fri Mar 2 08:50:53 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Fri, 02 Mar 2007 08:50:53 -0500 Subject: [SciPy-user] Optimization Working only with a Specific Expression of the input Parameters In-Reply-To: References: Message-ID: <6.0.1.1.2.20070302083425.027a3548@pop.uky.edu> Hello, I'm just an amateur, but it seems to me like the array data in myvar1 are likely integers. When you raise the data to a power of type float (i.e. 2.0) all the members of the array are automatically converted to real (float) types. Easiest and fastest thing I know to do would be: myvar1 = myvar1*1.0 Or, and probably preferred (assuming you are using the numpy array type and have imported it): myvar1 = numpy.array(myvar1,dtype=float) Brandon At 08:25 AM 3/2/2007, you wrote: >Dear All, >I was trying to fit some data using the leastsq package in >scipy.optimize. The function I would like to use to fit my data is: > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))) > > where A1, mu1 and myvar1 are fitting parameters. >For some reason, I used to get an error message from scipy.optimize >telling me that I was not working with an array of floats. >I suppose that this is due to the fact that the optimizer also tries >solving for negative values of mu1 and myvar1, for which the log >function (x is always positive) does not exist. 
>In fact, if I use the fitting function: > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1**2.0)*exp(-((log(x/mu1**2.0))**2.0)/2.0/log(myvar1**2.0)/log(myvar1**2.0))) > >Where mu1 and myvar1 appear squared, then the problem does not exist >any longer and the results are absolutely ok. >Can anyone enlighten me here and confirm this is what is really going on? >Kind Regards > >Lorenzo >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user Brandon C. Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From robert.vergnes at yahoo.fr Fri Mar 2 08:57:30 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 2 Mar 2007 14:57:30 +0100 (CET) Subject: [SciPy-user] odeint-lsoda Message-ID: <20070302135733.14339.qmail@web27403.mail.ukl.yahoo.com> Hello, is there a manual for odeint and lsoda ? On the following small code (see below) oedint gives a warning and I would like to understand what is it... exactly and how to adjust it. Thanx Robert Warning messgae: Commandline: C:\Python25\python.exe F:\RFV-WO~1\DT_DAN~1\SOFTWA~1\MQE_PE~1\PYTHON~1\FEB200~2\SmallODE.py ##Workingdirectory: F:\RFV-WorkFiles\DT_Danil_EQM\SoftwarePendulumEQM\MQE_Pendulum_2006\Python_rfv\Feb20007_QPendulum ##Timeout: 0 ms ## ## lsoda-- at current t (=r1), mxstep (=i1) steps ## taken on this call before reaching tout ## In above message, I1 = 500 ## In above message, R1 = 0.3691558106967E+02 ##Excess work done on this call (perhaps wrong Dfun type). ##Run with full_output = 1 to get quantitative information. ##[[ 0.05 0.785 ] ## [-0.08825151 0.78480528] ## [-0.22531056 0.78323728] ## ..., ## [ 0. 0. ] ## [ 0. 0. ] ## [ 0. 0. ]] ## ##Process "Pyhton Interpreter" terminated, ExitCode: 00000000 small ode code: #Import of librairies used: from scipy import * from scipy.integrate import * from math import * #from pylab import * #from string import * #import time import types #import logging ------------------------------------------------------ Small ode test ------------------------------------------------------- def func_r3(y,t): return [ - fo*sign(y[0])- w0**2*sin(y[1])+ eval(RHS), y[0]] # rhs as alpha*sin(gamma*t+beta) def func_f3(y,t): return [ - fo*sign(y[0])- w0**2*sin(y[1]), y[0]] #----Param Settings:---- #Set the time: t=arange(1. ,50. ,0.01) # from 1 to aaa in steps of bbb #Define wo w0 = sqrt (9.81/0.5) #Define fo fo = 0.15 #example value fo=0.2 #we define parameters of the electrical or mechanical circuit alpha = 1.5 # try between 0.3 and 5 gamma = 4.43 #for 50Hz - gamma = 314.159 / and for 60Hz gamma = 376.99 - or w0 for forced oscillation beta = 0 #(used only if we now the exact phase change RHS=rhs = 'alpha*sin(gamma*t+beta)' #Define Initial Condtion rfv_y0_0 = 0.05 # = theta_dot(t=0) in rd/s rfv_y1_0 = 0.785 # = theta (t=0) in rd y0 = [rfv_y0_0, rfv_y1_0] # definition of the initial condition vector # 0.08727 rd = 5 deg rfv_qm=-0.08727 rfv_qp=0.08727 #--------------------------------------- #QPendulum(Pendultype , y0, t, w0, fo =0, RHS='', qp=0, qm=0, args='', Dfun='') y_final = array([[0,0]]) # we define y_final with 2 vectors - the same y=odeint(func_f3,y0,t) print y --------------------------------- D?couvrez une nouvelle fa?on d'obtenir des r?ponses ? toutes vos questions ! Profitez des connaissances, des opinions et des exp?riences des internautes sur Yahoo! 
Questions/R?ponses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.warde.farley at utoronto.ca Fri Mar 2 11:41:54 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Fri, 2 Mar 2007 11:41:54 -0500 Subject: [SciPy-user] dot product of vectors stored as "matrix"? Message-ID: <31AF6D91-60CE-42C6-B09A-224EBF10C43F@utoronto.ca> This is more a question of elegance than anything else, but, given 2 column vectors (or row vectors) stored as type "matrix", why doesn't dot(x,y) give you their dot product? I get complaints about "matrices not aligned". Of course dot(x.T, y) works (with the annoying caveat that you get a matrix and have to dereference its single element), and so does dot (x.A.squeeze(), y.A.squeeze()), and so too does sum(multiply(x,y)). Is there a preferred notation for linear-algebra dot products, one that's faster than the others? David From drfredkin at ucsd.edu Fri Mar 2 11:46:13 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Fri, 2 Mar 2007 16:46:13 +0000 (UTC) Subject: [SciPy-user] odeint-lsoda References: <20070302135733.14339.qmail@web27403.mail.ukl.yahoo.com> Message-ID: Robert VERGNES wrote: > is there a manual for odeint and lsoda ? You can view the source code for lsoda on netlib. The extensive comments at the head of the source file are clear and complete. Odeint is just a wrapper for lsoda, so when you understand what lsoda is doing you will be all set. In your case, you might simply be computing for to coarse a mesh in t, so "too much work" has to be done for each step. The error message tells you what value of t was reached before lsoda gave up. -- From jan at aims.ac.za Fri Mar 2 11:53:22 2007 From: jan at aims.ac.za (Jan Groenewald) Date: Fri, 2 Mar 2007 18:53:22 +0200 Subject: [SciPy-user] Integer to binary [Was: Fitting sphere to 3d data points] In-Reply-To: <45E7F956.9050400@aims.ac.za> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> <45E7F956.9050400@aims.ac.za> Message-ID: <20070302165322.GA25987@aims.ac.za> Hi Franck When you post, please don't reply to another thread of messages with a good descriptive subject. Your message not only gets lost by any email client that sorts by threads (mails with the same subject matter) it also annoys some readers who use this feature. Please start a blank message if you have a new question, and give it a descriptive subject. On Fri, Mar 02, 2007 at 12:15:50PM +0200, Franck Kalala Mutombo wrote: > I have a sequence of decimal number (1,2,...16) for example, I want if > there is any function which can convert each one in binary. A quick google of int2bin+python found this: def int2bin(num, width=32): return ''.join(['%c'%(ord('0')+bool((1< Hi Brandon, Thanks for your advice, but I am a bit confused: myvar1 is simply a fitting parameter (i.e. it is used to return an output), nothing is stored in it to start from. I do not define it anywhere. It is not an array. Furthermore, I have to say that if I define the error function as the absolute difference between my data and the function I want to use for the fitting, then the code executes even without raising any parameter to the second power, but returns nonsense (negative variance and so on). Instead, without the abs(), I still get the same problem mentioned in my previous email. There is something I must be misunderstanding...it is not a tough optimization at all the one I am carrying out... 
Cheers Lorenzo Hello, I'm just an amateur, but it seems to me like the array data in myvar1 are likely integers. When you raise the data to a power of type float (i.e. 2.0) all the members of the array are automatically converted to real (float) types. Easiest and fastest thing I know to do would be: myvar1 = myvar1*1.0 Or, and probably preferred (assuming you are using the numpy array type and have imported it): myvar1 = numpy.array(myvar1,dtype=float) Brandon At 08:25 AM 3/2/2007, you wrote: > >Dear All, > >I was trying to fit some data using the leastsq package in > >scipy.optimize. The function I would like to use to fit my data is: > > > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))) > > > > where A1, mu1 and myvar1 are fitting parameters. > >For some reason, I used to get an error message from scipy.optimize > >telling me that I was not working with an array of floats. > >I suppose that this is due to the fact that the optimizer also tries > >solving for negative values of mu1 and myvar1, for which the log > >function (x is always positive) does not exist. > >In fact, if I use the fitting function: > > > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1**2.0)*exp(-((log(x/mu1**2.0))**2.0)/2.0/log(myvar1**2.0)/log(myvar1**2.0))) > > > >Where mu1 and myvar1 appear squared, then the problem does not exist > >any longer and the results are absolutely ok. > >Can anyone enlighten me here and confirm this is what is really going on? > >Kind Regards > > > >Lorenzo > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > Brandon C. Nuttall > >Dear All, > >I was trying to fit some data using the leastsq package in > >scipy.optimize. The function I would like to use to fit my data is: > > > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))) > > > > where A1, mu1 and myvar1 are fitting parameters. > >For some reason, I used to get an error message from scipy.optimize > >telling me that I was not working with an array of floats. > >I suppose that this is due to the fact that the optimizer also tries > >solving for negative values of mu1 and myvar1, for which the log > >function (x is always positive) does not exist. > >In fact, if I use the fitting function: > > > >log(10.0)*A1/sqrt(2.0*pi)/log(myvar1**2.0)*exp(-((log(x/mu1**2.0))**2.0)/2.0/log(myvar1**2.0)/log(myvar1**2.0))) > > > >Where mu1 and myvar1 appear squared, then the problem does not exist > >any longer and the results are absolutely ok. > >Can anyone enlighten me here and confirm this is what is really going on? > >Kind Regards > > > >Lorenzo > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > Brandon C. Nuttall From franckm at aims.ac.za Fri Mar 2 16:31:27 2007 From: franckm at aims.ac.za (Franck Kalala Mutombo) Date: Fri, 02 Mar 2007 23:31:27 +0200 Subject: [SciPy-user] Integer to binary [Was: Fitting sphere to 3d data points] In-Reply-To: <20070302165322.GA25987@aims.ac.za> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> <45E7F956.9050400@aims.ac.za> <20070302165322.GA25987@aims.ac.za> Message-ID: <45E897AF.5010505@aims.ac.za> Jan Groenewald wrote: > Hi Franck > > When you post, please don't reply to another thread of messages with > a good descriptive subject. 
Your message not only gets lost by any > email client that sorts by threads (mails with the same subject matter) > it also annoys some readers who use this feature. > > Please start a blank message if you have a new question, and give it > a descriptive subject. > > On Fri, Mar 02, 2007 at 12:15:50PM +0200, Franck Kalala Mutombo wrote: >> I have a sequence of decimal number (1,2,...16) for example, I want if >> there is any function which can convert each one in binary. > > A quick google of int2bin+python found this: > > def int2bin(num, width=32): > return ''.join(['%c'%(ord('0')+bool((1< > cheers, > Jan Dear Jan Thank you for your advices and for your help. cheers Franck -- Franck African Institute for Mathematical Sciences -- www.aims.ac.za From wbaxter at gmail.com Fri Mar 2 16:43:38 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 3 Mar 2007 06:43:38 +0900 Subject: [SciPy-user] [Numpy-discussion] NumPy in Teaching In-Reply-To: <45E7060A.9040105@astraw.com> References: <200703010032.l210WfhI005995@glup.physics.ucf.edu> <45E7060A.9040105@astraw.com> Message-ID: On 3/2/07, Andrew Straw wrote: > Perry Greenfield wrote: > > On Feb 28, 2007, at 7:32 PM, Joe Harrington wrote: > > > > > > > > > I believe the reference is to this: http://scipy.org/NumPy_for_Matlab_Users That page also has this link at the bottom: http://37mm.no/matlab-python-xref.html That page contains a Numpy command cross reference for Matlab IDL and R. --bb From philfarm at fastmail.fm Fri Mar 2 23:27:56 2007 From: philfarm at fastmail.fm (philfarm at fastmail.fm) Date: Fri, 02 Mar 2007 22:27:56 -0600 Subject: [SciPy-user] Best way to structure dynamic system-of-systems simulations? Message-ID: <1172896076.28906.1177490111@webmail.messagingengine.com> I'm looking for any pointers on the best way to to structure the simulation of a system composed of subsystems, i.e., I want to build my system model as an assemblage of component models, where some components may use as U values the Y values from other components. Each of the components can be modeled in state-space form, i.e.: X_dot = A(X,U,t) Y = B(X,U,t) where A and B are matrix functions (not necessarily linear). Is anyone using any sort of framework to simplify this task? What's the best approach? If it makes any difference, I've been using odeint. Thanks, Phil -- http://www.fastmail.fm - And now for something completely different From ckkart at hoc.net Sat Mar 3 00:53:40 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 03 Mar 2007 14:53:40 +0900 Subject: [SciPy-user] Optimization Working only with a Specific Expression of the input Parameters In-Reply-To: <45E87B5B.4000703@gmail.com> References: <45E87B5B.4000703@gmail.com> Message-ID: <45E90D64.7040304@hoc.net> Lorenzo Isella wrote: > Hi Brandon, > Thanks for your advice, but I am a bit confused: myvar1 is simply a > fitting parameter (i.e. it is used to return an output), nothing is > stored in it to start from. > > It is not an array. Furthermore, I have to say that if I define the > error function as the absolute difference between my data and the > function I want to use for the fitting, then the code executes even > without raising any parameter to the second power, but returns nonsense > (negative variance and so on). > Instead, without the abs(), I still get the same problem mentioned in my > previous email. Can you post your code, including the data? 
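Back to the earlier question about structuring system-of-systems simulations: one lightweight approach with odeint is to keep each subsystem as a pair of functions (a state derivative and an output), stack the subsystem states into one vector, and do the Y-to-U wiring inside a single top-level derivative function. The sketch below is only an illustration under invented assumptions (two made-up blocks, a first-order plant and an integral controller, with arbitrary gains).

import numpy as np
from scipy.integrate import odeint

# subsystem 1: a first-order plant, x_dot = A(x, u, t), y = B(x, u, t)
def plant_deriv(x, u, t):
    return np.array([-0.5*x[0] + u])

def plant_output(x, u, t):
    return x[0]

# subsystem 2: an integral controller acting on the tracking error fed in as u
def controller_deriv(x, u, t):
    return np.array([u])               # state = integral of the error

def controller_output(x, u, t):
    return 2.0*x[0] + u                # u_plant = Ki*integral(err) + Kp*err, with Ki=2, Kp=1

def system_deriv(state, t, setpoint=1.0):
    xp, xc = state[:1], state[1:]      # split the stacked state vector into subsystem states
    err = setpoint - plant_output(xp, 0.0, t)
    u_plant = controller_output(xc, err, t)
    return np.concatenate([plant_deriv(xp, u_plant, t),
                           controller_deriv(xc, err, t)])

t = np.linspace(0.0, 20.0, 500)
trajectory = odeint(system_deriv, [0.0, 0.0], t)   # columns: plant state, controller state

One caveat: if a block's output depends directly on its own input, this naive wiring creates an algebraic loop that has to be resolved separately.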
Christian From robert.vergnes at yahoo.fr Sat Mar 3 04:05:40 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Sat, 3 Mar 2007 10:05:40 +0100 (CET) Subject: [SciPy-user] RE : Re: odeint-lsoda In-Reply-To: Message-ID: <20070303090540.29592.qmail@web27408.mail.ukl.yahoo.com> I looked in the site package before but can't see anything .. (C:\Python25\Lib\site-packages\scipy\integrate) All tests from the tests folder work, but I don't understand how the odepack links to the odeint (albeit i see the def...). And can't find any file lsoda.py (or .f) I probably missed something...? Donald Fredkin a ?crit : Robert VERGNES wrote: > is there a manual for odeint and lsoda ? You can view the source code for lsoda on netlib. The extensive comments at the head of the source file are clear and complete. Odeint is just a wrapper for lsoda, so when you understand what lsoda is doing you will be all set. In your case, you might simply be computing for to coarse a mesh in t, so "too much work" has to be done for each step. The error message tells you what value of t was reached before lsoda gave up. -- _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user --------------------------------- D?couvrez une nouvelle fa?on d'obtenir des r?ponses ? toutes vos questions ! Profitez des connaissances, des opinions et des exp?riences des internautes sur Yahoo! Questions/R?ponses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Sat Mar 3 15:23:16 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sat, 03 Mar 2007 21:23:16 +0100 Subject: [SciPy-user] Optimization Working only with a Specific Expression of the input Parameters Message-ID: <45E9D934.20202@gmail.com> Can you post your code, including the data? Christian Hi Cristian, Actually I already posted this on the mailing list, but nevertheless I will give it another try. Now, this is the code I fixed after posting on the mailing list: #! /usr/bin/env python from scipy import * import pylab # used to read the .csv file # now I want to try reading some .csv file data = pylab.load("120km-TPM.csv",delimiter=',') vecdim=shape(data) # now I introduce a vector with the dimensions of data, the file I read print "the dimensions of data are" print vecdim # now very careful! in Python the arrays start with index zero. x=data[0:vecdim[0],0] # it means: slice the rows from the 1st one (0) to the last one ( # (vecdim[0]) for the first column (0) #plot(x,data[:,1]) #show() # uncomment them to plot a distribution # 1st problem: if uncomment the previous two lines, I get a warning and until I close # the window, the script does not progress. y_meas=data[0:vecdim[0],1] # measured data, for example the 2nd column of the .csv file # Here I define my own error function i.e. the function I want to minimize def residuals(p, y, x): A1,mu1,myvar1 = p err = abs( y-log(10.0)*A1/sqrt(2.0*pi)/log(abs(myvar1))*exp(-((log(x/abs(mu1)))**2.0)/2.0/log(abs(myvar1))/log(abs(myvar1)))) return err #def peval(x, p): # return log(10.0)*p[0]/sqrt(2.0*pi)/log(abs(p[2]))*exp(-((log(x/abs(p[1])))**2.0)/2.0/log(abs(p[2]))/log(abs(p[2]))) # NB: I am using the mean as a mu1**2. and the std as myvar1**2. # otherwise I run into problems (probably the optimizers tries some negative values of the # mean and stops as it sees a complex number and it stops). 
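# (Aside, not part of the original script.) A hedged alternative to the squaring/abs()
# workaround described above: let leastsq search over the logarithms of the parameters,
# so the values handed to log() stay positive by construction. The helper below is only
# defined, not called, and its name is made up.
def residuals_logparams(q, y, x):
    A1,mu1,myvar1 = exp(q)   # exponentiating keeps all three parameters strictly positive
    return y-log(10.0)*A1/sqrt(2.0*pi)/log(myvar1)*exp(-((log(x/mu1))**2.0)/2.0/log(myvar1)/log(myvar1))
# usage would be: plsq = leastsq(residuals_logparams, log(array(p0)), args=(y_meas, x))
# followed by:    A1, mu1, myvar1 = exp(plsq[0])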
print "the initial guesses are" p0 = [50000.0,90.0, 1.59] print array(p0) # now I try performing a least-square fitting from scipy.optimize import leastsq # now I try actually solving the problem #print "ok up to here" plsq = leastsq(residuals, p0, args=(y_meas, x)) #print "ok up to here2" print "the optimized values of the parameters are:" print plsq[0] coeff=plsq[0] #coeff[1]=coeff[1]**2. #coeff[2]=coeff[2]**2. print "so the amplitude is", coeff[0] print "and the mean", coeff[1] print "and the std", coeff[2] print"and these are the same results as those provided by R!" print "So far so good" This way the code works, but if you try running it without the abs() around mu1 and myvar1, then you get again the error message concerning the floating point numbers. I also (re)attach the data, which you will be able to access via a link. Kind Regards Lorenzo -------------- next part -------------- A non-text attachment was scrubbed... Name: 120km-TPM.csv Type: text/csv Size: 9696 bytes Desc: not available URL: From ckkart at hoc.net Sat Mar 3 19:17:51 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 04 Mar 2007 09:17:51 +0900 Subject: [SciPy-user] Optimization Working only with a Specific Expression of the input Parameters In-Reply-To: <45E9D934.20202@gmail.com> References: <45E9D934.20202@gmail.com> Message-ID: <45EA102F.9070505@hoc.net> Hi Lorenzo, Lorenzo Isella wrote: > Actually I already posted this on the mailing list, but nevertheless I > will give it another try. Sorry I have not seen it. The problem is as you guessed that leastsq is trying negative parameters which will turn the residual complex. But the reason is simple and easy to solve: your initital guess for the first parameters is bad: > print "the initial guesses are" > p0 = [50000.0,90.0, 1.59] If you set it to 1e8 it works very well. Christian From gael.varoquaux at normalesup.org Sun Mar 4 14:27:39 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 4 Mar 2007 20:27:39 +0100 Subject: [SciPy-user] TraitsUI tutorial Message-ID: <20070304192738.GK1070@clipper.ens.fr> TraitsUI is a package to create user interfaces by building dialogs from objects. It actually does much more and provides tools for dynamical notifications and is thus a good framework for data-flow programming and inversion of control. As it is based on WxPython it is portable and integrates in other GUIs (it can be used in an application with matplotlib, for instance). I have used it very successfully to write a software to control a lab experiment. I find that it is the easiest way to build GUI under python. I had a few difficulties in the beginning, due my ignorance of basics facts in GUI programming. I have writing a tutorial that targets the casual programmer, hopping to leverage traitsUI to just about any scientist that uses python. The tutorial has been mentioned a few times on the ML, but this time I have made the last modifications I wanted to, so I announce the first version that I am not ashamed of: http://www.gael-varoquaux.info/computers/traits_tutorial/index.html. I have also made a cookbook entry: http://scipy.org/TraitsUI If you agree with me that this is the easiest way to build graphical interfaces under python, please help me spread the word. 
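To make the dialogs-from-objects idea concrete, here is a small sketch along the lines of the tutorial. The class and its fields are invented for the example, and it assumes the imports live in the enthought namespace, as they did at the time of this thread.

from enthought.traits.api import HasTraits, Float, Int, Str

class LaserScan(HasTraits):
    """Each trait below shows up as an editable widget in the auto-generated dialog."""
    sample_name   = Str('sample 1')
    wavelength_nm = Float(632.8)
    shots         = Int(100)

scan = LaserScan()
scan.configure_traits()   # pops up a wxPython dialog built from the traits

Static notification methods such as _wavelength_nm_changed then give the dynamic, data-flow behaviour mentioned above.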
Cheers, Ga?l From davbrow at gmail.com Mon Mar 5 03:43:12 2007 From: davbrow at gmail.com (Dave) Date: Mon, 5 Mar 2007 00:43:12 -0800 Subject: [SciPy-user] OSX _fitpack import problem Message-ID: <588f52020703050043g9cfd772kc4dbb30ef27cfbe1@mail.gmail.com> I have not been able to build a completely working scipy for python 2.5 on OS X (PPC). Much of it seems to work but I get an error when trying to import scipy.interpolate: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/fitpack.py in () 32 'bisplrep', 'bisplev'] 33 __version__ = "$Revision: 2252 $"[10:-1] ---> 34 import _fitpack 35 from numpy import atleast_1d, array, ones, zeros, sqrt, ravel, transpose, \ 36 dot, sin, cos, pi, arange, empty, int32 : dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/_fitpack.so, 2): Symbol not found: _e_wsle Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/_fitpack.so Expected in: dynamic lookup I installed using instructions from the scipy web site (http://www.scipy.org/Installing_SciPy/Mac_OS_X). Here's additional info. gfortran --version GNU Fortran 95 (GCC) 4.3.0 20061223 (experimental) gcc --version powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5367) I installed scipy from the tar archive (scipy-0.5.2), not svn. In searching I found postings where others have had this problem over the last few months but I didn't find a solution. Does anyone know what's going on and how this can be resolved? -- David From giorgio.luciano at chimica.unige.it Mon Mar 5 05:50:42 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Mon, 05 Mar 2007 11:50:42 +0100 Subject: [SciPy-user] new repository for free software and forum community started Message-ID: <45EBF602.6010902@chimica.unige.it> Dear All I'm happy and proud to announce that a repository for free chemometric software (in particular Python oriented) has started. at www.chemometrics.it Some time ago I've sent an email to the ICS asking if anybody knows the esistence of a python repository for chemometrics, I had a positive feedback from the community and with the help of other chemometricians I've decided to gather free software available for doing common chemometric tasks. In the site you will also find a forum where to discuss -> theoretical aspects of chemometrics -> job opportunities and cooperation with other chemometricians -> software request/news I've tried to link to existing sites that already give a huge contribute to chemometrics (KVL,Chemometrics.se just to cite two of them) but I will be very glad if you would like to contribute with your own links I hope this initiave could help to spread chemometrics and the use of free routine/software for chemometrics but that it also will become a place of discussion for anybody interested. Obviously we need your help and enthusiasm. If you would like to upload software links, routines etc, just register for free and do it, it's just so easy and you are very encouraged to do it. The initiave is just at its beginning so feel free to report any technical problem Any kind of feedback is appreciated Since it's a no profit personal initiave (I bought the domain and the rent the hosting) finally let me say that any mirroring or hosting will be greatly appreciated. I hope that this initiative will be useful ;) Best Regards Giorgio Luciano -- -======================- Dr Giorgio Luciano Ph.D. Di.C.T.F.A. 
Dipartimento di Chimica e Tecnologie Farmaceutiche e Alimentari Via Brigata Salerno (ponte) - 16147 Genova (GE) - Italy email luciano at dictfa.unige.it -======================- From gonzalezmancera+scipy at gmail.com Mon Mar 5 13:30:32 2007 From: gonzalezmancera+scipy at gmail.com (Andres Gonzalez-Mancera) Date: Mon, 5 Mar 2007 13:30:32 -0500 Subject: [SciPy-user] OSX _fitpack import problem Message-ID: David, I'm not an expert on the subject but I have Scipy installed on Mac os 10.4.8. I wasn't successful using python 2.5 and so I downgraded to 2.4 and everything builds out of the box with no problems following the instructions. I don't know if the SVN version plays nicer with 2.5 but the release version which I also used didn't. Also remember to use: sudo python setup.py config_fc --fcompiler=gnu95 build if you have both g77 and gfortran installed. I hope this helps, Andres On 3/5/07, scipy-user-request at scipy.org wrote: > > Message: 2 > Date: Mon, 5 Mar 2007 00:43:12 -0800 > From: Dave > Subject: [SciPy-user] OSX _fitpack import problem > To: scipy-user at scipy.org > Message-ID: > <588f52020703050043g9cfd772kc4dbb30ef27cfbe1 at mail.gmail.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > I have not been able to build a completely working scipy for python > 2.5 on OS X (PPC). Much of it seems to work but I get an error when > trying to import scipy.interpolate: > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/fitpack.py > in () > 32 'bisplrep', 'bisplev'] > 33 __version__ = "$Revision: 2252 $"[10:-1] > ---> 34 import _fitpack > 35 from numpy import atleast_1d, array, ones, zeros, sqrt, ravel, > transpose, \ > 36 dot, sin, cos, pi, arange, empty, int32 > > : > dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/_fitpack.so, > 2): Symbol not found: _e_wsle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/_fitpack.so > Expected in: dynamic lookup > > > I installed using instructions from the scipy web site > (http://www.scipy.org/Installing_SciPy/Mac_OS_X). Here's additional > info. > > gfortran --version > GNU Fortran 95 (GCC) 4.3.0 20061223 (experimental) > > gcc --version > powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5367) > > I installed scipy from the tar archive (scipy-0.5.2), not svn. > > In searching I found postings where others have had this problem over > the last few months but I didn't find a solution. Does anyone know > what's going on and how this can be resolved? > > -- David > > > ------------------------------ -- Andres Gonzalez-Mancera Biofluid Mechanics Lab Department of Mechanical Engineering University of Maryland, Baltimore County andres.gonzalez at umbc.edu 410-455-3347 From steveire at gmail.com Mon Mar 5 15:24:15 2007 From: steveire at gmail.com (Stephen Kelly) Date: Mon, 5 Mar 2007 20:24:15 +0000 Subject: [SciPy-user] Arrayfns in numpy? Message-ID: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> Hi, I attempted to post this to the numpy-discussion list, but it seems that it has not appeared. I'm working on a project that requires interpolation, and I found this post (http://mail.python.org/pipermail/python-list/2000-August/050462.html ) which works fine, but depends on Numeric and does not seem to be available on the latest numpy. 
Is there some other way I should be doing interpolation using numpy, or is the omission of arrayfns an oversight? Kind regards, Stephen. -------------- next part -------------- An HTML attachment was scrubbed... URL: From as8ca at virginia.edu Mon Mar 5 15:56:44 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Mon, 5 Mar 2007 15:56:44 -0500 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> Message-ID: <20070305205644.GA18701@virginia.edu> On 05/03/07: 20:24, Stephen Kelly wrote: > Is there some other way I should be doing interpolation using numpy, > or is the omission of arrayfns an oversight? You could use the functions in scipy.interpolate. For linear interpolation, use something like: from scipy.interpolate import interp1d # x, y are the original data values intp = interp1d(x, y) # xx is an array of new x values for which to interpolate (or just a # scalar) yy = intp(xx) If you can't use scipy, then it shouldn't be too hard to write your own linear interpolation function. You might find searchsorted() function useful in that case (in case your original x data is not regularly spaced). -Alok -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From drfredkin at ucsd.edu Mon Mar 5 19:49:11 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Tue, 6 Mar 2007 00:49:11 +0000 (UTC) Subject: [SciPy-user] RE : Re: odeint-lsoda References: <20070303090540.29592.qmail@web27408.mail.ukl.yahoo.com> Message-ID: You can find the source for lsoda at http://netlib.org/. It's good to become accustomed to searching in netlib, but you can "cheat" and go directly to the source at http://netlib.org/alliant/ode/prog/lsoda.f. Robert VERGNES wrote: > I probably missed something...? > > > Donald Fredkin a icrit : Robert VERGNES wrote: > > > is there a manual for odeint and lsoda ? > > You can view the source code for lsoda on netlib. The extensive > comments at the head of the source file are clear and complete. Odeint > is just a wrapper for lsoda, so when you understand what lsoda is > doing you will be all set. > > In your case, you might simply be computing for to coarse a mesh in t, > so "too much work" has to be done for each step. The error message > tells you what value of t was reached before lsoda gave up. -- From steveire at gmail.com Tue Mar 6 05:54:19 2007 From: steveire at gmail.com (Stephen Kelly) Date: Tue, 6 Mar 2007 10:54:19 +0000 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <20070305205644.GA18701@virginia.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> Message-ID: <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> Hi, Thanks for the reply. I don't want to depend on scipy for a small task like this. I want to keep dependencies small. I looked at the searchsorted function, but don't see how it would be useful. My data is not regularly spaced. It is X-ray diffraction data in which each intensity has been shifted slightly. I am currently using arrayfns from Numeric to get data with regular spacing. Could you give more information on how to interpolate the data? I don't know where to start. As an aside, here's some things in Numeric that aren't in numpy. Is this oversight, or are there no plans to implement them? 1. arrayfns module 2. 
UserArray module with UserArray class (comparible to UserDict, UserList). Kind regards, Stephen. On 3/5/07, Alok Singhal wrote: > > On 05/03/07: 20:24, Stephen Kelly wrote: > > Is there some other way I should be doing interpolation using numpy, > > or is the omission of arrayfns an oversight? > > You could use the functions in scipy.interpolate. For linear > interpolation, use something like: > > from scipy.interpolate import interp1d > > # x, y are the original data values > intp = interp1d(x, y) > > # xx is an array of new x values for which to interpolate (or just a > # scalar) > yy = intp(xx) > > If you can't use scipy, then it shouldn't be too hard to write your > own linear interpolation function. You might find searchsorted() > function useful in that case (in case your original x data is not > regularly spaced). > > -Alok > > -- > Alok Singhal * * > Graduate Student, dept. of Astronomy * * * > University of Virginia > http://www.astro.virginia.edu/~as8ca/ * * > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jelle.feringa at ezct.net Tue Mar 6 07:18:40 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 6 Mar 2007 13:18:40 +0100 Subject: [SciPy-user] tvtk GL2PS Message-ID: <001501c75fe9$942463e0$c000a8c0@JELLE> Hi, Currently I'm using the enthon (win32, 2.4) 1.0.0 release, which is terrific! Though there is a minor, but to me important glitch in (t)vtk; >>>window >>>window.scene.save_gl2ps('test.pdf') Saving as a vector PS/EPS/PDF/TeX file using GL2PS is either not supported by your version of VTK or you have not configured VTK to work with GL2PS -- read the documentation for the vtkGL2PSExporter class. Which is really too bad. -jelle Ps: If you have ran into the same issues and have been able to recompile vtk with the VTK_USE_GL2PS flag enabled, it'd be great if you would be willing to share the binaries with me! I know http://mayavi.sourceforge.net/cgi-bin/moin.cgi/BuildingVTKOnWin32 has precise building instructions, but I'm afraid I'd need to confess I'm compiler-challenged ;') -------------- next part -------------- An HTML attachment was scrubbed... URL: From as8ca at virginia.edu Tue Mar 6 08:19:20 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Tue, 6 Mar 2007 08:19:20 -0500 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> Message-ID: <20070306131920.GA11989@virginia.edu> On 06/03/07: 10:54, Stephen Kelly wrote: > I don't want to depend on scipy for a small task like this. I want to keep > dependencies small. > > I looked at the searchsorted function, but don't see how it would be > useful. Given a sorted arry a, and some values v, ind = searchsorted(a, v) returns ind such that a[ind[i]] > v[i] > a[ind[i]-1] (I hope I got that right :-) ). > My data is not regularly spaced. It is X-ray diffraction data in > which each intensity has been shifted slightly. I am currently using > arrayfns from Numeric to get data with regular spacing. Could you give > more information on how to interpolate the data? I don't know where to > start. Here is how I would go about it: import numpy # Test data (non-uniform spacing in x) x = numpy.array([0.0, 3.5, 6.0, 10.0, 15.0, 25.0]) y = numpy.array([13.0, 6.0, -6.0, 0.0, 4.0, 18.0]) # The data values for which interpolation is required. xx[0] should # be > x[0], and xx[-1] should be < x[-1]. Otherwise, undefined # behavior. xx = numpy.mgrid[0.5:25:0.5] # High indices hi = numpy.searchsorted(x, xx) # Low indices lo = hi - 1 slopes = (y[hi] - y[lo])/(x[hi] - x[lo]) # Interpolated data yy = y[lo] + slopes*(xx - x[lo]) # To plot using matplotlib: import pylab pylab.plot(x, y, 'ro', xx, yy, 'b') pylab.show() > As an aside, here's some things in Numeric that aren't in numpy. Is this > oversight, or are there no plans to implement them? > 1. arrayfns module > 2. UserArray module with UserArray class (comparible to UserDict, > UserList). I don't know the answer to that - maybe interpolation etc., are more suited to be in a 'scientific library' than an 'array library', so the creators of scipy/numpy moved the functionality of those modules to scipy and removed the modules from numpy? I don't know what else (except interpolation) did the UserArray class/module have, so I can't say for sure. -Alok -- Alok Singhal * * Graduate Student, dept. 
of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From mathias.wagner at physik.tu-darmstadt.de Tue Mar 6 09:20:12 2007 From: mathias.wagner at physik.tu-darmstadt.de (Mathias Wagner) Date: Tue, 6 Mar 2007 15:20:12 +0100 Subject: [SciPy-user] optimize.fmin_cg Message-ID: <200703061520.14440.mathias.wagner@physik.tu-darmstadt.de> Hi, I use optimize.fmin_cg to find the minimum of 1-dimensional and n-dimensional functions. For 1-dimensional functions everything is fine, but for n-dimensional functions I get a strange result. The function returns an array-of-array-of-arry depending on the number of iterations, for example I get In [6]:scipy.optimize.fmin_ncg(a.potstmu, [60.0] , a.grad_potstmu,args=(10,0,0),disp=1) Optimization terminated successfully. Current function value: -3464534650.284432 Iterations: 3 Function evaluations: 51 Gradient evaluations: 9 Hessian evaluations: 0 Out[6]:array([[[[[[[[[[[ 121.31834513]]]]]]]]]]]) The function potstmu is quite complicated, I am still searching for an easier example which needs more than 1 iteration. Can anyone confirm this behavior? Or is it a problem with my function? Mathias From nwagner at iam.uni-stuttgart.de Tue Mar 6 09:32:51 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Mar 2007 15:32:51 +0100 Subject: [SciPy-user] optimize.fmin_cg In-Reply-To: <200703061520.14440.mathias.wagner@physik.tu-darmstadt.de> References: <200703061520.14440.mathias.wagner@physik.tu-darmstadt.de> Message-ID: <45ED7B93.6020803@iam.uni-stuttgart.de> Mathias Wagner wrote: > Hi, > > I use optimize.fmin_cg to find the minimum of 1-dimensional and n-dimensional > functions. > > For 1-dimensional functions everything is fine, but for n-dimensional > functions I get a strange result. > > The function returns an array-of-array-of-arry depending on the number of > iterations, for example I get > > In [6]:scipy.optimize.fmin_ncg(a.potstmu, [60.0] , > a.grad_potstmu,args=(10,0,0),disp=1) > Optimization terminated successfully. > Current function value: -3464534650.284432 > Iterations: 3 > Function evaluations: 51 > Gradient evaluations: 9 > Hessian evaluations: 0 > Out[6]:array([[[[[[[[[[[ 121.31834513]]]]]]]]]]]) > > > The function potstmu is quite complicated, I am still searching for an easier > example which needs more than 1 iteration. > Can anyone confirm this behavior? > Or is it a problem with my function? > > > Mathias > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Mathias, Maybe I am missing something, but you can't optimize vector valued functions. For example you can optimize the Rayleigh quotient R(v) = dot(v.T,A*v)/dot(v.T,B*v) \quad R \in \mathds{R},\quad v \in \mathds{R}^n where v is a vector. Please can you expand on your function potstmu. Nils From mathias.wagner at physik.tu-darmstadt.de Tue Mar 6 09:45:18 2007 From: mathias.wagner at physik.tu-darmstadt.de (Mathias Wagner) Date: Tue, 6 Mar 2007 15:45:18 +0100 Subject: [SciPy-user] optimize.fmin_cg In-Reply-To: <45ED7B93.6020803@iam.uni-stuttgart.de> References: <200703061520.14440.mathias.wagner@physik.tu-darmstadt.de> <45ED7B93.6020803@iam.uni-stuttgart.de> Message-ID: <200703061545.19971.mathias.wagner@physik.tu-darmstadt.de> Hi, potstmu is a scalar function. 
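For comparison, a call of the same form on a toy scalar function (just a hypothetical sketch, not my actual potential) comes back as the expected flat array:

from scipy import optimize
import numpy

def f(x):
    # simple quadratic with its minimum at x = 2
    return (x[0] - 2.0)**2

def fprime(x):
    # analytic gradient of f
    return numpy.array([2.0*(x[0] - 2.0)])

xmin = optimize.fmin_ncg(f, [60.0], fprime, disp=1)
# xmin should come back as roughly array([ 2.])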
The optimization works, it is just that the output is not array([xmin]) as expect and also works for a scalar function that is optimized with respect to a n-dimensional (n>1) argument. In this case the result is array([xmin0,xmin1,...]) I will try to find a simple example for which I can also post the function I want to minimize. Mathias On Tuesday 06 March 2007 15:32, you wrote: > Mathias Wagner wrote: > > Hi, > > > > I use optimize.fmin_cg to find the minimum of 1-dimensional and > > n-dimensional functions. > > > > For 1-dimensional functions everything is fine, but for n-dimensional > > functions I get a strange result. > > > > The function returns an array-of-array-of-arry depending on the number of > > iterations, for example I get > > > > In [6]:scipy.optimize.fmin_ncg(a.potstmu, [60.0] , > > a.grad_potstmu,args=(10,0,0),disp=1) > > Optimization terminated successfully. > > Current function value: -3464534650.284432 > > Iterations: 3 > > Function evaluations: 51 > > Gradient evaluations: 9 > > Hessian evaluations: 0 > > Out[6]:array([[[[[[[[[[[ 121.31834513]]]]]]]]]]]) > > > > > > The function potstmu is quite complicated, I am still searching for an > > easier example which needs more than 1 iteration. > > Can anyone confirm this behavior? > > Or is it a problem with my function? > > > > > > Mathias > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > Hi Mathias, > > Maybe I am missing something, but you can't optimize vector valued > functions. > > For example you can optimize the Rayleigh quotient > > R(v) = dot(v.T,A*v)/dot(v.T,B*v) \quad R \in \mathds{R},\quad v \in > \mathds{R}^n > > where v is a vector. Please can you expand on your function potstmu. > > Nils -- // *************************************************************** // ** Mathias Wagner ** // ** Institut fuer Kernphysik, TU Darmstadt ** // ** Schlossgartenstr. 9, 64289 Darmstadt, Germany ** // ** ** // ** email: mathias.wagner at physik.tu-darmstadt.de ** // ** www : http://crunch.ikp.physik.tu-darmstadt.de/~wagner ** // *************************************************************** From oliphant at ee.byu.edu Tue Mar 6 11:19:22 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 06 Mar 2007 09:19:22 -0700 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> Message-ID: <45ED948A.2040708@ee.byu.edu> Stephen Kelly wrote: > Hi, > > Thanks for the reply. > > I don't want to depend on scipy for a small task like this. I want to > keep dependencies small. > > I looked at the searchsorted function, but don't see how it would be > useful. My data is not regularly spaced. It is X-ray diffraction data > in which each intensity has been shifted slightly. I am currently > using arrayfns from Numeric to get data with regular spacing. Could > you give more information on how to interpolate the data? I don't know > where to start. > > As an aside, here's some things in Numeric that aren't in numpy. Is > this oversight, or are there no plans to implement them? > 1. arrayfns module > 2. UserArray module with UserArray class (comparible to UserDict, > UserList). 
> The UserArray module is there (it is imported when you import numpy as numpy.lib.user_array) The UserArray class is called numpy.lib.user_array.container The arrayfns module was not an oversight. Some of the arrayfns module has been implemented except for functions that belong in SciPy. The trend has been to move more things out of NumPy and make SciPy more modular so that pieces can be installed separately. There will always be people who want something moved into NumPy. It is hard to know where to stop. Which function in arrayfns are you missing? -Travis From steveire at gmail.com Tue Mar 6 11:25:13 2007 From: steveire at gmail.com (Stephen Kelly) Date: Tue, 6 Mar 2007 17:25:13 +0100 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45ED948A.2040708@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> Message-ID: <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> I'm missing arrayfns.interp. It would be a shame to have to depend on either Numeric or Scipy for such a trivial but useful function. I'll see about implementing something like it myself. I have the module source code, but installing C modules in python is a little beyond what I know how to do with it. Thanks for the UserArray info. Kind regards, Stephen. On 3/6/07, Travis Oliphant wrote: > > Stephen Kelly wrote: > > Hi, > > > > Thanks for the reply. > > > > I don't want to depend on scipy for a small task like this. I want to > > keep dependencies small. > > > > I looked at the searchsorted function, but don't see how it would be > > useful. My data is not regularly spaced. It is X-ray diffraction data > > in which each intensity has been shifted slightly. I am currently > > using arrayfns from Numeric to get data with regular spacing. Could > > you give more information on how to interpolate the data? I don't know > > where to start. > > > > As an aside, here's some things in Numeric that aren't in numpy. Is > > this oversight, or are there no plans to implement them? > > 1. arrayfns module > > 2. UserArray module with UserArray class (comparible to UserDict, > > UserList). > > > > The UserArray module is there (it is imported when you import numpy as > numpy.lib.user_array) > > The UserArray class is called numpy.lib.user_array.container > > The arrayfns module was not an oversight. Some of the arrayfns module > has been implemented except for functions that belong in SciPy. > > The trend has been to move more things out of NumPy and make SciPy more > modular so that pieces can be installed separately. > > There will always be people who want something moved into NumPy. It is > hard to know where to stop. > > Which function in arrayfns are you missing? > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Tue Mar 6 11:43:27 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 06 Mar 2007 09:43:27 -0700 Subject: [SciPy-user] Arrayfns in numpy? 
In-Reply-To: <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> Message-ID: <45ED9A2F.3050803@ee.byu.edu> Stephen Kelly wrote: > I'm missing arrayfns.interp. It would be a shame to have to depend on > either Numeric or Scipy for such a trivial but useful function. I'll > see about implementing something like it myself. I have the module > source code, but installing C modules in python is a little beyond > what I know how to do with it. It is easy enough to just pull interp out. Some of the functions in arrayfns are in numpy.lib already (in the _compiled_base.c module. Perhaps we could put interp in there as well (it doesn't look too big). A simple 1-d interpolation would probably be a useful thing to have in NumPy. What do others think? -Travis From beckers at orn.mpg.de Tue Mar 6 11:56:05 2007 From: beckers at orn.mpg.de (Gabriel J.L. Beckers) Date: Tue, 06 Mar 2007 17:56:05 +0100 Subject: [SciPy-user] mean of recarray Message-ID: <1173200165.19821.8.camel@beckerspc> Is there an easy way to get the mean of a record array? There is a mean() method, but it doesn't seem to work for me. In [1]: import numpy In [2]: desc = numpy.dtype({'names':['a','b'],'formats':['d','d']}) In [3]: ar = numpy.array([(1.,2.),(3.,4.)],dtype=desc) In [4]: ar.mean() --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/user/ TypeError: cannot perform reduce with flexible type I would actually be able to get the mean along the second axis, like ar.mean(1). Can this be done in a simple way? Cheers, Gabriel From paul.ray at nrl.navy.mil Tue Mar 6 12:02:19 2007 From: paul.ray at nrl.navy.mil (Paul Ray) Date: Tue, 6 Mar 2007 12:02:19 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 43, Issue 9 In-Reply-To: References: Message-ID: <5F553E0C-06F7-4FBF-8117-2CEBF9AD705D@nrl.navy.mil> On Mar 6, 2007, at 11:51 AM, scipy-user-request at scipy.org wrote: > It is easy enough to just pull interp out. Some of the functions > in arrayfns are in numpy.lib already (in the _compiled_base.c module. > Perhaps we could put interp in there as well (it doesn't look too > big). A simple 1-d interpolation would probably be a useful thing > to have in NumPy. What do others think? It sounds good to me. I often have to regrid irregularly spaced data onto a uniform grid, which I use interpolate.splrep and interpolate.splev from scipy to do. This is often the only dependency on scipy in my codes that otherwise use only numpy. I'd love having a 1-d interp in numpy. Cheers, -- Paul From peridot.faceted at gmail.com Tue Mar 6 12:10:58 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 6 Mar 2007 12:10:58 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 43, Issue 9 In-Reply-To: <5F553E0C-06F7-4FBF-8117-2CEBF9AD705D@nrl.navy.mil> References: <5F553E0C-06F7-4FBF-8117-2CEBF9AD705D@nrl.navy.mil> Message-ID: On 06/03/07, Paul Ray wrote: > > On Mar 6, 2007, at 11:51 AM, scipy-user-request at scipy.org wrote: > > > It is easy enough to just pull interp out. Some of the functions > > in arrayfns are in numpy.lib already (in the _compiled_base.c module. > > Perhaps we could put interp in there as well (it doesn't look too > > big). 
A simple 1-d interpolation would probably be a useful thing > > to have in NumPy. What do others think? > > It sounds good to me. I often have to regrid irregularly spaced data > onto a uniform grid, which I use interpolate.splrep and > interpolate.splev from scipy to do. This is often the only > dependency on scipy in my codes that otherwise use only numpy. I'd > love having a 1-d interp in numpy. Why not just use scipy? splev works fine, and numpy doesn't need to grow any more... it seems to me that numpy should include just the basics of working with arrays (indexing functions, ufuncs, matrix multiplication, things like searchsorted) and all the mathematics (ffts, linear algebra, interpolation, integration) should go in scipy. Anne M. Archibald From oliphant at ee.byu.edu Tue Mar 6 12:24:43 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 06 Mar 2007 10:24:43 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 43, Issue 9 In-Reply-To: References: <5F553E0C-06F7-4FBF-8117-2CEBF9AD705D@nrl.navy.mil> Message-ID: <45EDA3DB.7030907@ee.byu.edu> Anne Archibald wrote: > On 06/03/07, Paul Ray wrote: > >> On Mar 6, 2007, at 11:51 AM, scipy-user-request at scipy.org wrote: >> >> >>> It is easy enough to just pull interp out. Some of the functions >>> in arrayfns are in numpy.lib already (in the _compiled_base.c module. >>> Perhaps we could put interp in there as well (it doesn't look too >>> big). A simple 1-d interpolation would probably be a useful thing >>> to have in NumPy. What do others think? >>> >> It sounds good to me. I often have to regrid irregularly spaced data >> onto a uniform grid, which I use interpolate.splrep and >> interpolate.splev from scipy to do. This is often the only >> dependency on scipy in my codes that otherwise use only numpy. I'd >> love having a 1-d interp in numpy. >> > > Why not just use scipy? splev works fine, and numpy doesn't need to > grow any more... it seems to me that numpy should include just the > basics of working with arrays (indexing functions, ufuncs, matrix > multiplication, things like searchsorted) and all the mathematics > (ffts, linear algebra, interpolation, integration) should go in scipy. > That is generally what we believe as well. The problem is that Numeric already included several additional features. Trying to maintain some semblance of backward compatibility is why NumPy has not shrunk even more. Because the interp function was already in Numeric and is not that big, perhaps it should be added to NumPy. -Travis From Glen.Mabey at swri.org Tue Mar 6 13:02:52 2007 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Tue, 6 Mar 2007 12:02:52 -0600 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45ED9A2F.3050803@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> Message-ID: <20070306180252.GA25007@swri16wm.electro.swri.edu> On Tue, Mar 06, 2007 at 09:43:27AM -0700, Travis Oliphant wrote: > Perhaps we could put interp in there as well (it doesn't look too big). > A simple 1-d interpolation would probably be a useful thing to have in > NumPy. What do others think? +1 From souheil.inati at nyu.edu Tue Mar 6 14:15:15 2007 From: souheil.inati at nyu.edu (Souheil Inati) Date: Tue, 6 Mar 2007 14:15:15 -0500 Subject: [SciPy-user] Arrayfns in numpy? 
In-Reply-To: <20070306180252.GA25007@swri16wm.electro.swri.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> Message-ID: -100 I am strongly opposed to mixing computations into NumPY. Why is 1-d interpolation good to have inside of NumPy? why not nd-interpolation? why not fft? or SVD? or some linear algebra thing? Python is good because of it lets you have a lot of granularity. NumPy should stick to what it's good at. And it should be very good at a small number of things, namely it should be a library for efficient storage and access to data that is well represented by N- dimensional arrays. --------------------------------- Souheil Inati, PhD Assistant Professor Center for Neural Science New York University 4 Washington Place, Room 809 New York, N.Y., 10003-6621 Office: (212) 998-3741 Email: souheil.inati at nyu.edu -Souheil On Mar 6, 2007, at 1:02 PM, Glen W. Mabey wrote: > On Tue, Mar 06, 2007 at 09:43:27AM -0700, Travis Oliphant wrote: >> Perhaps we could put interp in there as well (it doesn't look too >> big). >> A simple 1-d interpolation would probably be a useful thing to >> have in >> NumPy. What do others think? > > +1 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From nmarais at sun.ac.za Tue Mar 6 14:34:14 2007 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 06 Mar 2007 21:34:14 +0200 Subject: [SciPy-user] Using umfpack to calculate a incomplete LU factorisation (ILU) Message-ID: I'm trying to get an incomplete LU factorisation of a sparse matrix using scipy.linsolve.splu. It's using umfpack as far as I can tell. I'm telling it to drop elements using the drop_tol keyword argument, though it seems to be having no effect: In [135]: splu(A, drop_tol=1.0).nnz Use minimum degree ordering on A'+A. Out[135]: 6894814 In [136]: splu(A, drop_tol=0.00001).nnz Use minimum degree ordering on A'+A. Out[136]: 6894814 In [137]: splu(A, drop_tol=1000).nnz Use minimum degree ordering on A'+A. Out[137]: 6894814 In [138]: splu(A, drop_tol=10000000000.).nnz Use minimum degree ordering on A'+A. Out[138]: 6894814 In [139]: splu(A, drop_tol=100000000000000000000000.).nnz Use minimum degree ordering on A'+A. Out[139]: 6894814 In [140]: splu(A, drop_tol=0.000000000000000001).nnz Use minimum degree ordering on A'+A. Out[140]: 6894814 Am I doing or understanding something worng? I'm using a recent SVN build: In [19]: scipy.version.version Out[20]: '0.5.3.dev2827' Any help, or pointers to better routines better suited to ILU generation welcomed! Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From vallis.35530172 at bloglines.com Tue Mar 6 14:37:31 2007 From: vallis.35530172 at bloglines.com (vallis.35530172 at bloglines.com) Date: 6 Mar 2007 19:37:31 -0000 Subject: [SciPy-user] Arrayfns in numpy? Message-ID: <1173209851.1509303837.21637.sendItem@bloglines.com> +10 This is not about adding new functionality to numpy. It's about keeping something that was in Numeric (arrayfns.interp, in particular), and that was taken out for numpy. 
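For the linear case, the replacement one would have to hand-roll looks roughly like this (a rough numpy-only sketch, assuming x is sorted, the query points lie strictly inside its range, and the data are floats; the argument order here is my own choice, not necessarily what arrayfns used):

import numpy

def linear_interp(new_x, x, y):
    # index of the right-hand neighbour of each query point
    hi = numpy.searchsorted(x, new_x)
    lo = hi - 1
    # fractional position between the two neighbouring samples
    t = (new_x - x[lo]) / (x[hi] - x[lo])
    return y[lo] + t * (y[hi] - y[lo])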
Whereas porting code to numpy for Numeric is trivial in most cases, if the code included arrayfns.interp then a dependency on scipy must be added (which is a big deal!), or a new interpolation function must be written from scratch. To make it obvious that the new numpy should not include computations, perhaps this functionality should sit in the oldnumeric compatibility layer. Would that be satisfactory to Souheil? Michele --- SciPy Users List > I am strongly opposed to mixing computations into NumPY. Why is 1-d > interpolation good to have inside of NumPy? why not nd-interpolation? > why not fft? or SVD? or some linear algebra thing? > > Python is good because of it lets you have a lot of granularity. > > NumPy should stick to what it's good at. And it should be very good > at a small number of things, namely it should be a library for > efficient storage and access to data that is well represented by N- > dimensional arrays. > > --------------------------------- > > Souheil Inati, PhD > Assistant Professor > Center for Neural Science > New York University > 4 Washington Place, Room 809 > New York, N.Y., 10003-6621 > Office: (212) 998-3741 > Email: souheil.inati at nyu.edu > > > > -Souheil > > On Mar 6, 2007, at 1:02 PM, Glen W. Mabey wrote: > > > On Tue, Mar 06, 2007 at 09:43:27AM -0700, Travis Oliphant wrote: > >> Perhaps we could put interp in there as well (it doesn't look too > >> big). > >> A simple 1-d interpolation would probably be a useful thing to > >> have in > >> NumPy. What do others think? > > > > +1 > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Tue Mar 6 15:11:34 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Mar 2007 21:11:34 +0100 Subject: [SciPy-user] Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: References: Message-ID: On Tue, 06 Mar 2007 21:34:14 +0200 Neilen Marais wrote: > I'm trying to get an incomplete LU factorisation of a >sparse matrix using > scipy.linsolve.splu. It's using umfpack as far as I can >tell. I'm telling > it to drop elements using the drop_tol keyword argument, >though it seems > to be having no effect: > > In [135]: splu(A, drop_tol=1.0).nnz > Use minimum degree ordering on A'+A. > Out[135]: 6894814 > > In [136]: splu(A, drop_tol=0.00001).nnz > Use minimum degree ordering on A'+A. > Out[136]: 6894814 > > In [137]: splu(A, drop_tol=1000).nnz > Use minimum degree ordering on A'+A. > Out[137]: 6894814 > > In [138]: splu(A, drop_tol=10000000000.).nnz > Use minimum degree ordering on A'+A. > Out[138]: 6894814 > > In [139]: splu(A, >drop_tol=100000000000000000000000.).nnz > Use minimum degree ordering on A'+A. > Out[139]: 6894814 > > In [140]: splu(A, drop_tol=0.000000000000000001).nnz > Use minimum degree ordering on A'+A. > Out[140]: 6894814 > > Am I doing or understanding something worng? I'm using a >recent SVN build: > > In [19]: scipy.version.version > Out[20]: '0.5.3.dev2827' > > Any help, or pointers to better routines better suited >to ILU generation > welcomed! 
> > Thanks > Neilen > > -- > you know its kind of tragic > we live in the new world > but we've lost the magic > -- Battery 9 (www.battery9.co.za) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Neilen, You might be interested in ILUPACK http://www.math.tu-berlin.de/ilupack/ Nils From thn at cs.utexas.edu Tue Mar 6 16:27:42 2007 From: thn at cs.utexas.edu (Thomas Nelson) Date: Tue, 6 Mar 2007 15:27:42 -0600 (CST) Subject: [SciPy-user] probability tools in scipy Message-ID: I'm looking for a function something like: normalcdf(lo,hi,mu,sigma) that returns the probability that lo < X < hi where X is a normal random variable with mean = mu and standard deviation = sigma. Is there something like this, or a tool to make something like this, in scipy? I looked at the scipy.stats.distributions module, but as far as I can tell that seems to be more of a random number generator. Am I correct about this? does scipy have something like the functionality I'm looking for? Thanks for your time, Thomas N. From jks at iki.fi Tue Mar 6 16:37:07 2007 From: jks at iki.fi (=?iso-8859-1?Q?Jouni_K=2E_Sepp=E4nen?=) Date: Tue, 06 Mar 2007 23:37:07 +0200 Subject: [SciPy-user] probability tools in scipy References: Message-ID: Thomas Nelson writes: > normalcdf(lo,hi,mu,sigma) > that returns the probability that > lo < X < hi > where X is a normal random variable with mean = mu and standard deviation > = sigma. Look at scipy.stats.norm.cdf. I think this is one way to define your function: def normalcdf(lo, hi, mu, sigma): return numpy.dot([-1, 1], scipy.stats.norm.cdf([lo, hi], mu, sigma)) -- Jouni K. Sepp?nen http://www.iki.fi/jks From robert.kern at gmail.com Tue Mar 6 16:40:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 06 Mar 2007 15:40:05 -0600 Subject: [SciPy-user] probability tools in scipy In-Reply-To: References: Message-ID: <45EDDFB5.2060701@gmail.com> Thomas Nelson wrote: > I'm looking for a function something like: > > normalcdf(lo,hi,mu,sigma) > that returns the probability that > lo < X < hi > where X is a normal random variable with mean = mu and standard deviation > = sigma. Is there something like this, or a tool to make something like > this, in scipy? I looked at the scipy.stats.distributions module, but as > far as I can tell that seems to be more of a random number generator. Am > I correct about this? does scipy have something like the functionality > I'm looking for? from scipy import stats stats.norm.cdf(hi, mu, sigma) - stats.norm.cdf(lo, mu, sigma) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From thn at cs.utexas.edu Tue Mar 6 17:06:33 2007 From: thn at cs.utexas.edu (Thomas Nelson) Date: Tue, 6 Mar 2007 16:06:33 -0600 (CST) Subject: [SciPy-user] probability tools in scipy In-Reply-To: <45EDDFB5.2060701@gmail.com> References: <45EDDFB5.2060701@gmail.com> Message-ID: Thanks, this is exactly what I needed. On Tue, 6 Mar 2007, Robert Kern wrote: > Thomas Nelson wrote: >> I'm looking for a function something like: >> >> normalcdf(lo,hi,mu,sigma) >> that returns the probability that >> lo < X < hi >> where X is a normal random variable with mean = mu and standard deviation >> = sigma. Is there something like this, or a tool to make something like >> this, in scipy? 
I looked at the scipy.stats.distributions module, but as >> far as I can tell that seems to be more of a random number generator. Am >> I correct about this? does scipy have something like the functionality >> I'm looking for? > > from scipy import stats > > stats.norm.cdf(hi, mu, sigma) - stats.norm.cdf(lo, mu, sigma) > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From chanley at stsci.edu Tue Mar 6 21:07:58 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 06 Mar 2007 21:07:58 -0500 Subject: [SciPy-user] PyFITS 1.1 "candidate" RELEASE Message-ID: <45EE1E7E.4050900@stsci.edu> ------------------ | PYFITS Release | ------------------ Space Telescope Science Institute is pleased to announce the candidate release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at: http://www.stsci.edu/resources/software_hardware/pyfits/Download If you encounter bugs, please send bug reports to "help at stsci.edu". We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 6 months. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. ----------- | Version | ----------- Version 1.1rc1; March 6, 2007 ------------------------------- | Major Changes since v1.1b3 | ------------------------------- * The NUMPY version of PyFITS now supports FITS group data. * The NUMPY version of PyFITS now supports variable length column tables. * Keyboard interrupts are now blocked during a file flush. * Many minor bug fixes. ------------------------- | Software Requirements | ------------------------- PyFITS Version 1.1rc1 REQUIRES: * Python 2.3 or later * NUMPY 1.0.1 (or later) or NUMARRAY --------------------- | Installing PyFITS | --------------------- PyFITS 1.1rc1 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing % python setup.py install to install it in Python's site-packages directory. Alternatively the command % python setup.py install --local="/destination/directory/" will install PyFITS in an arbitrary directory which should be placed on PYTHONPATH. Once numarray or numpy has been installed, then PyFITS should be available for use under Python. ----------------- | Download Site | ----------------- http://www.stsci.edu/resources/software_hardware/pyfits/Download ---------- | Usage | ---------- Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. Set NUMERIX to 'numpy' for the NUMPY version of PyFITS. If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed the NUMERIX value is used to decide which version to import. If no NUMERIX value is set then the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import.
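For example, a minimal sketch of selecting the NumPy version from inside a script (assuming NUMERIX is read from os.environ when pyfits is first imported, as described above):

import os
# choose the array package before pyfits is first imported
os.environ['NUMERIX'] = 'numpy'
import pyfits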
--------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like the "bleeding" edge of PyFITS can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki From oliphant at ee.byu.edu Wed Mar 7 02:13:26 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 07 Mar 2007 00:13:26 -0700 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> Message-ID: <45EE6616.4080306@ee.byu.edu> > -100 > > I am strongly opposed to mixing computations into NumPY. Why is 1-d > interpolation good to have inside of NumPy? why not nd-interpolation? > why not fft? or SVD? or some linear algebra thing? > I agree with the idea of the statement. However, practicality may beat purity here because of the backward-compatibility issue. The only proposal on the table is pulling in more functions from arrayfns. I'm fine if they live in the oldnumeric name-space (but it will actually be easier to put them in the lib namespace because the arrayfnsmodule.c file became the _compiled_base.c file in numpy.lib. Perhaps they can still live in _compiled_base.c but not be pulled in to numpy.lib (only a oldnumeric.arrayfns module). -Travis From peridot.faceted at gmail.com Wed Mar 7 02:10:28 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 7 Mar 2007 02:10:28 -0500 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45EE6616.4080306@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> <45EE6616.4080306@ee.byu.edu> Message-ID: On 07/03/07, Travis Oliphant wrote: > I agree with the idea of the statement. However, practicality may beat > purity here because of the backward-compatibility issue. The only > proposal on the table is pulling in more functions from arrayfns. I'm > fine if they live in the oldnumeric name-space (but it will actually be > easier to put them in the lib namespace because the arrayfnsmodule.c > file became the _compiled_base.c file in numpy.lib. > > Perhaps they can still live in _compiled_base.c but not be pulled in to > numpy.lib (only a oldnumeric.arrayfns module). I like this idea - can it be a general policy for things that shouldn't be in numpy but are included for backward compatibility? Anne From fperez.net at gmail.com Wed Mar 7 02:46:36 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Mar 2007 00:46:36 -0700 Subject: [SciPy-user] Arrayfns in numpy? 
In-Reply-To: <45EE6616.4080306@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> <45EE6616.4080306@ee.byu.edu> Message-ID: On 3/7/07, Travis Oliphant wrote: > > > -100 > > > > I am strongly opposed to mixing computations into NumPY. Why is 1-d > > interpolation good to have inside of NumPy? why not nd-interpolation? > > why not fft? or SVD? or some linear algebra thing? > > > I agree with the idea of the statement. However, practicality may beat > purity here because of the backward-compatibility issue. The only > proposal on the table is pulling in more functions from arrayfns. I'm > fine if they live in the oldnumeric name-space (but it will actually be > easier to put them in the lib namespace because the arrayfnsmodule.c > file became the _compiled_base.c file in numpy.lib. > > Perhaps they can still live in _compiled_base.c but not be pulled in to > numpy.lib (only a oldnumeric.arrayfns module). -1. If they are going to go in, make them first-class citizens. For better or worse, there are functional units in numpy that go beyond pure array basics: linalg, fft are the leading examples. Not putting these in would break so much old Numeric code that it really would have been unacceptable. So if the argument for the interpolation stuff is the same, then it should go also as part of numpy. IMO, oldnumeric is a way to keep the old *APIs* for things that can be done differently (often better) in numpy, as a way of easing the transition of old codes. But if you put interpolation in oldnumeric, then /everyone/ who wants interpolation and doesn't have scipy around is going to pull oldnumeric in. And that will have the unintended side effect of basically promoting oldnumeric as a first-class API rather than a backwards-compatibility layer for codes that haven't been properly 'numpyfied'. Cheers, f From fperez.net at gmail.com Wed Mar 7 02:53:25 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Mar 2007 00:53:25 -0700 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> Message-ID: On 3/6/07, Souheil Inati wrote: > -100 > > I am strongly opposed to mixing computations into NumPY. Why is 1-d > interpolation good to have inside of NumPy? why not nd-interpolation? > why not fft? or SVD? or some linear algebra thing? We're already well past that point; ffts, svds and linear algebra are all in there already, and won't be removed anytime soon: In [1]: import numpy as N In [2]: N.linalg? 
Type: module Base Class: Namespace: Interactive File: /home/fperez/tmp/local/lib/python2.4/site-packages/numpy/linalg/__init__.py Docstring: Core Linear Algebra Tools =========== Linear Algebra Basics: norm --- Vector or matrix norm inv --- Inverse of a square matrix solve --- Solve a linear system of equations det --- Determinant of a square matrix lstsq --- Solve linear least-squares problem pinv --- Pseudo-inverse (Moore-Penrose) using lstsq Eigenvalues and Decompositions: eig --- Eigenvalues and vectors of a square matrix eigh --- Eigenvalues and eigenvectors of a Hermitian matrix eigvals --- Eigenvalues of a square matrix eigvalsh --- Eigenvalues of a Hermitian matrix. svd --- Singular value decomposition of a matrix cholesky --- Cholesky decomposition of a matrix In [3]: N.fft? Type: module Base Class: Namespace: Interactive File: /home/fperez/tmp/local/lib/python2.4/site-packages/numpy/fft/__init__.py Docstring: Core FFT routines ================== Standard FFTs fft ifft fft2 ifft2 fftn ifftn etc... Numpy is intended to be a reasonable replacement of all the *functionality* of Numeric. I think Travis tried very hard to make sure that if an old code could run only on Numeric (without SciPy), then it could also run on numpy (possibly after some manual work). As much as I pushed for breaking all backwards compatibility for the sake of *clean APIs* in numpy, I think it's a good decision to keep b.compat. in terms of functionality. Otherwise a number of projects could be stuck on Numeric if for whatever reason they consider SciPy too large/complex a dependency. Cheers, f From robert.kern at gmail.com Wed Mar 7 02:53:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 07 Mar 2007 01:53:48 -0600 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> <45EE6616.4080306@ee.byu.edu> Message-ID: <45EE6F8C.8070408@gmail.com> Fernando Perez wrote: > On 3/7/07, Travis Oliphant wrote: >>> -100 >>> >>> I am strongly opposed to mixing computations into NumPY. Why is 1-d >>> interpolation good to have inside of NumPy? why not nd-interpolation? >>> why not fft? or SVD? or some linear algebra thing? >>> >> I agree with the idea of the statement. However, practicality may beat >> purity here because of the backward-compatibility issue. The only >> proposal on the table is pulling in more functions from arrayfns. I'm >> fine if they live in the oldnumeric name-space (but it will actually be >> easier to put them in the lib namespace because the arrayfnsmodule.c >> file became the _compiled_base.c file in numpy.lib. >> >> Perhaps they can still live in _compiled_base.c but not be pulled in to >> numpy.lib (only a oldnumeric.arrayfns module). > > -1. If they are going to go in, make them first-class citizens. Except for interp, those functions really don't deserve to be first class. I can't even understand what most of them do on a first (and second and third, frankly) pass through their comments. 
http://fresh.t-systems-sfr.com/unix/src/privat2/gmath-0.3.tar.gz:a/GMatH/lib/NumPySession/doc/NumPy/module-arrayfns.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Wed Mar 7 03:12:37 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 7 Mar 2007 01:12:37 -0700 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45EE6F8C.8070408@gmail.com> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <20070306180252.GA25007@swri16wm.electro.swri.edu> <45EE6616.4080306@ee.byu.edu> <45EE6F8C.8070408@gmail.com> Message-ID: On 3/7/07, Robert Kern wrote: > > -1. If they are going to go in, make them first-class citizens. > > Except for interp, those functions really don't deserve to be first class. I > can't even understand what most of them do on a first (and second and third, > frankly) pass through their comments. Well, then maybe they just don't belong in the code at all if they are so bad? My take on it is just a simple one: if it's functionally unique, it shouldn't be in oldnumeric because then people will pull it out of there forever, because no amount of 'numpyfying' their code will give them that particular feature, since it's not an API issue. Hopefully oldnumeric is a module whose usage should decrease with time, and I'm trying to avoid something that would prevent that from being possible. But that's why I qualified my statement with '*if* they are going in'. Perhaps the right solution is for them NOT to go in, if they are really bad enough. I'll shut up now, ultimately I won't be using this anyway :) Cheers, f From h.schulz at fzd.de Wed Mar 7 04:26:58 2007 From: h.schulz at fzd.de (Schulz, Henrik) Date: Wed, 07 Mar 2007 10:26:58 +0100 Subject: [SciPy-user] Problem with scipy.integrate Message-ID: <03199d44edb6d343943cd00587129f77@fz-rossendorf.de> Hi all, I have a problem with the installation of Scipy, especially using the fblas-libraries: After I fixed the problem described here: http://cens.ioc.ee/~pearu/scipy/INSTALL.html#blas-sources-shipped-with-l apack-are-incomplete by downloading blas and compiling with: ifc -shared -O3 -unroll -w90 -w95 -cm -fPIC -FI *.f -o fblas.so I get the following messages when trying to import scipy.integrate: python Python 2.3.4 (#1, Oct 9 2006, 18:28:26) [GCC 3.4.4 20050721 (Red Hat 3.4.4-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy.integrate Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.3/site-packages/scipy/integrate/__init__.py", line 9, in ? from quadrature import * File "/usr/lib64/python2.3/site-packages/scipy/integrate/quadrature.py", line 8, in ? from scipy.special.orthogonal import p_roots File "/usr/lib64/python2.3/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/lib64/python2.3/site-packages/scipy/special/orthogonal.py", line 65, in ? from scipy.linalg import eig File "/usr/lib64/python2.3/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/lib64/python2.3/site-packages/scipy/linalg/basic.py", line 227, in ? 
import decomp File "/usr/lib64/python2.3/site-packages/scipy/linalg/decomp.py", line 21, in ? from blas import get_blas_funcs File "/usr/lib64/python2.3/site-packages/scipy/linalg/blas.py", line 14, in ? from scipy.linalg import fblas ImportError: dynamic module does not define init function (initfblas) What can I do to solve this problem? Thanks! Henrik From cimrman3 at ntc.zcu.cz Wed Mar 7 04:45:01 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 07 Mar 2007 10:45:01 +0100 Subject: [SciPy-user] Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: References: Message-ID: <45EE899D.9040304@ntc.zcu.cz> Neilen Marais wrote: > I'm trying to get an incomplete LU factorisation of a sparse matrix using > scipy.linsolve.splu. It's using umfpack as far as I can tell. I'm telling > it to drop elements using the drop_tol keyword argument, though it seems > to be having no effect: Just one correction: linsolve.splu uses SuperLU function *gstrf, so if the threshold does not work, the problem is there... Otherwise you can get LU factors with UmfpackContext.lu, but those are always complete. cheers, r. From davbrow at gmail.com Wed Mar 7 07:00:27 2007 From: davbrow at gmail.com (Dave) Date: Wed, 7 Mar 2007 04:00:27 -0800 Subject: [SciPy-user] Arrayfns in numpy? Message-ID: <588f52020703070400x316a2677x1c07a6ce441b1dcf@mail.gmail.com> On Tue, 06 Mar 2007 Travis Oliphant wrote: >That is generally what we believe as well. The problem is that Numeric >already included several additional features. Trying to maintain some >semblance of backward compatibility is why NumPy has not shrunk even more. >Because the interp function was already in Numeric and is not that big, >perhaps it should be added to NumPy. I just realized I have been struggling to install scipy, as it turns out, just to use an interpolation function. I usually try to minimize dependencies for anything I plan to distribute to others. My current experience seems to support arguments for both sides on whether to add interpolation to numpy. As I see it there are three useful levels of array functionality. First, python itself should ideally allow for handling array objects with ease and efficiency, and promote "...one way to do it..." as per python dogma. Hopefully ndarray added to the standard library will make this more of a reality some day. Second there are common math functions and manipulations over arrays that are potentially useful for pedestrian as well as esoteric applications. Sin, cos, basic integration and differentiation, matrix and vector math, simple statistics, etc., are all reasonably within this category. Hobbyist game programmers as well as scientists can use them. Basic 1d and perhaps 2d interpolation are also within this level of functionality in my opinion. The capabilities and the API need to be stable and extremely reliable. Numpy today provides a relatively rich and efficient package with many of these features and properties. There is no hard cutoff but obscure (to me) probability distributions, simulation frameworks, image processing functions named after a living person, field-specific packages, etc, are all wonderful to have when you need them but are best suited for the third level of functionality, scipy. Sophisticated packages will have dependencies and might be updated often or have API changes as capabilities evolve. It makes sense to isolate this from the more generic and stable features in numpy. 
Included in numpy for the important practical goal of backward compatibility are objects like, I would guess, numpy.kaiser and others that don't seem to fit well the stated divisions. If interpolation is in Numeric then numpy is at least deserving of a compatibility function. But as a user without legacy requirements I would prefer one designed for numpy and for future applications. The other aspect of the argument concerns the dependency issue. I have not been able to use the scipy.interpolate module because of what appears to be some dependency that I can't resolve. I certainly don't want the problem moved to numpy and I sympathize very much with "less is more" arguments in this respect. So I would like to have available basic interpolation features in numpy if the functions can be added without adding new dependencies. In general I don't see a problem with adding select numerical features if it is done carefully and with costs/benefits of the whole package in mind. In summary: +1 interpolation for numpy, -1 new dependencies for numpy, +1 balanced practical approach to adding numpy features -- David From cimrman3 at ntc.zcu.cz Wed Mar 7 07:09:19 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 07 Mar 2007 13:09:19 +0100 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <588f52020703070400x316a2677x1c07a6ce441b1dcf@mail.gmail.com> References: <588f52020703070400x316a2677x1c07a6ce441b1dcf@mail.gmail.com> Message-ID: <45EEAB6F.5070501@ntc.zcu.cz> Dave wrote: > On Tue, 06 Mar 2007 Travis Oliphant wrote: >> That is generally what we believe as well. The problem is that Numeric >> already included several additional features. Trying to maintain some >> semblance of backward compatibility is why NumPy has not shrunk even more. > >> Because the interp function was already in Numeric and is not that big, >> perhaps it should be added to NumPy. > > I just realized I have been struggling to install scipy, as it turns > out, just to use an interpolation function. I usually try to minimize > dependencies for anything I plan to distribute to others. My current > experience seems to support arguments for both sides on whether to add > interpolation to numpy. > > As I see it there are three useful levels of array functionality. > First, python itself should ideally allow for handling array objects > with ease and efficiency, and promote "...one way to do it..." as per > python dogma. Hopefully ndarray added to the standard library will > make this more of a reality some day. > > Second there are common math functions and manipulations over arrays > that are potentially useful for pedestrian as well as esoteric > applications. Sin, cos, basic integration and differentiation, > matrix and vector math, simple statistics, etc., are all reasonably > within this category. Hobbyist game programmers as well as scientists > can use them. Basic 1d and perhaps 2d interpolation are also within > this level of functionality in my opinion. The capabilities and the > API need to be stable and extremely reliable. Numpy today provides a > relatively rich and efficient package with many of these features and > properties. > > There is no hard cutoff but obscure (to me) probability distributions, > simulation frameworks, image processing functions named after a living > person, field-specific packages, etc, are all wonderful to have when > you need them but are best suited for the third level of > functionality, scipy. 
Sophisticated packages will have dependencies > and might be updated often or have API changes as capabilities evolve. > It makes sense to isolate this from the more generic and stable > features in numpy. > > Included in numpy for the important practical goal of backward > compatibility are objects like, I would guess, numpy.kaiser and others > that don't seem to fit well the stated divisions. If interpolation is > in Numeric then numpy is at least deserving of a compatibility > function. But as a user without legacy requirements I would prefer > one designed for numpy and for future applications. > > The other aspect of the argument concerns the dependency issue. I > have not been able to use the scipy.interpolate module because of what > appears to be some dependency that I can't resolve. I certainly don't > want the problem moved to numpy and I sympathize very much with "less > is more" arguments in this respect. So I would like to have available > basic interpolation features in numpy if the functions can be added > without adding new dependencies. In general I don't see a problem > with adding select numerical features if it is done carefully and with > costs/benefits of the whole package in mind. > > In summary: > +1 interpolation for numpy, > -1 new dependencies for numpy, > +1 balanced practical approach to adding numpy features As Dave correctly mentioned, there are three levels of functionality: 1. core of numpy (ndarray) - basic array handling 2. 'general purpose' functions (like interp, linalg.*, ...) with no external dependencies, no fortran, easy to install 3. full scipy Currently, 1. and 2. are in one package (numpy) - why not make two? r. From souheil.inati at nyu.edu Wed Mar 7 08:58:16 2007 From: souheil.inati at nyu.edu (Souheil Inati) Date: Wed, 7 Mar 2007 08:58:16 -0500 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45EEAB6F.5070501@ntc.zcu.cz> References: <588f52020703070400x316a2677x1c07a6ce441b1dcf@mail.gmail.com> <45EEAB6F.5070501@ntc.zcu.cz> Message-ID: <798643F3-E8AA-42EB-99BB-00116F221BC2@nyu.edu> I agree with Dave and Robert's view of the three layers of functionality. I am agnostic as to whether layers 1 and 2 should be one package or two as long as the API for both is extremely stable. In this scheme, the old numeric compatibility layer is part of level 2. -Souheil On Mar 7, 2007, at 7:09 AM, Robert Cimrman wrote: > Dave wrote: >> On Tue, 06 Mar 2007 Travis Oliphant wrote: >>> That is generally what we believe as well. The problem is that >>> Numeric >>> already included several additional features. Trying to >>> maintain some >>> semblance of backward compatibility is why NumPy has not shrunk >>> even more. >> >>> Because the interp function was already in Numeric and is not >>> that big, >>> perhaps it should be added to NumPy. >> >> >snip >> >> In summary: >> +1 interpolation for numpy, >> -1 new dependencies for numpy, >> +1 balanced practical approach to adding numpy features > > As Dave correctly mentioned, there are three levels of functionality: > 1. core of numpy (ndarray) - basic array handling > 2. 'general purpose' functions (like interp, linalg.*, ...) with no > external dependencies, no fortran, easy to install > 3. full scipy > > Currently, 1. and 2. are in one package (numpy) - why not make two? > > r. From matthieu.brucher at gmail.com Wed Mar 7 09:17:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 7 Mar 2007 15:17:09 +0100 Subject: [SciPy-user] Arrayfns in numpy? 
In-Reply-To: <45ED9A2F.3050803@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> Message-ID: 2007/3/6, Travis Oliphant : > > Perhaps we could put interp in there as well (it doesn't look too big). > A simple 1-d interpolation would probably be a useful thing to have in > NumPy. What do others think? > > -Travis I'm opposed to this as well, interpolation is not linear algebra, it is signal processing, and as Souheil said, why not nd-interpolation with B-Splines then ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Mar 7 10:14:39 2007 From: aisaac at american.edu (Alan Isaac) Date: Wed, 07 Mar 2007 10:14:39 -0500 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> Message-ID: <45EED6DF.9020502@american.edu> Matthieu Brucher wrote: > I'm opposed to this as well, interpolation is not linear algebra, it is > signal processing, and as Souheil said, why not nd-interpolation with > B-Splines then ? I am not taking sides on this issue, but I do take issue with trying to settle it with this kind of "argument". A rhetorical question does not set out the issues. For example, treating a rhetorical question as a real question for a moment, one might respond that a simple 1-d interpolation would - find wide use among people who need nothing more - enhance backward compatibility - impose minimum maintenance requirements Again, I am not taking sides on the issue, so I am in no way suggesting that such points are decisive. However opposition that fails to address such points is hardly decisive either. My point of reference would be: if it is likely to drain even modest developer effort away from core NumPy issues, then it is harmful. Otherwise it is harmless as long as it introduces no new dependencies. Alan Isaac From matthieu.brucher at gmail.com Wed Mar 7 11:03:51 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 7 Mar 2007 17:03:51 +0100 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45EED6DF.9020502@american.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <45EED6DF.9020502@american.edu> Message-ID: > > Again, I am not taking sides on the issue, so I > am in no way suggesting that such points are > decisive. However opposition that fails to > address such points is hardly decisive either. I read the end of the discussion and I see your point ;) My point of reference would be: if it is likely > to drain even modest developer effort away from > core NumPy issues, then it is harmful. Otherwise > it is harmless as long as it introduces no new > dependencies. I think that separating interpolation in two parts can be harmful, but as it is needed for a backward compatiblity reason, it can be understood. 
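For concreteness, the simple 1-d interpolation being debated here is piecewise-linear
lookup of the Numeric interp kind; a minimal sketch, assuming the function ends up
exposed as numpy.interp as proposed (the sample data are made up):

import numpy

xp = numpy.array([0.0, 1.0, 2.0, 4.0])   # known sample positions
fp = numpy.array([0.0, 2.0, 1.0, 5.0])   # known sample values
x = numpy.array([0.5, 1.5, 3.0])         # points to interpolate at

print(numpy.interp(x, xp, fp))           # piecewise-linear: [ 1.   1.5  3. ]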
The problem could be that people will want to have more complex interpolation in NumPy as the simple one exist, and then this question will be raised again :( Not quite sure there is an answer that will satisfy everybody - in fact I'm sure of the opposite :) - Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Wed Mar 7 12:25:57 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 07 Mar 2007 10:25:57 -0700 Subject: [SciPy-user] Arrayfns in numpy? In-Reply-To: <45EED6DF.9020502@american.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <45EED6DF.9020502@american.edu> Message-ID: <45EEF5A5.6080202@ee.byu.edu> Alan Isaac wrote: >Matthieu Brucher wrote: > > >>I'm opposed to this as well, interpolation is not linear algebra, it is >>signal processing, and as Souheil said, why not nd-interpolation with >>B-Splines then ? >> >> > >I am not taking sides on this issue, but I do >take issue with trying to settle it with this >kind of "argument". A rhetorical question does >not set out the issues. For example, treating >a rhetorical question as a real question for a >moment, one might respond that a simple 1-d >interpolation would >- find wide use among people who need nothing more >- enhance backward compatibility >- impose minimum maintenance requirements > >Again, I am not taking sides on the issue, so I >am in no way suggesting that such points are >decisive. However opposition that fails to >address such points is hardly decisive either. > >My point of reference would be: if it is likely >to drain even modest developer effort away from >core NumPy issues, then it is harmful. Otherwise >it is harmless as long as it introduces no new >dependencies. > > This is an impoirtant point for me. I'm only considering adding interp because it's already done and just a matter of cut-and-paste. My take on this, is that I would rather use developer time to make scipy more modular and easier to install (this usually translates to binary builds for multiple platforms). That way you could grab just the modules of interest. Ideally, grabbing these modules is a matter of using setuptools (or some other package management concept) to do it automatically as needed. -Travis From prabhu at aero.iitb.ac.in Wed Mar 7 23:34:31 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 8 Mar 2007 10:04:31 +0530 Subject: [SciPy-user] tvtk GL2PS In-Reply-To: <001501c75fe9$942463e0$c000a8c0@JELLE> References: <001501c75fe9$942463e0$c000a8c0@JELLE> Message-ID: <17903.37463.446065.89579@gargle.gargle.HOWL> >>>>> "Jelle" == Jelle Feringa / EZCT Architecture & Design Research writes: Jelle> Hi, Currently I'm using the enthon (win32, 2.4) 1.0.0 Jelle> release, which is terrific! Thanks to Bryce and the enthought build team (or is that a one man army?) for all their work! Jelle> Though there is a minor, but to me important glitch in Jelle> (t)vtk; [...] Jelle> Saving as a vector PS/EPS/PDF/TeX file using GL2PS is Jelle> either not supported by your version of VTK or you have not Jelle> configured VTK to work with GL2PS -- read the documentation Jelle> for the vtkGL2PSExporter class. Jelle> Which is really too bad. 
This would require Bryce to build VTK with the VTK_USE_GL2PS flag set to on for the next release. There are no other dependencies that this brings up. cheers, prabhu From nwagner at iam.uni-stuttgart.de Thu Mar 8 08:19:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 08 Mar 2007 14:19:54 +0100 Subject: [SciPy-user] Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: <45EE899D.9040304@ntc.zcu.cz> References: <45EE899D.9040304@ntc.zcu.cz> Message-ID: <45F00D7A.4060603@iam.uni-stuttgart.de> Robert Cimrman wrote: > Neilen Marais wrote: > >> I'm trying to get an incomplete LU factorisation of a sparse matrix using >> scipy.linsolve.splu. It's using umfpack as far as I can tell. I'm telling >> it to drop elements using the drop_tol keyword argument, though it seems >> to be having no effect: >> > > Just one correction: linsolve.splu uses SuperLU function *gstrf, so if > the threshold does not work, the problem is there... > > Otherwise you can get LU factors with UmfpackContext.lu, but those are > always complete. > > cheers, > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Just out of interest Is there something comparable with sigma_solve = linsolve.splu(A - sigma*B).solve available in UMFPACK ? Nils From jelle.feringa at ezct.net Thu Mar 8 10:41:27 2007 From: jelle.feringa at ezct.net (jelle) Date: Thu, 8 Mar 2007 15:41:27 +0000 (UTC) Subject: [SciPy-user] tvtk GL2PS References: <001501c75fe9$942463e0$c000a8c0@JELLE> <17903.37463.446065.89579@gargle.gargle.HOWL> Message-ID: > This would require Bryce to build VTK with the VTK_USE_GL2PS flag set > to on for the next release. There are no other dependencies that this > brings up. > > cheers, > prabhu Thanks Prabu, That really would be wonderful! Being able to export to a vector format really is a huge feauture! Cheers, -jelle From nwagner at iam.uni-stuttgart.de Thu Mar 8 11:06:38 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 08 Mar 2007 17:06:38 +0100 Subject: [SciPy-user] Usage of fmin_ncg Message-ID: <45F0348E.6070306@iam.uni-stuttgart.de> Hi all, I have a question wrt optimize.fmin_ncg(F,v_0,Fp, fhess_p = Fhess_p). especially the last argument is of interest. I am not sure if I applied it correctly. The help function yields fhess_p -- a function to compute the Hessian of f times an arbitrary vector: fhess_p (x, p, *args) Is p the arbitrary vector ? Any hint will be appreciated. Nils P.S.: The Hessian is not directly available. 
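For reference, the help text quoted above means fhess_p receives the current point
first and the arbitrary vector second, i.e. fhess_p(x, p) should return the Hessian
at x applied to p. A minimal sketch with a made-up quadratic objective, where the
Hessian is the constant matrix A and the product is therefore just dot(A, p):

from numpy import array, dot, zeros
from scipy import optimize

A = array([[3.0, 1.0], [1.0, 2.0]])    # illustrative SPD matrix
b = array([1.0, 1.0])

def f(x):
    return 0.5*dot(x, dot(A, x)) - dot(b, x)

def fprime(x):
    return dot(A, x) - b

def fhess_p(x, p):
    return dot(A, p)                   # Hessian (constant here) applied to p

xopt = optimize.fmin_ncg(f, zeros(2), fprime, fhess_p=fhess_p)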
from scipy import * from pylab import plot, show, semilogy, xlabel, ylabel, legend, savefig, title, figure, ylim # # def R(v): """ Rayleigh quotient """ return dot(v.T,A*v)/dot(v.T,B*v) def F(v): """ Objective function """ Bv = bsolve(v) res = linalg.norm(A*Bv-R(Bv)*B*Bv)/linalg.norm(B*Bv) h = 0.25*dot(bsolve(v).T,v)**2+0.5*dot(sigma_solve(v).T,v) data.append(res) return h def Fp(v): """ Gradient of the objective function """ return dot(bsolve(v).T,v)*bsolve(v)+sigma_solve(v) def Fhess_p(v,p): """ Hessian applied to an arbitrary vector p """ return dot(bsolve(v).T,v)*bsolve(p)+sigma_solve(p)+2*dot(outer(bsolve(v),bsolve(v)),p) # # Test matrices from MatrixMarket # A = io.mmread('bcsstk02.mtx.gz') B = io.mmread('bcsstm02.mtx.gz') n = A.shape[0] sigma = 47.5 # Preassigned shift bsolve = linsolve.splu(B).solve # inv(B)*( ) sigma_solve = linsolve.splu(A - sigma*B).solve # inv(A-sigma*B)*( ) random.seed(10) v_0 = random.rand(n) # Initial vector data=[] v = optimize.fmin_ncg(F,v_0,Fp, fhess_p = Fhess_p) semilogy(arange(0,len(data)),data) print 'Newton CG',-1./(2*sqrt(-F(v)))+sigma print R(bsolve(v)) print From peridot.faceted at gmail.com Thu Mar 8 11:16:48 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 8 Mar 2007 11:16:48 -0500 Subject: [SciPy-user] Random number generation and scipy.stats Message-ID: Hi, scipy.stats defines a number of distribution objects (scipy.stats.chi2 represents the chi-squared distribution, for example). How useful would it be to add a .generate(size=(1,)) method which generated random numbers according to the distribution? There's practically no extra effort involved for those functions which have a .ppf() method, since that allows one to use the transformation method: In [1]: import numpy, scipy.stats In [2]: scipy.stats.chi2.ppf(numpy.random.rand(5),2) Out[2]: array([ 3.19017391, 3.27772839, 3.44953492, 8.09066854, 0.29628122]) Maybe the simplicity makes it not worth implementing, but on the one hand there may be distributions that allow much more efficient generation, and on the other this may not occur to users... Anne M. Archibald From cimrman3 at ntc.zcu.cz Thu Mar 8 11:23:32 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Mar 2007 17:23:32 +0100 Subject: [SciPy-user] Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: <45F00D7A.4060603@iam.uni-stuttgart.de> References: <45EE899D.9040304@ntc.zcu.cz> <45F00D7A.4060603@iam.uni-stuttgart.de> Message-ID: <45F03884.3000101@ntc.zcu.cz> Nils Wagner wrote: > Just out of interest > > Is there something comparable with > > sigma_solve = linsolve.splu(A - sigma*B).solve > > available in UMFPACK ? Sure, although it is not a oneliner. This is the relevant part of the docstring: --- Sparse matrix in CSR or CSC format: mtx Righthand-side: rhs Solution: sol import scipy.linsolve.umfpack as um # Contruct the solver. umfpack = um.UmfpackContext() # Make LU decomposition. umfpack.numeric( mtx ) ... # Use already LU-decomposed matrix. sol1 = umfpack( um.UMFPACK_A, mtx, rhs1, autoTranspose = True ) sol2 = umfpack( um.UMFPACK_A, mtx, rhs2, autoTranspose = True ) # same as: sol1 = umfpack.solve( um.UMFPACK_A, mtx, rhs1, autoTranspose = True ) sol2 = umfpack.solve( um.UMFPACK_A, mtx, rhs2, autoTranspose = True ) --- So basically, after calling umfpack.numeric( mtx ), all subsequent calls to umfpack or umfpack.solve() will reuse the LU decomposition (if mtx stays the same...) 
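The same factor-once, solve-many pattern is also available through linsolve.splu,
which is what the factorized() helper proposed below falls back on; a small sketch
with a made-up matrix (module locations differ between scipy versions):

from numpy import array
from scipy import sparse, linsolve

A = sparse.csc_matrix(array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))

solve = linsolve.splu(A).solve        # the LU factorization happens once, here

x1 = solve(array([1.0, 2.0, 3.0]))    # each call reuses the same LU factors
x2 = solve(array([0.0, 1.0, 0.0]))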
Would you be interested in exposing this in linsolve, as, say, def factorized( A ): if isUmfpack and useUmfpack: # This must be written... else: return splu( A ).solve r. From jdh2358 at gmail.com Thu Mar 8 11:43:25 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 8 Mar 2007 10:43:25 -0600 Subject: [SciPy-user] Random number generation and scipy.stats In-Reply-To: References: Message-ID: <88e473830703080843o73f8649ei5a91840a46091816@mail.gmail.com> On 3/8/07, Anne Archibald wrote: > Hi, > > scipy.stats defines a number of distribution objects (scipy.stats.chi2 > represents the chi-squared distribution, for example). How useful > would it be to add a .generate(size=(1,)) method which generated > random numbers according to the distribution? There's practically no > extra effort involved for those functions which have a .ppf() method, > since that allows one to use the transformation method: Isn't this what rvs already does, or am I misunderstanding you In [184]: import scipy.stats as stats In [185]: stats.expon(0,5).rvs(10) Out[185]: array([ 6.49235468, 3.66423025, 11.22019501, 4.50247735, 0.95703071, 4.67328849, 1.52512936, 22.77158456, 7.32379865, 4.45632927]) JDH From cimrman3 at ntc.zcu.cz Thu Mar 8 12:10:30 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Mar 2007 18:10:30 +0100 Subject: [SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: <45F03884.3000101@ntc.zcu.cz> References: <45EE899D.9040304@ntc.zcu.cz> <45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz> Message-ID: <45F04386.6010105@ntc.zcu.cz> Robert Cimrman wrote: > Nils Wagner wrote: >> Just out of interest >> >> Is there something comparable with >> >> sigma_solve = linsolve.splu(A - sigma*B).solve >> >> available in UMFPACK ? > > Sure, although it is not a oneliner. This is the relevant part of the > docstring: > Would you be interested in exposing this in linsolve, as, say, > > def factorized( A ): > if isUmfpack and useUmfpack: > # This must be written... > else: > return splu( A ).solve Well, I did it since I am going to need this, too :-) In [3]:scipy.linsolve.factorized? ... Definition: scipy.linsolve.factorized(A) Docstring: Return a fuction for solving a linear system, with A pre-factorized. Example: solve = factorized( A ) # Makes LU decomposition. x1 = solve( rhs1 ) # Uses the LU factors. x2 = solve( rhs2 ) # Uses again the LU factors. This uses UMFPACK if available. cheers, r. From peridot.faceted at gmail.com Thu Mar 8 12:32:37 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 8 Mar 2007 12:32:37 -0500 Subject: [SciPy-user] Random number generation and scipy.stats In-Reply-To: <88e473830703080843o73f8649ei5a91840a46091816@mail.gmail.com> References: <88e473830703080843o73f8649ei5a91840a46091816@mail.gmail.com> Message-ID: On 08/03/07, John Hunter wrote: > Isn't this what rvs already does, or am I misunderstanding you Indeed. I feel silly now. Sorry for the noise! Anne From jpeacock at mesoscopic.mines.edu Thu Mar 8 12:34:36 2007 From: jpeacock at mesoscopic.mines.edu (Jared Peacock) Date: Thu, 8 Mar 2007 10:34:36 -0700 Subject: [SciPy-user] Phase Unwrapping Algorithm Message-ID: <17904.18732.283675.88678@hipparchus.mines.edu> Does anybody know of a good phase unwrapping algorithm in Python? J. 
Peacock From jdh2358 at gmail.com Thu Mar 8 12:55:26 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 8 Mar 2007 11:55:26 -0600 Subject: [SciPy-user] Phase Unwrapping Algorithm In-Reply-To: <17904.18732.283675.88678@hipparchus.mines.edu> References: <17904.18732.283675.88678@hipparchus.mines.edu> Message-ID: <88e473830703080955u7d677dd2he524aee7380d704a@mail.gmail.com> On 3/8/07, Jared Peacock wrote: > > Does anybody know of a good phase unwrapping algorithm in Python? Does scipy.signal.hilbert do what you want? """ Analytic signal s(t) = a(t) * sin( phi(t) ) """ from pylab import figure, nx, show from scipy.signal import hilbert t = nx.arange(0.0, 0.5, 0.001) s = nx.sin(2*nx.pi*10*t) + nx.mlab.randn(len(t)) h = hilbert(s) a = nx.absolute(h) phase = nx.angle(h) fig = figure() ax1 = fig.add_subplot(211) ax1.plot(t, s, 'g-', t, a*nx.cos(phase), 'b-') ax2 = fig.add_subplot(212) ax2.plot(t, s-a*nx.cos(phase)) show() From lorenzo.isella at gmail.com Thu Mar 8 13:11:59 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 8 Mar 2007 19:11:59 +0100 Subject: [SciPy-user] Reading Binary Files Message-ID: Dear All, I hope this is not too off-topic. I have been given an old but reliable fortran code for fluid dynamic simulations. It saves a a lot of data using 3D arrays (q1,q2,q3,pr) as a binary file. I cut and paste the part of the fortran code saving the data into a binary file: (in the following iav=1 and iwrq2=1). if(iav.eq.1) then namfil='field'//ipfi//'.dat' pnamfil='field'//ipfi//'.dat' open(13,file=namfil,form='unformatted') else pnamfil=filcnw print*, "filcnw is", filcnw open(13,file=filcnw,form='unformatted') endif print*, "iav is", iav write(6,*) pnamfil,'written at t=', 1 time, ' prma mi=',prma,prmi nfil=13 rewind(nfil) write(nfil) n1,n2,n3 write(nfil) ros,alx3d,ren,time if(iwrq2.eq.1) then c c large memory occupancy c print*,"I am saving the extended results" write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), 1 (((q2(i,j,k),i=1,n1),j=1,n2),k=1,n3), 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) else c c reduced memory occupancy c the pressure is not necessary for restarting files c even for post processing can be saved but then c the advancement of a time step should be performed c write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) endif write(nfil) ntime,ntt,nav close(nfil) The results is for instance file field0010.dat, which I try reading in Python by using pylab and the statement: s = file( './field0010.dat','rb' ).read( ) newarr = fromstring(s ,Float) but the content of newarr seems absolutely wrong (number of the order of 1e+309 which are not produced or saved in the simulations...). Am I doing something wrong? Kind Regards Lorenzo From nwagner at iam.uni-stuttgart.de Thu Mar 8 13:17:32 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 08 Mar 2007 19:17:32 +0100 Subject: [SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: <45F04386.6010105@ntc.zcu.cz> References: <45EE899D.9040304@ntc.zcu.cz> <45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz> <45F04386.6010105@ntc.zcu.cz> Message-ID: On Thu, 08 Mar 2007 18:10:30 +0100 Robert Cimrman wrote: > Robert Cimrman wrote: >> Nils Wagner wrote: >>> Just out of interest >>> >>> Is there something comparable with >>> >>> sigma_solve = linsolve.splu(A - sigma*B).solve >>> >>> available in UMFPACK ? 
>> >> Sure, although it is not a oneliner. This is the >>relevant part of the >> docstring: > >> Would you be interested in exposing this in linsolve, >>as, say, >> >> def factorized( A ): >> if isUmfpack and useUmfpack: >> # This must be written... >> else: >> return splu( A ).solve > > > Well, I did it since I am going to need this, too :-) > > In [3]:scipy.linsolve.factorized? > ... > Definition: scipy.linsolve.factorized(A) > Docstring: > Return a fuction for solving a linear system, with A >pre-factorized. > > Example: > solve = factorized( A ) # Makes LU decomposition. > x1 = solve( rhs1 ) # Uses the LU factors. > x2 = solve( rhs2 ) # Uses again the LU factors. > > This uses UMFPACK if available. > > cheers, > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Great ! Thank you very much ! Nils From wollez at gmx.net Thu Mar 8 13:32:29 2007 From: wollez at gmx.net (WolfgangZ) Date: Thu, 08 Mar 2007 19:32:29 +0100 Subject: [SciPy-user] Reading Binary Files In-Reply-To: References: Message-ID: Lorenzo Isella schrieb: > Dear All, > I hope this is not too off-topic. I have been given an old but > reliable fortran code for fluid dynamic simulations. It saves a a lot > of data using 3D arrays (q1,q2,q3,pr) as a binary file. > I cut and paste the part of the fortran code saving the data into a binary file: > (in the following iav=1 and iwrq2=1). > > > if(iav.eq.1) then > namfil='field'//ipfi//'.dat' > pnamfil='field'//ipfi//'.dat' > open(13,file=namfil,form='unformatted') > else > > pnamfil=filcnw > print*, "filcnw is", filcnw > open(13,file=filcnw,form='unformatted') > endif > print*, "iav is", iav > > write(6,*) pnamfil,'written at t=', > 1 time, ' prma mi=',prma,prmi > nfil=13 > rewind(nfil) > write(nfil) n1,n2,n3 > write(nfil) ros,alx3d,ren,time > if(iwrq2.eq.1) then > c > c large memory occupancy > c > print*,"I am saving the extended results" > write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q2(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) > else > c > c reduced memory occupancy > c the pressure is not necessary for restarting files > c even for post processing can be saved but then > c the advancement of a time step should be performed > c > write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) > endif > write(nfil) ntime,ntt,nav > close(nfil) > > > > The results is for instance file field0010.dat, which I try reading in > Python by using pylab and the statement: > > s = file( './field0010.dat','rb' ).read( ) > > newarr = fromstring(s ,Float) > > but the content of newarr seems absolutely wrong (number of the order > of 1e+309 which are not produced or saved in the simulations...). > Am I doing something wrong? > > Kind Regards > > Lorenzo are you sure that fortran saves binary files? Have you checked the files with a text editor? From hasslerjc at comcast.net Thu Mar 8 14:41:35 2007 From: hasslerjc at comcast.net (John Hassler) Date: Thu, 08 Mar 2007 14:41:35 -0500 Subject: [SciPy-user] Reading Binary Files In-Reply-To: References: Message-ID: <45F066EF.5040709@comcast.net> An HTML attachment was scrubbed... 
URL: 

From gnurser at googlemail.com Thu Mar 8 14:58:25 2007
From: gnurser at googlemail.com (George Nurser)
Date: Thu, 8 Mar 2007 19:58:25 +0000
Subject: [SciPy-user] Reading Binary Files
In-Reply-To: <45F066EF.5040709@comcast.net>
References: <45F066EF.5040709@comcast.net>
Message-ID: <1d1e6ea70703081158o5dc1d3c8x775406815f9a7939@mail.gmail.com>

This short routine might be useful; it gets rid of the extra stuff at the
beginning and end that fortran adds. But I can't guarantee it will work for you.
Use swap=True if you are reading little-endian data on a big-endian machine
(or big-endian data on a little-endian machine). Use f8=False if the data are
real*4 instead of real*8 (double precision).
You also need to put somewhere:

from numpy import *

def readbin(file_in, swap=False, f8=True):
    if f8:
        # real*8 data: read the file as 8-byte floats, reinterpret as 4-byte
        # words, drop the leading/trailing record markers, then view the
        # payload as 8-byte floats again.
        htot = fromfile(file=file_in, dtype=float64)
        c = htot.view(single)
        hc = c[1:-1].view(double)
    else:
        # real*4 data: markers and data are both 4 bytes wide, so just drop
        # the first and last words.
        htot = fromfile(file=file_in, dtype=float32)
        hc = htot[1:-1]
    if swap:
        hc = hc.byteswap()
    return hc

From hasslerjc at comcast.net Thu Mar 8 15:28:11 2007
From: hasslerjc at comcast.net (John Hassler)
Date: Thu, 08 Mar 2007 15:28:11 -0500
Subject: [SciPy-user] Reading Binary Files
In-Reply-To: 
References: 
Message-ID: <45F071DB.9050209@comcast.net>

I realize that I was so eager to answer your question that I neglected to
try to solve your problem.  If you have the source (and if this isn't
legacy data files), just recompile with more usable I/O formats.  We used
to use unformatted I/O back in the day when a BIG computer might have
several tens of K (as in K) of memory, and the high speed I/O was mag.
tape.  We'd dump to tape to get some free memory, backspace, and read it
back in when we needed it.  "Unformatted" I/O was intended to be used by
programs made by the same compiler on the same computer, and not for any
external use.
john

Lorenzo Isella wrote:
> Dear All,
> I hope this is not too off-topic. I have been given an old but
> reliable fortran code for fluid dynamic simulations. It saves a a lot
> of data using 3D arrays (q1,q2,q3,pr) as a binary file.
> I cut and paste the part of the fortran code saving the data into a binary file:
> (in the following iav=1 and iwrq2=1).
> > > if(iav.eq.1) then > namfil='field'//ipfi//'.dat' > pnamfil='field'//ipfi//'.dat' > open(13,file=namfil,form='unformatted') > else > > pnamfil=filcnw > print*, "filcnw is", filcnw > open(13,file=filcnw,form='unformatted') > endif > print*, "iav is", iav > > write(6,*) pnamfil,'written at t=', > 1 time, ' prma mi=',prma,prmi > nfil=13 > rewind(nfil) > write(nfil) n1,n2,n3 > write(nfil) ros,alx3d,ren,time > if(iwrq2.eq.1) then > c > c large memory occupancy > c > print*,"I am saving the extended results" > write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q2(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) > else > c > c reduced memory occupancy > c the pressure is not necessary for restarting files > c even for post processing can be saved but then > c the advancement of a time step should be performed > c > write(nfil) (((q1(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((q3(i,j,k),i=1,n1),j=1,n2),k=1,n3), > 1 (((pr(i,j,k),i=1,n1),j=1,n2),k=1,n3) > endif > write(nfil) ntime,ntt,nav > close(nfil) > > > > The results is for instance file field0010.dat, which I try reading in > Python by using pylab and the statement: > > s = file( './field0010.dat','rb' ).read( ) > > newarr = fromstring(s ,Float) > > but the content of newarr seems absolutely wrong (number of the order > of 1e+309 which are not produced or saved in the simulations...). > Am I doing something wrong? > > Kind Regards > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From bnuttall at uky.edu Thu Mar 8 15:32:03 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Thu, 08 Mar 2007 15:32:03 -0500 Subject: [SciPy-user] Reading Binary Files In-Reply-To: <45F066EF.5040709@comcast.net> References: <45F066EF.5040709@comcast.net> Message-ID: <6.0.1.1.2.20070308152737.0280ed60@pop.uky.edu> An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Thu Mar 8 17:28:57 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 8 Mar 2007 17:28:57 -0500 Subject: [SciPy-user] Phase Unwrapping Algorithm In-Reply-To: <17904.18732.283675.88678@hipparchus.mines.edu> References: <17904.18732.283675.88678@hipparchus.mines.edu> Message-ID: On 08/03/07, Jared Peacock wrote: > > Does anybody know of a good phase unwrapping algorithm in Python? If by phase unwrapping you mean taking values that have been reduced modulo 1 (say) and trying to reconstitute the originals, the problem is in principle underdetermined - we really can't distinguish big jumps in the data from the results of phase wrapping. Still, for reasonable data it's often enough to consider every jump bigger than 0.5 a phase wrap: def unwrap(a): return a-numpy.concatenate(([0],numpy.cumsum(numpy.round(a[1:]-a[:-1])))) That is, a[1:]-a[:-1] extracts the first difference, round converts anything less than -0.5 to -1 and anything more than 0,5 to 1, we concatenate a zero on the front, and then we subtract (because a jump down of 0.6 is rounded to -1 and we want it to be a jump up) the result from our original data. I suppose this could be made more efficient by rewriting more of the operations to be in-place. 
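A quick check of this on synthetic mod-1 data (the helper is renamed here so it
does not shadow numpy's own radian-based unwrap; the test signal is made up):

import numpy

def unwrap_mod1(a):
    return a - numpy.concatenate(([0], numpy.cumsum(numpy.round(a[1:] - a[:-1]))))

true_phase = numpy.arange(31)*0.1              # smoothly increasing phase
wrapped = true_phase % 1.0                     # observations reduced modulo 1
recovered = unwrap_mod1(wrapped)

print(numpy.allclose(recovered, true_phase))   # True while real jumps stay below 0.5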
Anne From drfredkin at ucsd.edu Thu Mar 8 19:12:17 2007 From: drfredkin at ucsd.edu (Donald Fredkin) Date: Fri, 9 Mar 2007 00:12:17 +0000 (UTC) Subject: [SciPy-user] Phase Unwrapping Algorithm References: <17904.18732.283675.88678@hipparchus.mines.edu> Message-ID: Jared Peacock wrote: > > Does anybody know of a good phase unwrapping algorithm in Python? > Whatis wrong with scipy.unwrap, which is actually numpy.lib.function_base.unwrap? -- From maschke at molgen.mpg.de Fri Mar 9 09:54:02 2007 From: maschke at molgen.mpg.de (Elisabeth Maschke-Dutz) Date: Fri, 09 Mar 2007 15:54:02 +0100 Subject: [SciPy-user] Python embedded in C : import scipy fails Message-ID: <45F1750A.6070002@molgen.mpg.de> Hallo, I want to use some programs that uses scipy in an in C embedded python program. import sys import os from string import join .......... all those comands work fine. The sys.path is found, it looks as if the whole environment is the same as it is when I start python in a shell. The directory .../python2.4/site-packages/scipy is also set in sys.path. But it is impossible to import scipy, the program terminates with a segmentation fault or the function is not found. The same problem occurs if I try to import numpy. It seems that packages, which do not belong to python can not be importet. Single programs anywhere do not make any problems if the path is set in sys.path. The import of scipy in all other python programs do not make any problems: >>> import scipy >>> the testmodel.c code: #include int main(int argc, char **argv) { char *modelp = "modelp"; char *testmodel = "testmodel"; PyObject *pName, *pModule, *pFunc; PyObject *pArgs1, *pArgs2, *pValue, *hallop; int i; double l; l=100.0; Py_SetProgramName(argv[0]); Py_Initialize(); PySys_SetArgv(argc, argv); pName = PyString_FromString(modelp); if (pName != NULL) { printf("pName != NULL"); } else { printf("pName == NULL"); } pModule = PyImport_Import(pName); Py_DECREF(pName); if (Py_IsInitialized()) printf("Initialized!!!"); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule,testmodel); printf("Function found!!!!!!!!!"); } else { printf("pModule == NULL!!!!!!!!!"); } if (PyCallable_Check(pFunc)) printf("function callable"); pArgs1 = PyFloat_FromDouble(l); pArgs2 = PyTuple_New(1); i=0; PyTuple_SetItem(pArgs2, i,pArgs1); if (pArgs2 != NULL) printf("pArgs2!=Null"); pValue = PyObject_CallObject(pFunc, pArgs2); Py_DECREF(pArgs1); Py_DECREF(pArgs2); Py_DECREF(pFunc); Py_DECREF(pModule); printf("3Result of call..............."); printf("Result of call: %ld\n", PyInt_AsLong(pValue)); return 1; } the python code: import sys import os from string import join #from math import * from string import * sys.path.append(".................../dir2") sys.path.append(".......................python2.4/site-packages/scipy") #import scipy import testimport # a single file (program) def testmodel(a): print 'in testmodel................ a= ',a print sys.path print os.environ import distutils.sysconfig t1=distutils.sysconfig.get_config_var('LINKFORSHARED') print t1 t2=distutils.sysconfig.get_config_vars() print t2 t3=distutils.sysconfig.get_python_lib() print t3 print 'nach import' #import numpy return 17 I use gcc with CFLAGS = -I/............../linux/include/python2.4 -c and link with: -pthread -lm -ldl -lutil ................lib/python2.4/config/libpython2.4.a .../getpath.o ..../config.o Has somebody an idee what I can do? Elisabeth M. 
From lorenzo.isella at gmail.com Fri Mar 9 10:57:36 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Fri, 9 Mar 2007 16:57:36 +0100 Subject: [SciPy-user] Array Bounds Message-ID: Dear All, Again a newbie question: I noticed that Python (or at least Numpy/Scipy) follows the C convention of labeling as zero the first element in an array. I can live with this, but I do not find very intuitive the loop behavior: for i in range (1,5 ) means that i actually takes values 1,2,3,4 and not 5! Why this choice? Is it a consequence of the array labeling? In case one should not like it, can it be changed by the user? Kind Regards Lorenzo From perry at stsci.edu Fri Mar 9 12:12:09 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 9 Mar 2007 12:12:09 -0500 Subject: [SciPy-user] job opportunity at Space Telescope Science Institute Message-ID: We are looking for someone to fill a position at the Space Telescope Science Institute (in Baltimore, MD) to work on Python tools for astronomical data processing and analysis. Details can be found at: http://www.stsci.edu/institute/brc/hr/co/external/Req559.html From lbolla at gmail.com Fri Mar 9 12:19:49 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 9 Mar 2007 18:19:49 +0100 Subject: [SciPy-user] job opportunity at Space Telescope Science Institute In-Reply-To: References: Message-ID: <80c99e790703090919l5cb4dd0mb06216c8eaccdb77@mail.gmail.com> I cannot access the link... -lorenzo On 3/9/07, Perry Greenfield wrote: > > We are looking for someone to fill a position at the Space Telescope > Science Institute (in Baltimore, MD) to work on Python tools for > astronomical data processing and analysis. Details can be found at: > > http://www.stsci.edu/institute/brc/hr/co/external/Req559.html > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Mar 9 13:07:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Mar 2007 12:07:27 -0600 Subject: [SciPy-user] Array Bounds In-Reply-To: References: Message-ID: <45F1A25F.7010302@gmail.com> Lorenzo Isella wrote: > Dear All, > Again a newbie question: I noticed that Python (or at least > Numpy/Scipy) follows the C convention of labeling as zero the first > element in an array. > I can live with this, but I do not find very intuitive the loop behavior: > > for i in range (1,5 ) > > means that i actually takes values 1,2,3,4 and not 5! > Why this choice? Is it a consequence of the array labeling? It is a consequence of *Python* using zero-indexing everywhere. numpy simply follows suit. It means that range(len(sequence)) gives you all of the indices into sequence. Just give it some time. You will grow used to it. > In case > one should not like it, can it be changed by the user? No. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From wbaxter at gmail.com Fri Mar 9 13:20:01 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 10 Mar 2007 03:20:01 +0900 Subject: [SciPy-user] Python embedded in C : import scipy fails In-Reply-To: <45F1750A.6070002@molgen.mpg.de> References: <45F1750A.6070002@molgen.mpg.de> Message-ID: Don't know if this is the problem, but in my app, I call "import_array1(false);" right after calling Py_Initialize(). --bb On 3/9/07, Elisabeth Maschke-Dutz wrote: > > Hallo, > > I want to use some programs that uses scipy > in an in C embedded python program. > import sys > import os > from string import join > .......... > all those comands work fine. > The sys.path is found, it looks as if the whole environment > is the same as it is when I start python in a shell. > The directory .../python2.4/site-packages/scipy is also set > in sys.path. > > But it is impossible to import scipy, > the program terminates with a segmentation fault or the function is not > found. > The same problem occurs if I try to import numpy. > ... > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perry at stsci.edu Fri Mar 9 14:09:58 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 9 Mar 2007 14:09:58 -0500 Subject: [SciPy-user] job opportunity at Space Telescope Science Institute In-Reply-To: <80c99e790703090919l5cb4dd0mb06216c8eaccdb77@mail.gmail.com> References: <80c99e790703090919l5cb4dd0mb06216c8eaccdb77@mail.gmail.com> Message-ID: I'm told this problem has been corrected (but not being on an external network I can't conveniently check). Please try again. Thanks, Perry On Mar 9, 2007, at 12:19 PM, lorenzo bolla wrote: > I cannot access the link... > -lorenzo > > On 3/9/07, Perry Greenfield wrote: We are looking > for someone to fill a position at the Space Telescope > Science Institute (in Baltimore, MD) to work on Python tools for > astronomical data processing and analysis. Details can be found at: > > http://www.stsci.edu/institute/brc/hr/co/external/Req559.html > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From pwang at enthought.com Fri Mar 9 14:18:09 2007 From: pwang at enthought.com (Peter Wang) Date: Fri, 9 Mar 2007 13:18:09 -0600 Subject: [SciPy-user] Array Bounds In-Reply-To: <45F1A25F.7010302@gmail.com> References: <45F1A25F.7010302@gmail.com> Message-ID: <6D8DF23D-66D0-41E2-9523-6103E528B2AE@enthought.com> On Mar 9, 2007, at 12:07 PM, Robert Kern wrote: >> In case one should not like it, can it be changed by the user? > > No. Well, this is not strictly true. The range() function is a built-in, but you can override it by defining your own that includes the final index: def range(a,b): return __builtins__.range(a,b+1) >>> range(1,5) [1, 2, 3, 4, 5] Of course, you should NEVER do this. Other people that catch you redefining built-in functions will probably call you all sorts of bad names. 
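The practical claims made for the zero-based, half-open convention in this thread
boil down to a few small invariants; a toy illustration:

a = ['p', 'q', 'r', 's', 't']

print(list(range(len(a))) == [0, 1, 2, 3, 4])   # range(len(a)) is every valid index of a
print(len(range(2, 5)) == 5 - 2)                # range(m, n) has exactly n - m items
print(a[:3] + a[3:] == a)                       # splitting a at k loses nothing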
-Peter From robert.kern at gmail.com Fri Mar 9 14:22:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Mar 2007 13:22:18 -0600 Subject: [SciPy-user] job opportunity at Space Telescope Science Institute In-Reply-To: References: <80c99e790703090919l5cb4dd0mb06216c8eaccdb77@mail.gmail.com> Message-ID: <45F1B3EA.9010402@gmail.com> Perry Greenfield wrote: > I'm told this problem has been corrected (but not being on an > external network I can't conveniently check). Please try again. Works for me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lanceboyle at qwest.net Fri Mar 9 19:56:19 2007 From: lanceboyle at qwest.net (Jerry) Date: Fri, 9 Mar 2007 17:56:19 -0700 Subject: [SciPy-user] Array Bounds In-Reply-To: References: Message-ID: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> On Mar 9, 2007, at 8:57 AM, Lorenzo Isella wrote: > Dear All, > Again a newbie question: I noticed that Python (or at least > Numpy/Scipy) follows the C convention of labeling as zero the first > element in an array. > I can live with this, but I do not find very intuitive the loop > behavior: > > for i in range (1,5 ) > > means that i actually takes values 1,2,3,4 and not 5! > Why this choice? Is it a consequence of the array labeling? In case > one should not like it, can it be changed by the user? > Kind Regards > > Lorenzo In my opinion, this is poor language design (along with Henry Ford- style array indexing--any array you want as long as it starts with zero) and an impediment to writing correct programs. Good design lets the programmer abstract his program to resemble the real-world problem that he is working on. Get used to it or move along--every time I post about this, that's the advice I get. (1) Python arrays will _never_ change; (2) it is blasphemous to even mention it; (3) someone will ask, why in the world would anyone want an array that doesn't start with zero; (4) someone will flame me. Jerry From aisaac at american.edu Fri Mar 9 20:18:03 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 9 Mar 2007 20:18:03 -0500 Subject: [SciPy-user] Array Bounds In-Reply-To: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> References: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> Message-ID: On Fri, 9 Mar 2007, Jerry apparently wrote: > (2) it is blasphemous to even mention it; No, just pointless. Surely you do not think you are the first? Here's the opposite request for a unit-based language: http://www-gatago.com/comp/soft-sys/matlab/30395164.html Here is an oft-cited by not completely persuasive discussion: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html Usually, what does it matter? Just know what you are saying. Cheers, Alan Isaac From lanceboyle at qwest.net Fri Mar 9 21:20:41 2007 From: lanceboyle at qwest.net (Jerry) Date: Fri, 9 Mar 2007 19:20:41 -0700 Subject: [SciPy-user] Array Bounds In-Reply-To: References: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> Message-ID: The opposite request is not for unit-based arrays but for arrays of user-specified lower bounds. 
Jerry On Mar 9, 2007, at 6:18 PM, Alan G Isaac wrote: > > Here's the opposite request for a unit-based language: > http://www-gatago.com/comp/soft-sys/matlab/30395164.html > From robert.kern at gmail.com Fri Mar 9 22:20:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 09 Mar 2007 21:20:16 -0600 Subject: [SciPy-user] Array Bounds In-Reply-To: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> References: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> Message-ID: <45F223F0.8030403@gmail.com> Jerry wrote: > In my opinion, this is poor language design (along with Henry Ford- > style array indexing--any array you want as long as it starts with > zero) and an impediment to writing correct programs. Good design lets > the programmer abstract his program to resemble the real-world > problem that he is working on. Get used to it or move along--every > time I post about this, that's the advice I get. (1) Python arrays > will _never_ change; (2) it is blasphemous to even mention it; (3) > someone will ask, why in the world would anyone want an array that > doesn't start with zero; (4) someone will flame me. (0) Someone will tell you that you can define whatever classes you like with whatever index semantics that you like. It's all very well to *say* that things would be better if one could change the lower index bound at will, but you have the ability to *show* us, so why don't you? Try it out! Work out the inevitable kinks with writing generic code that is agnostic to the heterogeneous index semantics. Show us the code! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From perry at stsci.edu Fri Mar 9 22:30:51 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 9 Mar 2007 22:30:51 -0500 Subject: [SciPy-user] Array Bounds In-Reply-To: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> References: <19B1090D-8692-4895-AFDD-04FF662072CA@qwest.net> Message-ID: <5BC5B737-7E08-451F-A0DA-D6E2F809F9E5@stsci.edu> On Mar 9, 2007, at 7:56 PM, Jerry wrote: > > In my opinion, this is poor language design (along with Henry Ford- > style array indexing--any array you want as long as it starts with > zero) and an impediment to writing correct programs. Good design lets Me wonders why you waste your time here. You should be using one of those good language designs instead of getting fools to fix their stupid language. It would be much more productive (and then show all what we should be using by showing us how much better your applications are). > the programmer abstract his program to resemble the real-world > problem that he is working on. Get used to it or move along--every > time I post about this, that's the advice I get. (1) Python arrays > will _never_ change; (2) it is blasphemous to even mention it; (3) > someone will ask, why in the world would anyone want an array that > doesn't start with zero; (4) someone will flame me. > ^^^^^^^^^^^^^^^^^^^^^^^^^^ Er, that's what you were hoping for, wasn't it? 
;-) > Jerry > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From perry at stsci.edu Fri Mar 9 22:59:55 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 9 Mar 2007 22:59:55 -0500 Subject: [SciPy-user] Array Bounds In-Reply-To: References: Message-ID: <20FCA004-7DB8-431C-B666-EE29DF9C70F4@stsci.edu> On Mar 9, 2007, at 10:57 AM, Lorenzo Isella wrote: > Dear All, > Again a newbie question: I noticed that Python (or at least > Numpy/Scipy) follows the C convention of labeling as zero the first > element in an array. > I can live with this, but I do not find very intuitive the loop > behavior: > > for i in range (1,5 ) > > means that i actually takes values 1,2,3,4 and not 5! > Why this choice? Is it a consequence of the array labeling? In case > one should not like it, can it be changed by the user? > Kind Regards > > Lorenzo Your reaction is quite common for those coming from a language that uses 1-based indexing. But keep in mind there are also advantages for 0-based indexing that you aren't aware of since you haven't used it. There are pros and cons to both (and yes, I've used both). Since Python uses 0-based, it generally isn't a good idea to adopt different conventions than the language uses for arrays. It may make your array usage more comfortable, but will lead to all sorts of problems when you use Python lists and tuples. (Note that you mention arrays, but your example is actually due to Python list behavior and the built-in range function, which has nothing to do with numpy; in principle array indexing behavior could be made different than Python list indexing behavior, but that would be a really bad thing to do). Actually, your complaint is not directly due to that issue (though it's related). As I read it, it's because you are surprised that the end of the range does not include the number that's used for the end of the range. And that's a natural enough reaction. It's not intuitive to many people. But what's intuitive is not always the best choice. It works that way (I'm guessing here) because slices in Python work that way (e.g., mylist[1:5] doesn't include mylist[5]). Ok, why don't slices work that way? For various reasons, but one is that when you break a list into two pieces, e.g., mylist = mylist[:7] + mylist[7:], you don't have to worry about dealing with those annoying +1 or -1 offsets to get things right. In this case, the number of items sliced is simply the difference of the two indices. It may seem like a small point, but a lot of errors arise from getting stuff like this wrong, especially the edge cases of when the slice point is at 0 or the end of an array. The rules Python has chosen for this stuff are consistent, and they do work pretty well. So it isn't the most intuitive, especially to scientific or engineering types, but it's not that hard to get used to, and there are definite advantages to this approach. From as8ca at virginia.edu Sat Mar 10 01:31:07 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Sat, 10 Mar 2007 01:31:07 -0500 Subject: [SciPy-user] Array Bounds In-Reply-To: <20FCA004-7DB8-431C-B666-EE29DF9C70F4@stsci.edu> References: <20FCA004-7DB8-431C-B666-EE29DF9C70F4@stsci.edu> Message-ID: <20070310063107.GB5256@virginia.edu> On 09/03/07: 22:59, Perry Greenfield wrote: > > Actually, your complaint is not directly due to that issue (though > it's related). 
As I read it, it's because you are surprised that the > end of the range does not include the number that's used for the end > of the range. And that's a natural enough reaction. It's not > intuitive to many people. But what's intuitive is not always the best > choice. It works that way (I'm guessing here) because slices in > Python work that way (e.g., mylist[1:5] doesn't include mylist[5]). It might also be (I am guessing too) because range(n) should, intuitively, return n numbers. Given 0-based indices, it makes (some) sense to start the numbers at 0, and hence, stop at n-1. So, then by generalization, range(m,n) should generate integers m, m+1, .., n-1. That is the least surprising result. -Alok -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From hasslerjc at comcast.net Sat Mar 10 10:23:05 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 10 Mar 2007 10:23:05 -0500 Subject: [SciPy-user] Array Bounds In-Reply-To: <20070310063107.GB5256@virginia.edu> References: <20FCA004-7DB8-431C-B666-EE29DF9C70F4@stsci.edu> <20070310063107.GB5256@virginia.edu> Message-ID: <45F2CD59.3090704@comcast.net> An HTML attachment was scrubbed... URL: From steve at shrogers.com Sat Mar 10 13:36:21 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Sat, 10 Mar 2007 11:36:21 -0700 Subject: [SciPy-user] Array Bounds In-Reply-To: <45F2CD59.3090704@comcast.net> References: <20FCA004-7DB8-431C-B666-EE29DF9C70F4@stsci.edu> <20070310063107.GB5256@virginia.edu> <45F2CD59.3090704@comcast.net> Message-ID: <45F2FAA5.2090905@shrogers.com> John Hassler wrote: > There is no way to make everybody happy, I'm afraid. Early Fortran was > one-based. A text I used to use (Carnahan, et al, "Applied Numerical > Methods," 1969) warns of the difficulty of translating mathematical sums > starting with zero to Fortran indices starting with 1. On the other > hand, Press, et al., "Numerical Recipes in C," 1988, 1992, spend an > entire page justifying their practice of converting all C arrays to > one-based. (This, by the way, has been roundly criticized by "real" C > programmers.) > > Pascal, Fortran 77, and perhaps other 'modern' languages, allow > specifying both ends of the range, as in x[2:7] or something. Some > people like this. I find that it's just one more thing to keep track > of. (By this I mean, if you see x[4] somewhere in a program, you have > to look back at the declaration to know what it means.) I suppose that > something like this in Python would also require that one ALWAYS specify > both ends of range(). > For casual use, many would find it convenient to be able to change from 0-based to 1-based indexing (APL allows this). For larger programs touched by multiple programmers over time, it's a maintenance issue. # Steve From steve at shrogers.com Sat Mar 10 15:30:02 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Sat, 10 Mar 2007 13:30:02 -0700 Subject: [SciPy-user] Teaching Array Languages Message-ID: <45F3154A.5070808@shrogers.com> Thanks to all who responded to my question about teaching array programming. I've compiled a brief summary of the responses. 
NumPy ===== * Subject - Physics/Astronomy 3 - Biotechnology 1 - Engineering 2 - Microeconomics 1 * Level - College/University 7 J = * Subject - Math 1 - Engineering 1 - Technical English * Level - College/University 3 - Mathematica =========== * Subject - Math 1 * Level - College/University 1 Matlab ====== * Subject - Engineering 1 IDL === * Subject - Physics/Astronomy 2 * Level - College/University 2 Discussion ========== While this informal survey is neither precise nor comprehensive, I think it is interesting. I queried the NumPy/SciPy and J mailing lists because those are the ones that I follow. A more rigorous and broadly distributed survey may be in order. Regards, Steve From nwagner at iam.uni-stuttgart.de Mon Mar 12 08:30:33 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 12 Mar 2007 13:30:33 +0100 Subject: [SciPy-user] Structured matrices Message-ID: <45F547E9.3050208@iam.uni-stuttgart.de> Hi all, I was wondering if the matrix family (see below) has a special name ? And/or is there a way to construct this matrix via special matrices (like Hankel, Toeplitz, etc.) ? [[ 3. 1.] [ 1. 1.]] [[ 5. 3. 1.] [ 3. 3. 1.] [ 1. 1. 1.]] [[ 7. 5. 3. 1.] [ 5. 5. 3. 1.] [ 3. 3. 3. 1.] [ 1. 1. 1. 1.]] [[ 9. 7. 5. 3. 1.] [ 7. 7. 5. 3. 1.] [ 5. 5. 5. 3. 1.] [ 3. 3. 3. 3. 1.] [ 1. 1. 1. 1. 1.]] Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: structure.py Type: text/x-python Size: 196 bytes Desc: not available URL: From mwojc at p.lodz.pl Mon Mar 12 10:09:25 2007 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 12 Mar 2007 15:09:25 +0100 Subject: [SciPy-user] Index of a value In-Reply-To: References: Message-ID: <200703121509.26269.mwojc@p.lodz.pl> Hallo! Is there any shorthand for: maxvalue = A.max() maxindex = A.tolist().index(maxvalue) where A is a 1-dimensional numpy array. Thanks Marek From sgarcia at olfac.univ-lyon1.fr Mon Mar 12 10:17:05 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Mon, 12 Mar 2007 15:17:05 +0100 Subject: [SciPy-user] Index of a value In-Reply-To: <200703121509.26269.mwojc@p.lodz.pl> References: <200703121509.26269.mwojc@p.lodz.pl> Message-ID: <45F560E1.20308@olfac.univ-lyon1.fr> Hi, argmax argmin Sam Marek Wojciechowski wrote: > Hallo! > Is there any shorthand for: > > maxvalue = A.max() > maxindex = A.tolist().index(maxvalue) > > where A is a 1-dimensional numpy array. > > Thanks > Marek > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logisique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From peridot.faceted at gmail.com Mon Mar 12 13:52:09 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 12 Mar 2007 13:52:09 -0400 Subject: [SciPy-user] Structured matrices In-Reply-To: <45F547E9.3050208@iam.uni-stuttgart.de> References: <45F547E9.3050208@iam.uni-stuttgart.de> Message-ID: On 12/03/07, Nils Wagner wrote: > Hi all, > > I was wondering if the matrix family (see below) has a special name ? > And/or is there a way to construct this matrix via special matrices > (like Hankel, Toeplitz, etc.) ? 
Well, I've never come across them before, but they're easy to make directly: In [8]: minimum.outer(r_[7:-1:-2],r_[7:-1:-2]) Out[8]: array([[7, 5, 3, 1], [5, 5, 3, 1], [3, 3, 3, 1], [1, 1, 1, 1]]) Anne From bratona at yahoo.co.uk Tue Mar 13 12:39:59 2007 From: bratona at yahoo.co.uk (Adam Malinowski) Date: Tue, 13 Mar 2007 16:39:59 +0000 (UTC) Subject: [SciPy-user] difficulties compiling the arpack module from the sandbox Message-ID: Hi all, I am having a lot of difficulty building the arpack module from the scipy sandbox, running windows 2000 and using mingw with cygwin. I have compiled numpy and scipy (latest svn) with no problems, and have sucessfully installed a number of scipy sandboxed modules by running the command: 'python setup.py build --compiler=mingw32 install', in the root of the module. However, when I try this command for arpack, I get the following error: --- No module named msvccompiler in numpy.distutils, trying from distutils.. FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:\\BlasLapackLibs'] language = f77 define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] running build running config_fc running build_src building library "arpack" sources building extension "arpack._arpack" sources f2py options: [] adding 'build\src.win32-2.5\fortranobject.c' to sources. adding 'build\src.win32-2.5' to include_dirs. adding 'build\src.win32-2.5\build\src.win32-2.5\_arpack-f2pywrappers.f' to sou rces. building data_files sources running build_py running build_clib customize Mingw32CCompiler customize Mingw32CCompiler using build_clib 0 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_clib building 'arpack' library compiling Fortran sources Fortran f77 compiler: C:\cygwin\bin\g77.exe -g -Wall -fno-second-underscore -mno -cygwin -O3 -funroll-loops -march=pentium4 -mmmx -msse2 -msse -fomit-frame-point er -malign-double compile options: '-c' g77.exe:f77: .\ARPACK\SRC\cgetv0.f .\ARPACK\SRC\cgetv0.f: In subroutine `cgetv0': .\ARPACK\SRC\cgetv0.f:123: include 'debug.h' ^ Unable to open INCLUDE file `debug.h' at (^) .\ARPACK\SRC\cgetv0.f:124: include 'stat.h' ^ Unable to open INCLUDE file `stat.h' at (^) error: Command "C:\cygwin\bin\g77.exe -g -Wall -fno-second-underscore -mno-cygwi n -O3 -funroll-loops -march=pentium4 -mmmx -msse2 -msse -fomit-frame-pointer -ma lign-double -c -c .\ARPACK\SRC\cgetv0.f -o build\temp.win32-2.5\ARPACK\SRC\cgetv 0.o" failed with exit status 1 --- I have also tried using the 'build_ext' command instead, which gives this error: ---- No module named msvccompiler in numpy.distutils, trying from distutils.. FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:\\BlasLapackLibs'] language = f77 define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] running build_ext customize Mingw32CCompiler customize Mingw32CCompiler using build_ext 0 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'arpack._arpack' extension compiling C sources C compiler: gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes error: unknown file type '.src' (from '.\arpack.pyf.src') --- Both of these errors have completely stumped me, as I'm quite new to fortran/ python compiling. Could it be a problem with the ATLAS or LAPACK setup? Or perhaps its a problem with numpy distutils? Any help would very much appreciated. 
Thanks, Adam From ryanlists at gmail.com Tue Mar 13 10:00:55 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 13 Mar 2007 09:00:55 -0500 Subject: [SciPy-user] dumb install question Message-ID: I have a dumb question. I am having some troubles getting scipy/numpy/matplotlib installed in Ubuntu Edgy (I have done it from source in Breezy and used Andrew Straw's packages for Dapper), and I read on http://scipy.org/Installing_SciPy/Linux?highlight=%28%28----%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryInstallation%5Cb%29 that I need to install either atlas-3dnow-dev or atlas-sse2-dev or atlas-sse3-dev depending on my system. If I have an Athlon X2 processor (running 32-bit Ubuntu), which do I need? I think it is sse2, but I am not completely sure. Thanks, Ryan From ryanlists at gmail.com Tue Mar 13 10:03:59 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 13 Mar 2007 09:03:59 -0500 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: <200702091531.11770.krlong@sandia.gov> References: <200702091522.01125.krlong@sandia.gov> <45CCE67E.1090508@gmail.com> <200702091531.11770.krlong@sandia.gov> Message-ID: I was having the same problem. Did this ever get solved? Thanks, Ryan On 2/9/07, Kevin Long wrote: > > Hi Robert, > > Numpy 1.0b5 and scipy 0.5.2. > > - kevin > > On Friday 09 February 2007 15:24, Robert Kern wrote: > > Kevin Long wrote: > > > Hello, > > > > > > I'm getting an error message about "_num.seterr" when importing scipy. > > > Output is below. > > > > > > python > > > Python 2.4.4 (#1, Feb 9 2007, 14:45:36) > > > [GCC 3.4.6] on linux2 > > > Type "help", "copyright", "credits" or "license" for more information. > > > > > >>>> import scipy > > > > > > Traceback (most recent call last): > > > File "", line 1, in ? > > > File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line > > > 37, in ? > > > _num.seterr(all='ignore') > > > TypeError: seterr() got an unexpected keyword argument 'all' > > > > What versions of numpy and scipy do you have installed? E.g.: > > >>> import numpy > > >>> print numpy.__version__ > > > > 1.0.2.dev3521 > > > > >>> import scipy > > >>> print scipy.__version__ > > > > 0.5.3.dev2620 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Mar 14 00:32:58 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Mar 2007 23:32:58 -0500 Subject: [SciPy-user] difficulties compiling the arpack module from the sandbox In-Reply-To: References: Message-ID: <45F77AFA.5030506@gmail.com> Adam Malinowski wrote: > Hi all, > > I am having a lot of difficulty building the arpack module from the scipy > sandbox, running windows 2000 and using mingw with cygwin. I have compiled > numpy and scipy (latest svn) with no problems, and have sucessfully installed a > number of scipy sandboxed modules by running the command: 'python setup.py > build --compiler=mingw32 install', in the root of the module. That won't work. build doesn't take a --compiler option. Try these (pardon the linebreaks): $ python setup.py build_src build_clib --compiler=mingw32 build_ext --compiler=mingw32 build $ python setup.py install -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Wed Mar 14 00:39:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 Mar 2007 23:39:16 -0500 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: References: <200702091522.01125.krlong@sandia.gov> <45CCE67E.1090508@gmail.com> <200702091531.11770.krlong@sandia.gov> Message-ID: <45F77C74.1090901@gmail.com> Ryan Krauss wrote: > I was having the same problem. Did this ever get solved? The 'all' keyword was introduced after version 1.0b5, the version that Kevin Long was using. That was the problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bldrake at adaptcs.com Tue Mar 13 13:44:00 2007 From: bldrake at adaptcs.com (Barry Drake) Date: Tue, 13 Mar 2007 10:44:00 -0700 (PDT) Subject: [SciPy-user] Structured matrices In-Reply-To: <45F547E9.3050208@iam.uni-stuttgart.de> Message-ID: <759494.19283.qm@web414.biz.mail.mud.yahoo.com> Nils, Does this matrix come from a particular application? I'm working on algorithms for the non-negative matrix factorization (NMF). With this matrix as input I'm getting some very strange results. So I'm curious about potential applications. Cheers! Barry L. Drake GA Tech Nils Wagner wrote: Hi all, I was wondering if the matrix family (see below) has a special name ? And/or is there a way to construct this matrix via special matrices (like Hankel, Toeplitz, etc.) ? [[ 3. 1.] [ 1. 1.]] [[ 5. 3. 1.] [ 3. 3. 1.] [ 1. 1. 1.]] [[ 7. 5. 3. 1.] [ 5. 5. 3. 1.] [ 3. 3. 3. 1.] [ 1. 1. 1. 1.]] [[ 9. 7. 5. 3. 1.] [ 7. 7. 5. 3. 1.] [ 5. 5. 5. 3. 1.] [ 3. 3. 3. 3. 1.] [ 1. 1. 1. 1. 1.]] Nils from scipy import * def M(n): tmp = ones((n,n)) for k in arange(n,0,-1): tmp[:k-1,:k-1] = tmp[:k-1,:k-1] + 2 return tmp for n in arange(2,8): print M(n) print _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Mar 14 00:56:22 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 13 Mar 2007 22:56:22 -0600 Subject: [SciPy-user] dumb install question In-Reply-To: References: Message-ID: On 3/13/07, Ryan Krauss wrote: > I have a dumb question. I am having some troubles getting > scipy/numpy/matplotlib installed in Ubuntu Edgy (I have done it from > source in Breezy and used Andrew Straw's packages for Dapper), and I > read on http://scipy.org/Installing_SciPy/Linux?highlight=%28%28----%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryInstallation%5Cb%29 > that I need to install either atlas-3dnow-dev or atlas-sse2-dev or > atlas-sse3-dev depending on my system. If I have an Athlon X2 > processor (running 32-bit Ubuntu), which do I need? I think it is > sse2, but I am not completely sure. Yes (sse2). cat /proc/cpuinfo for the gory details. cheers, f From philfarm at fastmail.fm Mon Mar 12 23:57:52 2007 From: philfarm at fastmail.fm (philfarm at fastmail.fm) Date: Mon, 12 Mar 2007 22:57:52 -0500 Subject: [SciPy-user] Best way to structure dynamic system-of-systems simulations? 
In-Reply-To: <1172896076.28906.1177490111@webmail.messagingengine.com> References: <1172896076.28906.1177490111@webmail.messagingengine.com> Message-ID: <1173758272.25956.1179135235@webmail.messagingengine.com> I'll try again: Can anyone offer any direction for me? I've looked a bit at PyDSTool, but it strikes me (perhaps unjustifiably) as being rather immature. Conversely, if anyone could confirm that this is an area of need, then I'll go off and work on this myself. If I manage to come up with something workable, I'll offer it back to the community. Thanks, Phil On Fri, 02 Mar 2007 22:27:56 -0600, philfarm at fastmail.fm said: > I'm looking for any pointers on the best way to to structure the > simulation of a system composed of subsystems, i.e., I want to build my > system model as an assemblage of component models, where some components > may use as U values the Y values from other components. Each of the > components can be modeled in state-space form, i.e.: > > X_dot = A(X,U,t) > Y = B(X,U,t) > > where A and B are matrix functions (not necessarily linear). > > Is anyone using any sort of framework to simplify this task? What's the > best approach? > > If it makes any difference, I've been using odeint. > > Thanks, > Phil > > -- > http://www.fastmail.fm - And now for something completely different > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- http://www.fastmail.fm - One of many happy users: http://www.fastmail.fm/docs/quotes.html From bratona at yahoo.co.uk Wed Mar 14 01:33:37 2007 From: bratona at yahoo.co.uk (Adam Malinowski) Date: Wed, 14 Mar 2007 05:33:37 +0000 (UTC) Subject: [SciPy-user] difficulties compiling the arpack module from the sandbox References: <45F77AFA.5030506@gmail.com> Message-ID: Robert Kern gmail.com> writes: > > Adam Malinowski wrote: > > Hi all, > > > > I am having a lot of difficulty building the arpack module from the scipy > > sandbox, running windows 2000 and using mingw with cygwin. I have compiled > > numpy and scipy (latest svn) with no problems, and have sucessfully installed a > > number of scipy sandboxed modules by running the command: 'python setup.py > > build --compiler=mingw32 install', in the root of the module. > > That won't work. build doesn't take a --compiler option. Try these (pardon the > linebreaks): > > $ python setup.py build_src build_clib --compiler=mingw32 build_ext > --compiler=mingw32 build > $ python setup.py install > Many thanks for your help Robert, though I appear to have solved the problem myself. To my embarrassment, the problem seems to be simply that somewhere the path to the *.h files is incorrect (as indicated in the error message "Unable to open INCLUDE file"). To solve the problem I copied the files into the arpack root directory. Not ideal, since I don't understand what I should have done, but it worked! Also, the build command did work with "--compiler=mingw32", as I used it successfully before seeing your reply, I guess it knows to pass it to build_clib. Thanks again, Adam From lechtlr at yahoo.com Tue Mar 13 14:46:44 2007 From: lechtlr at yahoo.com (lechtlr) Date: Tue, 13 Mar 2007 11:46:44 -0700 (PDT) Subject: [SciPy-user] Using SciPy/NumPy optimization Message-ID: <971382.30683.qm@web57911.mail.re3.yahoo.com> Hi there: I am looking for a non-linear, constrained optimization tool, and thought fmin_tnc would do the job that I wanted to do. 
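(For reference, a minimal bound-constrained call looks roughly like the sketch below; the toy objective and bounds are invented for illustration, and the three return values are unpacked the same way as in the script later in this thread, an ordering that has changed in later scipy releases, so check the docstring of the installed version.)

import numpy as np
from scipy.optimize import fmin_tnc

def objective(x):
    # toy quadratic with its minimum at (1.0, 2.0)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

x0 = np.array([5.0, 5.0])
bounds = [(0.0, 10.0), (0.0, 10.0)]   # box constraints on each variable

# approx_grad=True makes fmin_tnc estimate the gradient by finite differences
retcode, nfeval, xopt = fmin_tnc(objective, x0, approx_grad=True, bounds=bounds)
print 'return code:', retcode
print 'function evaluations:', nfeval
print 'solution:', xopt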
As a starter, I tried the attached script posted here recently. However, I get the following error when I run this script: "TypeError: only length-1 arrays can be converted to Python scalars" Can any one help me to figure out what I was doing wrong ? Thanks, Lex >>> Traceback (most recent call last): File "C:\Python24\lib\site-packages\scipy\optimize\tnc.py", line 200, in fmin_tnc fmin, ftol, rescale) File "C:\Python24\lib\site-packages\scipy\optimize\tnc.py", line 165, in func_and_grad g = approx_fprime(x, func, epsilon, *args) File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line 555, in approx_fprime grad[k] = (apply(f,(xk+ei,)+args) - f0)/epsilon TypeError: only length-1 arrays can be converted to Python scalars >>> from numpy import * from scipy.optimize import fmin_tnc class LossFunction(object): def __init__(self, x, y): self.x = x self.y = y def __call__(self, abc): """ A function suitable for passing to the fmin() minimizers. """ a, b, c = abc y = a*(1.0 + b*c*self.x) ** (-1.0/b) dy = self.y - y return dy*dy x = array([1502.0, 1513.7,1517.5,1545.5,1578.9,1587.3,1600.4,1636.1,1682.9,1697.6,1813.4,1907.5]) y = array([0.28,0.22,0.26,0.18,0.12,0.13,0.09,0.07,0.06,0.05,0.01,0.01]) lf = LossFunction(x, y) abc0 = [10., 2.5, 0.0] retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0, approx_grad=True, bounds=None, epsilon=1e-008) --------------------------------- Don't pick lemons. See all the new 2007 cars at Yahoo! Autos. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelle.feringa at ezct.net Tue Mar 13 10:34:48 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 13 Mar 2007 15:34:48 +0100 Subject: [SciPy-user] tvtk GL2PS In-Reply-To: <17903.37463.446065.89579@gargle.gargle.HOWL> References: <001501c75fe9$942463e0$c000a8c0@JELLE> <17903.37463.446065.89579@gargle.gargle.HOWL> Message-ID: <37fac3b90703130734q4a6e5d33m3c710f0fe6aaccc6@mail.gmail.com> That would be major progress! Having mayavi/tvtk being able to produce postscripts (ps/eps/pdf) output would be terrific! It's a very important feauture since outputting for hardcopy is such a common operation. # bryce, would you be so kind to signal me if the vtk egg is updated? much appreciated! cheers, -jelle On 3/8/07, Prabhu Ramachandran wrote: > > >>>>> "Jelle" == Jelle Feringa / EZCT Architecture & Design Research < > jelle.feringa at ezct.net> writes: > > Jelle> Hi, Currently I'm using the enthon (win32, 2.4) 1.0.0 > Jelle> release, which is terrific! > > Thanks to Bryce and the enthought build team (or is that a one man > army?) for all their work! > > Jelle> Though there is a minor, but to me important glitch in > Jelle> (t)vtk; > [...] > Jelle> Saving as a vector PS/EPS/PDF/TeX file using GL2PS is > Jelle> either not supported by your version of VTK or you have not > Jelle> configured VTK to work with GL2PS -- read the documentation > Jelle> for the vtkGL2PSExporter class. > > Jelle> Which is really too bad. > > This would require Bryce to build VTK with the VTK_USE_GL2PS flag set > to on for the next release. There are no other dependencies that this > brings up. > > cheers, > prabhu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nauss at lcrs.de Wed Mar 14 02:51:36 2007 From: nauss at lcrs.de (Thomas Nauss) Date: Wed, 14 Mar 2007 07:51:36 +0100 Subject: [SciPy-user] Sort 3-D arrays by two columns Message-ID: <45F79B78.4070806@lcrs.de> Dear Experts, I want plot some satellite datasets using matplotlib with basemap. For that task, I need a 3-D array (or three 1-D arrays) containing latitude, longitude, and corresponding data values. Since basemap needs latitude and longitude values in increasing order, I'm looking for a function to sort the 3-D array by the two lat/lon columns (i.e. first by latitude, then by longitude but conserving the value combinations). Thanks, Thomas From robert.kern at gmail.com Wed Mar 14 03:00:57 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Mar 2007 02:00:57 -0500 Subject: [SciPy-user] Using SciPy/NumPy optimization In-Reply-To: <971382.30683.qm@web57911.mail.re3.yahoo.com> References: <971382.30683.qm@web57911.mail.re3.yahoo.com> Message-ID: <45F79DA9.1020005@gmail.com> lechtlr wrote: > Hi there: > > I am looking for a non-linear, constrained optimization tool, and thought fmin_tnc would do the job that I wanted to do. > As a starter, I tried the attached script posted here recently. However, I get the following error when I run this script: > "TypeError: only length-1 arrays can be converted to Python scalars" > Can any one help me to figure out what I was doing wrong ? > def __call__(self, abc): > """ A function suitable for passing to the fmin() minimizers. > """ > a, b, c = abc > y = a*(1.0 + b*c*self.x) ** (-1.0/b) > dy = self.y - y > return dy*dy Sorry, I screwed up this example. The function needs to return a scalar. return dot(dy, dy) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Wed Mar 14 03:09:22 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 14 Mar 2007 16:09:22 +0900 Subject: [SciPy-user] Sort 3-D arrays by two columns In-Reply-To: <45F79B78.4070806@lcrs.de> References: <45F79B78.4070806@lcrs.de> Message-ID: Does lexsort do what you need? Also, have you seen the numpy examples list? It's a good place to look for this sort of thing: http://www.scipy.org/Numpy_Example_List http://www.scipy.org/Numpy_Example_List_With_Doc --bb On 3/14/07, Thomas Nauss wrote: > Dear Experts, > I want plot some satellite datasets using matplotlib with basemap. For > that task, I need a 3-D array (or three 1-D arrays) containing latitude, > longitude, and corresponding data values. Since basemap needs latitude > and longitude values in increasing order, I'm looking for a function to > sort the 3-D array by the two lat/lon columns (i.e. first by latitude, > then by longitude but conserving the value combinations). > Thanks, > Thomas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Wed Mar 14 03:31:06 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Mar 2007 08:31:06 +0100 Subject: [SciPy-user] Structured matrices In-Reply-To: <759494.19283.qm@web414.biz.mail.mud.yahoo.com> References: <759494.19283.qm@web414.biz.mail.mud.yahoo.com> Message-ID: <45F7A4BA.7070302@iam.uni-stuttgart.de> Barry Drake wrote: > Nils, > Does this matrix come from a particular application? 
I'm working on > algorithms for the non-negative matrix factorization (NMF). With this > matrix as input I'm getting some very strange results. So I'm curious > about potential applications. > > Cheers! > Barry L. Drake > GA Tech > Barry, Yes indeed. The mass matrix of a multi-link pendulum (provided that the equations of motion are linearized) exhibits this structure. What do you mean by strange results ? Here are some references @article{Lobas, author={Lobas, L. G.}, title={Generalized mathematical model of an inverted multilink pendulum with follower force}, journal={International Applied Mechanics}, volume={41}, pages={566-572}, year={2005}} @article{Gallina, author={Gallina, P.}, title={About the stability of non-conservative undamped systems}, journal={Journal of Sound and Vibration}, volume={262}, pages={977-988}, year={2003}} @article{Gasparini, author={Gasparini, A.~M. and Saetta, A.~V. and Vitaliani, R.~V.}, title={On the stability and instability regions of non-conservative continuous system under partially follower forces}, journal={Computer Methods in Applied Mechanics and Engineering}, pages={63-78}, volume={124}, year={1995}} I look forward to hearing from you. Cheers, Nils > */Nils Wagner/* wrote: > > Hi all, > > I was wondering if the matrix family (see below) has a special name ? > And/or is there a way to construct this matrix via special matrices > (like Hankel, Toeplitz, etc.) ? > > > [[ 3. 1.] > [ 1. 1.]] > > [[ 5. 3. 1.] > [ 3. 3. 1.] > [ 1. 1. 1.]] > > [[ 7. 5. 3. 1.] > [ 5. 5. 3. 1.] > [ 3. 3. 3. 1.] > [ 1. 1. 1. 1.]] > > [[ 9. 7. 5. 3. 1.] > [ 7. 7. 5. 3. 1.] > [ 5. 5. 5. 3. 1.] > [ 3. 3. 3. 3. 1.] > [ 1. 1. 1. 1. 1.]] > > Nils > > from scipy import * > > def M(n): > > tmp = ones((n,n)) > for k in arange(n,0,-1): > tmp[:k-1,:k-1] = tmp[:k-1,:k-1] + 2 > return tmp > > for n in arange(2,8): > print M(n) > print > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Wed Mar 14 04:05:32 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Mar 2007 09:05:32 +0100 Subject: [SciPy-user] Structured matrices In-Reply-To: References: <45F547E9.3050208@iam.uni-stuttgart.de> Message-ID: <45F7ACCC.8060300@iam.uni-stuttgart.de> Anne Archibald wrote: > On 12/03/07, Nils Wagner wrote: > >> Hi all, >> >> I was wondering if the matrix family (see below) has a special name ? >> And/or is there a way to construct this matrix via special matrices >> (like Hankel, Toeplitz, etc.) ? >> > > Well, I've never come across them before, but they're easy to make directly: > > In [8]: minimum.outer(r_[7:-1:-2],r_[7:-1:-2]) > Out[8]: > array([[7, 5, 3, 1], > [5, 5, 3, 1], > [3, 3, 3, 1], > [1, 1, 1, 1]]) > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I like one-liners. Thank you very much ! 
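(For a general n, the one-liner above can be wrapped in a small helper; the function name below is only a label borrowed from the pendulum application mentioned earlier, and the result differs from the float-valued M(n) in the original post only in dtype.)

import numpy as np

def pendulum_mass_pattern(n):
    # odd integers 2*n-1, 2*n-3, ..., 3, 1
    v = np.arange(2*n - 1, 0, -2)
    # entry (i, j) is min(v[i], v[j]), the same pattern as M(n)
    return np.minimum.outer(v, v)

print pendulum_mass_pattern(4)
# [[7 5 3 1]
#  [5 5 3 1]
#  [3 3 3 1]
#  [1 1 1 1]]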
Nils From pgmdevlist at gmail.com Mon Mar 12 16:13:31 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 12 Mar 2007 16:13:31 -0400 Subject: [SciPy-user] SVN: sandbox/pyloess Message-ID: <200703121613.32441.pgmdevlist@gmail.com> All, A few weeks ago, an implementation of loess was requested on this list. I just uploaded a beta version in the 'pyloess' directory of the sandbox. There are actually one interface for two modules: lowess, the univariate locally weighted linear estimation, and STL, the seasonal/trend decomposition. The modules are just f2py wrappers of the initial fortran sources (converted to double precision). I'd be happy to get your feedback, as usual. I've been also trying to port the loess algorithm (a more generic version of lowess that can accept up to 8 variables). However, I've been hitting a wall on this one. The initial code is in C, with regular calls to fortran subroutines. The base object is a nested structure, and the operations are performed on this nested structure. Initially, I tried to code the nested structure in python directly, so that I would just have to call the C/fortran routines, but that didn't really work (the results from some tests didn't match the ones I was getting from the C code directly). I then tried to use SWIG, before realizing I was spending most of my time writing extensions. Eventually, I decided to stick to Pyrex. I wrote as many classes as sub-structure to get access to the underlying objects, plus an extra class for the combination of the substructures. The result is so far fairly disappointing, as the final object doesn't even get initialized properly. After one week of this "going-nowhere-fast", I'm giving up temporarily. Nevertheless, I uploaded this prototype in a 'sandbox' subfolder of the package, hoping that a kind soul would provide me with some pointers about the generic approach and some ideas about why I segfault all the time (I'm thinking about the pyrex specialists among us...). Thanks a lot in advance for your comments and feedback. Pierre From cimrman3 at ntc.zcu.cz Wed Mar 14 09:21:25 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 14 Mar 2007 14:21:25 +0100 Subject: [SciPy-user] sparse generalized eigenvalue solver Message-ID: <45F7F6D5.2070500@ntc.zcu.cz> Hi all, I need to solve a problem of the type D * u = nu * M * u, nu .. eigenvalues, u .. eigenvectors where M = B * A^-1 B^T, the matrices come from a linear system of the following structure: A * u - B^T * d/dt p = 0 B * u + D * p = 0 For curious, A is a linear elasticity matrix, B^T, B discrete grad, div operators and D a diffusion matrix; it is related to modeling deformable porous medium. I need all the eigenvalues/eigenvectors. M cannot be formed explicitely in real situations, it will be available via its 'matvec' action. All matrices are stored in the CSR format. I am aware of the ARPACK being included in the scipy.sandbox - what is its status? What other software (Python-ready or not) I should look at? Thanks, r. From rshepard at appl-ecosys.com Wed Mar 14 09:28:33 2007 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 14 Mar 2007 06:28:33 -0700 (PDT) Subject: [SciPy-user] sparse generalized eigenvalue solver In-Reply-To: <45F7F6D5.2070500@ntc.zcu.cz> References: <45F7F6D5.2070500@ntc.zcu.cz> Message-ID: On Wed, 14 Mar 2007, Robert Cimrman wrote: > I need all the eigenvalues/eigenvectors. M cannot be formed explicitely in > real situations, it will be available via its 'matvec' action. 
All > matrices are stored in the CSR format. Have you looked at eig() and found it wanting? Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From nwagner at iam.uni-stuttgart.de Wed Mar 14 09:34:27 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Mar 2007 14:34:27 +0100 Subject: [SciPy-user] sparse generalized eigenvalue solver In-Reply-To: References: <45F7F6D5.2070500@ntc.zcu.cz> Message-ID: <45F7F9E3.4090009@iam.uni-stuttgart.de> Rich Shepard wrote: > On Wed, 14 Mar 2007, Robert Cimrman wrote: > > >> I need all the eigenvalues/eigenvectors. M cannot be formed explicitely in >> real situations, it will be available via its 'matvec' action. All >> matrices are stored in the CSR format. >> > > Have you looked at eig() and found it wanting? > > Rich > > eig is for dense matrices. Nils From nwagner at iam.uni-stuttgart.de Wed Mar 14 09:37:51 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Mar 2007 14:37:51 +0100 Subject: [SciPy-user] sparse generalized eigenvalue solver In-Reply-To: <45F7F6D5.2070500@ntc.zcu.cz> References: <45F7F6D5.2070500@ntc.zcu.cz> Message-ID: <45F7FAAF.8090902@iam.uni-stuttgart.de> Robert Cimrman wrote: > Hi all, > > I need to solve a problem of the type > > D * u = nu * M * u, nu .. eigenvalues, u .. eigenvectors > > where M = B * A^-1 B^T, the matrices come from a linear system of the > following structure: > > A * u - B^T * d/dt p = 0 > B * u + D * p = 0 > > For curious, A is a linear elasticity matrix, B^T, B discrete grad, div > operators and D a diffusion matrix; it is related to modeling deformable > porous medium. > > I need all the eigenvalues/eigenvectors. M cannot be formed explicitely > in real situations, it will be available via its 'matvec' action. All > matrices are stored in the CSR format. > > I am aware of the ARPACK being included in the scipy.sandbox - what is > its status? What other software (Python-ready or not) I should look at? > > Thanks, > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > LOBPCG looks promising. I have filed a ticket http://projects.scipy.org/scipy/scipy/ticket/373 Trilinos includes Anasazi. Unfortunately Pytrilinos doesn't support Anasazi right now. http://software.sandia.gov/trilinos/packages/anasazi/ http://www.sandia.gov/~rblehou/*anasazi*-toms.pdf There is SLEPc which is based on PETSc. http://www.grycap.upv.es/slepc/ Cheers, Nils From sgarcia at olfac.univ-lyon1.fr Wed Mar 14 09:52:12 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 14 Mar 2007 14:52:12 +0100 Subject: [SciPy-user] stats.kstest Message-ID: <45F7FE0C.2010501@olfac.univ-lyon1.fr> Hi list, stats.kstest permit to compare the distribution of a sample with a theorical distribution. In matlab kstest exists to but there is also kstest2 that permit to compare the distribution of two samples. Is there a equivalent in scipy ? Thanks Sam -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. 
CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logisique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From sgarcia at olfac.univ-lyon1.fr Wed Mar 14 09:58:13 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Wed, 14 Mar 2007 14:58:13 +0100 Subject: [SciPy-user] stats.kstest In-Reply-To: <45F7FE0C.2010501@olfac.univ-lyon1.fr> References: <45F7FE0C.2010501@olfac.univ-lyon1.fr> Message-ID: <45F7FF75.9020408@olfac.univ-lyon1.fr> I find it : stats.ks_2samp ! Samuel GARCIA wrote: > Hi list, > > stats.kstest permit to compare the distribution of a sample with a > theorical distribution. > > In matlab kstest exists to but there is also kstest2 that permit to > compare the distribution of two samples. > Is there a equivalent in scipy ? > > Thanks > > Sam > > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logisique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From olivetti at itc.it Wed Mar 14 10:17:25 2007 From: olivetti at itc.it (Emanuele Olivetti) Date: Wed, 14 Mar 2007 15:17:25 +0100 Subject: [SciPy-user] clustering using custom distance Message-ID: <45F803F5.20804@itc.it> Hi, I'm approaching clustering module in scipy but as far as I can see there is only one distance measure available (euclidean distance). Is there a way to easy embed a different distance measure between samples? Something like correlation or in general custum functions I mean. Thanks in advance, Emanuele ------------------ ITC -> dall'1 marzo 2007 Fondazione Bruno Kessler ITC -> since 1 March 2007 Fondazione Bruno Kessler ------------------ From zpincus at stanford.edu Wed Mar 14 10:31:50 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 14 Mar 2007 09:31:50 -0500 Subject: [SciPy-user] problems with ndimage spline filtering Message-ID: <1755A525-A801-4F5A-8D68-64C93BCE88DD@stanford.edu> Hello folks, I've started to use the ndimage package in scipy, and had noticed that I got really ugly results from image interpolation with spline orders greater than 1. I tracked this down to the 'pre-filtering' step with scipy.ndimage.spline_filter, which is (I presume) supposed to smooth out the image signal so that splines with large support can be used for interpolation. Unfortunately, this function doesn't smooth anything; instead it appears to introduce very strong ringing artifacts (and not much else). I've attached example images, filtered with orders 2, 3, and 4. The ringing is extremely pronounced on the piecewise-constant image (input2.png), but even for natural, continuous tone images (e.g. input1.png) there is a big problem. Here are examples: http://www.stanford.edu/~zpincus/spline.html Note that in all cases, the image was processed as 64-bit floating point before clamping to 8-bit for output, so this isn't likely to be a precision problem. the filter command I used was: >>> filtered = scipy.ndimage.spline_filter(input_array, order=n) where n varied. Is this the expected results? Does anyone have any idea what's going on here? 
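(For what it's worth, spline_filter is meant as a pre-filtering step for the ndimage geometry routines, which otherwise apply the same filtering internally; the sketch below only illustrates that calling pattern, with a made-up array and sample coordinates, and is not a diagnosis of the artifacts.)

import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64)          # stand-in for a real image
coords = np.indices((64, 64)) * 0.5     # fractional sample positions, shape (2, 64, 64)

# let map_coordinates pre-filter internally (the default, prefilter=True)
out1 = ndimage.map_coordinates(image, coords, order=3)

# or pre-filter once and tell map_coordinates not to repeat it; the array
# returned by spline_filter holds spline coefficients for the interpolator
# rather than a smoothed image meant to be viewed directly
coeffs = ndimage.spline_filter(image, order=3)
out2 = ndimage.map_coordinates(coeffs, coords, order=3, prefilter=False)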
Thanks, Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine From rhc28 at cornell.edu Wed Mar 14 10:47:41 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 14 Mar 2007 10:47:41 -0400 Subject: [SciPy-user] Best way to structure dynamic system-of-systems simulations? In-Reply-To: <1173758272.25956.1179135235@webmail.messagingengine.com> References: <1172896076.28906.1177490111@webmail.messagingengine.com> <1173758272.25956.1179135235@webmail.messagingengine.com> Message-ID: Phil, I'd happily give you some direction to get this working in PyDSTool, which is the only python framework I know of that will give you this kind of functionality. It will certainly run a lot faster than odeint. Before branching out on your own, perhaps you'd consider talking to me about solving your problem with our code, so that you can give us constructive feedback about it. If people make the effort to communicate with us we can speed up the maturation process. You can offer us code too if you want to get more involved. We provide a couple of example scripts for systems built from modular components in the way you describe (PyDSTool/tests/ModelSpec_test.py and CIN.py). Rob From cimrman3 at ntc.zcu.cz Wed Mar 14 11:18:59 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 14 Mar 2007 16:18:59 +0100 Subject: [SciPy-user] sparse generalized eigenvalue solver In-Reply-To: <45F7FAAF.8090902@iam.uni-stuttgart.de> References: <45F7F6D5.2070500@ntc.zcu.cz> <45F7FAAF.8090902@iam.uni-stuttgart.de> Message-ID: <45F81263.8050006@ntc.zcu.cz> Nils Wagner wrote: > > LOBPCG looks promising. > I have filed a ticket > http://projects.scipy.org/scipy/scipy/ticket/373 > > Trilinos includes Anasazi. Unfortunately Pytrilinos doesn't support > Anasazi right now. > http://software.sandia.gov/trilinos/packages/anasazi/ > > http://www.sandia.gov/~rblehou/*anasazi*-toms.pdf > > There is SLEPc which is based on PETSc. > http://www.grycap.upv.es/slepc/ Thanks for the links! LOBPCG looks small enough to be easily wrapped for Python. cheers, r. From nwagner at iam.uni-stuttgart.de Wed Mar 14 11:25:42 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 14 Mar 2007 16:25:42 +0100 Subject: [SciPy-user] sparse generalized eigenvalue solver In-Reply-To: <45F81263.8050006@ntc.zcu.cz> References: <45F7F6D5.2070500@ntc.zcu.cz> <45F7FAAF.8090902@iam.uni-stuttgart.de> <45F81263.8050006@ntc.zcu.cz> Message-ID: <45F813F6.1010308@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> LOBPCG looks promising. >> I have filed a ticket >> http://projects.scipy.org/scipy/scipy/ticket/373 >> >> Trilinos includes Anasazi. Unfortunately Pytrilinos doesn't support >> Anasazi right now. >> http://software.sandia.gov/trilinos/packages/anasazi/ >> >> http://www.sandia.gov/~rblehou/*anasazi*-toms.pdf >> >> There is SLEPc which is based on PETSc. >> http://www.grycap.upv.es/slepc/ >> > > Thanks for the links! LOBPCG looks small enough to be easily wrapped for > Python. > > cheers, > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Would you take the trouble and add LOBPCG to the sandbox ;-) ? Cheers, Nils From steveire at gmail.com Wed Mar 14 11:33:44 2007 From: steveire at gmail.com (Stephen Kelly) Date: Wed, 14 Mar 2007 15:33:44 +0000 Subject: [SciPy-user] Arrayfns in numpy? 
In-Reply-To: <45EEF5A5.6080202@ee.byu.edu> References: <18fbbe5a0703051224i5eba702ey8b9ee3b51da8b528@mail.gmail.com> <20070305205644.GA18701@virginia.edu> <18fbbe5a0703060254v4dd06c17td5d266a1a772ae1e@mail.gmail.com> <45ED948A.2040708@ee.byu.edu> <18fbbe5a0703060825nd8b9783ib2e9eea18da41645@mail.gmail.com> <45ED9A2F.3050803@ee.byu.edu> <45EED6DF.9020502@american.edu> <45EEF5A5.6080202@ee.byu.edu> Message-ID: <18fbbe5a0703140833o4cc196a4pa653439f60a6bd4e@mail.gmail.com> Interesting discussion. I'd obviously prefer that the copy/paste takes place, but I'd like to know what's going to happen now that the discussion has taken place. On 3/7/07, Travis Oliphant wrote: > Alan Isaac wrote: > > >Matthieu Brucher wrote: > > > > > >>I'm opposed to this as well, interpolation is not linear algebra, it is > >>signal processing, and as Souheil said, why not nd-interpolation with > >>B-Splines then ? > >> > >> > > > >I am not taking sides on this issue, but I do > >take issue with trying to settle it with this > >kind of "argument". A rhetorical question does > >not set out the issues. For example, treating > >a rhetorical question as a real question for a > >moment, one might respond that a simple 1-d > >interpolation would > >- find wide use among people who need nothing more > >- enhance backward compatibility > >- impose minimum maintenance requirements > > > >Again, I am not taking sides on the issue, so I > >am in no way suggesting that such points are > >decisive. However opposition that fails to > >address such points is hardly decisive either. > > > >My point of reference would be: if it is likely > >to drain even modest developer effort away from > >core NumPy issues, then it is harmful. Otherwise > >it is harmless as long as it introduces no new > >dependencies. > > > > > This is an impoirtant point for me. I'm only considering adding interp > because it's already done and just a matter of cut-and-paste. > > My take on this, is that I would rather use developer time to make scipy > more modular and easier to install (this usually translates to binary > builds for multiple platforms). That way you could grab just the > modules of interest. Ideally, grabbing these modules is a matter of > using setuptools (or some other package management concept) to do it > automatically as needed. > > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Wed Mar 14 11:45:58 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 14 Mar 2007 10:45:58 -0500 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: <45F77C74.1090901@gmail.com> References: <200702091522.01125.krlong@sandia.gov> <45CCE67E.1090508@gmail.com> <200702091531.11770.krlong@sandia.gov> <45F77C74.1090901@gmail.com> Message-ID: So is this a manifestation of trying to run versions of numpy and scipy that don't go together? On 3/13/07, Robert Kern wrote: > Ryan Krauss wrote: > > I was having the same problem. Did this ever get solved? > > The 'all' keyword was introduced after version 1.0b5, the version that Kevin > Long was using. That was the problem. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From anand at soe.ucsc.edu Wed Mar 14 12:06:58 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Wed, 14 Mar 2007 09:06:58 -0700 Subject: [SciPy-user] Structured matrices In-Reply-To: References: Message-ID: <45F81DA2.2000201@cse.ucsc.edu> Nils, Barry, That matrix is also the covariance of Brownian motion evaluated at times [9, 7, 5, 3, 1]. The inverse matrix bears a strong resemblance to the numerical second derivative operator. This makes sense because the second derivative in x of min(x, \xi) is a delta function concentrated at \xi, but there's probably a more important connection that I don't know about. Maybe this helps explain the strange results? I found some related refs, but haven't looked at them: http://citeseer.ist.psu.edu/347057.html, http://adsabs.harvard.edu/abs/2003EAEJA.....2001X . Cheers, Anand -------------------------- Nils, Does this matrix come from a particular application? I'm working on algorithms for the non-negative matrix factorization (NMF). With this matrix as input I'm getting some very strange results. So I'm curious about potential applications. Cheers! Barry L. Drake GA Tech Nils Wagner wrote: Hi all, I was wondering if the matrix family (see below) has a special name ? And/or is there a way to construct this matrix via special matrices (like Hankel, Toeplitz, etc.) ? [[ 3. 1.] [ 1. 1.]] [[ 5. 3. 1.] [ 3. 3. 1.] [ 1. 1. 1.]] [[ 7. 5. 3. 1.] [ 5. 5. 3. 1.] [ 3. 3. 3. 1.] [ 1. 1. 1. 1.]] [[ 9. 7. 5. 3. 1.] [ 7. 7. 5. 3. 1.] [ 5. 5. 5. 3. 1.] [ 3. 3. 3. 3. 1.] [ 1. 1. 1. 1. 1.]] Nils From lucas.barcelos at gmail.com Wed Mar 14 13:08:14 2007 From: lucas.barcelos at gmail.com (Lucas Barcelos de Oliveira) Date: Wed, 14 Mar 2007 14:08:14 -0300 Subject: [SciPy-user] Help building CVXOPT for W32/Python 2.3 Message-ID: Hello all, It's been a week now since I started trying to build cvxopt for python 2.3under Win XP with no success. I've build ATLAS using Cygwin (followed the instructions in http://www.scipy.org/Installing_SciPy/Windows) and manage to create the lib files, but when I try to compile, using the command /cygdrive/c/Python23/python setup.py config --compiler=cygwin build --compiler=cygwin, I get a lot undefined references to several functions: C:\cygwin\bin\gcc.exe -mcygwin -shared -s build\temp.win32- 2.3\Release\c\base.o build\temp.win32-2.3\Release\c\dense.obuild\temp.win32- 2.3\Release\c\sparse.o build\temp.win32-2.3\Release\c\base.def-L/cygdrive/c/BLAS-LAPACK/ -Lc:\Python23\libs -Lc: \Python23\PCBuild -lm -llapack -lblas -lg2c -lpython23 -o build\lib.win32- 2.3\cvxopt\base.pyd build\temp.win32-2.3\Release\c\base.o:base.c:(.text+0x587): undefined reference to `_dscal' build\temp.win32-2.3\Release\c\base.o:base.c:(.text+0x62f): undefined reference to `_zscal' ... build\temp.win32-2.3\Release\c\dense.o:dense.c:(.text+0x57cb): undefined reference to `_cpow' build\temp.win32-2.3\Release\c\dense.o:dense.c:(.text+0x5914): undefined reference to `__assert' ... 
build\temp.win32-2.3\Release\c\sparse.o:sparse.c:(.text+0xf8b6): undefined reference to `__assert' collect2: ld returned 1 exit status Since I couldn't find a solution for this, I tried compiling with Borland C++ compiler, here is the result; C:\cvxopt-0.8.2\src>python setup.py build --compiler=bcpp running build running build_py running build_ext building 'base' extension C:\Borland\BCC55\Bin\bcc32.exe -c /tWM /O2 /q /g0 -IC:\Python23\include -IC:\Python23\PC -obuild\temp.win32-2.3\Release\c\dense.obj C\dense.c C\dense.c: Fatal F1003 c:\Borland\Bcc55\include\stdcomp.h 5: Error directive: Must use C++ for STDCOMP.H *** 1 errors in Compile *** error: command 'bcc32.exe' failed with exit status 1 My next attempt was to use Visual Studio 6. The result: C:\cvxopt-0.8.2\src>python setup.py build running build running build_py running build_ext building 'base' extension C:\Program Files\Microsoft Visual Studio\VC98\BIN\cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -IC:\Python23\include -IC:\Python23\PC /TcC/sparse.c /Fobuild\temp.win32-2.3\Release\C/sparse.obj sparse.c C/cvxopt.h(13) : fatal error C1083: Cannot open include file: 'complex.h': No such file or directory error: command '"C:\Program Files\Microsoft Visual Studio\VC98\BIN\cl.exe"' failed with exit status 2 I've checked MSVC6 include dir and there really isn't a complex.h file, but there is a COMPLEX file, if I rename it to COMPLEX.H i get the error: C:\Program Files\Microsoft Visual Studio\VC98\INCLUDE\eh.h(32) : fatal error C1189: #error : "eh.h is only for C++!" error: command '"C:\Program Files\Microsoft Visual Studio\VC98\BIN\cl.exe"' failed with exit status 2 So I am pretty lost and desperate, I wish I could upgrade my Python version to use a win32 installer but I am stuck at 2.3 because of the traffic simulator I use (AIMSUN). Any help will be really appreciated! Best regards, -- ---------------------------------------------------------- Lucas Barcelos de Oliveira P?s Controle e Automa??o - UFSC lucas.barcelos at gmail.com ---------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Mar 14 13:38:22 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Mar 2007 12:38:22 -0500 Subject: [SciPy-user] problem with _num.seterr when importing scipy In-Reply-To: References: <200702091522.01125.krlong@sandia.gov> <45CCE67E.1090508@gmail.com> <200702091531.11770.krlong@sandia.gov> <45F77C74.1090901@gmail.com> Message-ID: <45F8330E.9070603@gmail.com> Ryan Krauss wrote: > So is this a manifestation of trying to run versions of numpy and > scipy that don't go together? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Mar 14 13:40:28 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Mar 2007 12:40:28 -0500 Subject: [SciPy-user] Help building CVXOPT for W32/Python 2.3 In-Reply-To: References: Message-ID: <45F8338C.9050108@gmail.com> Lucas Barcelos de Oliveira wrote: > Hello all, > > It's been a week now since I started trying to build cvxopt for python > 2.3 under Win XP with no success. CVXOPT is not a scipy project, and I don't know of anyone here who uses it. I recommend asking the authors for help. 
http://www.ee.ucla.edu/~vandenbe/cvxopt/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lechtlr at yahoo.com Wed Mar 14 13:52:08 2007 From: lechtlr at yahoo.com (lechtlr) Date: Wed, 14 Mar 2007 10:52:08 -0700 (PDT) Subject: [SciPy-user] Using SciPy/NumPy optimization Message-ID: <929095.44776.qm@web57903.mail.re3.yahoo.com> Robert: Thanks for the fmin_tnc example and it's working now. However, I have trouble getting optimal parameters for the model. I have tied varies options from 'fmin_tnc', and nothing seems to work so far. I would appreciate, if you can give some clues to get optimal values. I have attached my python script. Thanks, Lex from numpy import * from scipy.optimize import fmin_tnc class LossFunction(object): def __init__(self, x, y): self.x = x self.y = y def __call__(self, abc): """ A function suitable for passing to the fmin() minimizers. """ a, b, c = abc y = a*(1.0 + b*c*self.x) ** (-1.0/b) dy = self.y - y return dot(dy,dy) #Generating y for given set of x a_true = 100.0 b_true = 1.0 c_true = 10.0 T = range(1000, 2500, 10) data = zeros([len(T), 2], 'd') for m in range(len(T)): error = rand(1)/T[m] r = a_true*(1.0 + b_true*c_true*T[m])**(-1.0/b_true) + error data[m][0] = T[m] data[m][1] = r x = data[:,0] y = data[:,1] print 'numer of data points:', len(T) lf = LossFunction(x, y) abc0 = [10.0, 1.0, 5.0] retcode, nfeval, abc_optimal = fmin_tnc(lf, abc0, bounds=None, approx_grad=True) print 'retcode:', retcode print 'nfeval:', nfeval print 'Optimal Parameters:', abc_optimal --------------------------------- Expecting? Get great news right away with email Auto-Check. Try the Yahoo! Mail Beta. -------------- next part -------------- An HTML attachment was scrubbed... URL: From corzneffect at gmail.com Wed Mar 14 14:36:49 2007 From: corzneffect at gmail.com (Cory Davis) Date: Wed, 14 Mar 2007 18:36:49 +0000 Subject: [SciPy-user] import numpy segmentation fault Message-ID: <4948f97c0703141136j522fd32kbe202874ff19e3cf@mail.gmail.com> Hi there, I have just installed numpy-1.0.1 from source, which seemed to go fine. However when I try to "import numpy" I get a segmentation fault. A have a 64 bit machine running RedHat Enterprise Linux and Python 2.34 Any clues greatly appreciated. Cheers, Cory. From zpincus at stanford.edu Wed Mar 14 14:49:10 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 14 Mar 2007 13:49:10 -0500 Subject: [SciPy-user] [Numpy-discussion] import numpy segmentation fault In-Reply-To: <4948f97c0703141136j522fd32kbe202874ff19e3cf@mail.gmail.com> References: <4948f97c0703141136j522fd32kbe202874ff19e3cf@mail.gmail.com> Message-ID: <0F58E0ED-B2C4-4CF6-8EE3-4A25BEC73854@stanford.edu> If I recall correctly, there's a bug in numpy 1.0.1 on Linux-x86-64 that causes this segfault. This is fixed in the latest SVN version of numpy, so if you can grab that, it should work. I can't find the trac ticket, but I ran into this some weeks ago. Zach On Mar 14, 2007, at 1:36 PM, Cory Davis wrote: > Hi there, > > I have just installed numpy-1.0.1 from source, which seemed to go > fine. However when I try to "import numpy" I get a segmentation > fault. > > A have a 64 bit machine running RedHat Enterprise Linux and Python > 2.34 > > Any clues greatly appreciated. > > Cheers, > Cory. 
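(If it helps, one quick check after rebuilding is to confirm which numpy actually gets imported and to run its own test suite; the commands below use only stock numpy attributes and functions, nothing specific to this problem.)

$ python -c "import numpy; print numpy.__version__; print numpy.__file__"
$ python -c "import numpy; numpy.test()"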
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From stefan at sun.ac.za Wed Mar 14 15:08:39 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 14 Mar 2007 21:08:39 +0200 Subject: [SciPy-user] problems with ndimage spline filtering In-Reply-To: <1755A525-A801-4F5A-8D68-64C93BCE88DD@stanford.edu> References: <1755A525-A801-4F5A-8D68-64C93BCE88DD@stanford.edu> Message-ID: <20070314190839.GH7562@mentat.za.net> Hi Zachary On Wed, Mar 14, 2007 at 09:31:50AM -0500, Zachary Pincus wrote: > I've started to use the ndimage package in scipy, and had noticed > that I got really ugly results from image interpolation with spline > orders greater than 1. I've noticed this too, hence http://projects.scipy.org/scipy/scipy/ticket/213 Unfortunately I havn't had time to find the source of the problem. Please add your information to the ticket, hopefully we can narrow it down somewhat. Cheers St?fan From ryanlists at gmail.com Wed Mar 14 16:38:46 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 14 Mar 2007 15:38:46 -0500 Subject: [SciPy-user] scipy.multiply and scipy.divide Message-ID: I was about to reuse some old code of mine and noticed that it uses scipy.multiply and scipy.divide which are ufuncs for element-wise multiplication and division. With the new API for numpy, is this any different from just using a*b or a/b is a and b are vectors (i.e. arrays with shape (n,))? Thanks, Ryan From robert.kern at gmail.com Wed Mar 14 16:51:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 Mar 2007 15:51:54 -0500 Subject: [SciPy-user] scipy.multiply and scipy.divide In-Reply-To: References: Message-ID: <45F8606A.7000308@gmail.com> Ryan Krauss wrote: > I was about to reuse some old code of mine and noticed that it uses > scipy.multiply and scipy.divide which are ufuncs for element-wise > multiplication and division. With the new API for numpy, is this any > different from just using a*b or a/b is a and b are vectors (i.e. > arrays with shape (n,))? Not any more. I assume that you used the scipy (actually scipy_base) functions instead of the operators because they were the faster/nan-tastic versions at the time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From tom.denniston at alum.dartmouth.org Wed Mar 14 17:11:42 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Wed, 14 Mar 2007 16:11:42 -0500 Subject: [SciPy-user] Sort 3-D arrays by two columns In-Reply-To: References: <45F79B78.4070806@lcrs.de> Message-ID: I think you want lexsort. It takes a tuple of columns. On 3/14/07, Bill Baxter wrote: > Does lexsort do what you need? > > Also, have you seen the numpy examples list? It's a good place to > look for this sort of thing: > http://www.scipy.org/Numpy_Example_List > http://www.scipy.org/Numpy_Example_List_With_Doc > > --bb > > On 3/14/07, Thomas Nauss wrote: > > Dear Experts, > > I want plot some satellite datasets using matplotlib with basemap. For > > that task, I need a 3-D array (or three 1-D arrays) containing latitude, > > longitude, and corresponding data values. Since basemap needs latitude > > and longitude values in increasing order, I'm looking for a function to > > sort the 3-D array by the two lat/lon columns (i.e. first by latitude, > > then by longitude but conserving the value combinations). > > Thanks, > > Thomas > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From corzneffect at gmail.com Thu Mar 15 08:20:22 2007 From: corzneffect at gmail.com (Cory Davis) Date: Thu, 15 Mar 2007 12:20:22 +0000 Subject: [SciPy-user] New Scipy not finding ATLAS Message-ID: <4948f97c0703150520o7b63e71t289d43165a5d4d5d@mail.gmail.com> Hi All, Previously I had scipy-0.4.9 working perfectly with the complete atlas3.6.0 libraries. But now for some reason scipy-0.5.2 is not seeing these during installation, where for both 0.4.9 and 0.5.2 the location of the atlas libraries is specified by the ATLAS environment variable. Can anyone tell me what is going on? What has changed between 0.4.9 and 0.5.2 to make this happen? If it helps, I have a 64 bit Linux machine and Python 2.3.4 Cheers, Cory From corzneffect at gmail.com Thu Mar 15 12:32:40 2007 From: corzneffect at gmail.com (Cory Davis) Date: Thu, 15 Mar 2007 16:32:40 +0000 Subject: [SciPy-user] New Scipy not finding ATLAS In-Reply-To: <4948f97c0703150520o7b63e71t289d43165a5d4d5d@mail.gmail.com> References: <4948f97c0703150520o7b63e71t289d43165a5d4d5d@mail.gmail.com> Message-ID: <4948f97c0703150932x52039e75y549f9d9a73f9a710@mail.gmail.com> Solved. It seems you have to edit the numpy site.cfg file. On 3/15/07, Cory Davis wrote: > Hi All, > Previously I had scipy-0.4.9 working perfectly with the complete > atlas3.6.0 libraries. But now for some reason scipy-0.5.2 is not > seeing these during installation, where for both 0.4.9 and 0.5.2 the > location of the atlas libraries is specified by the ATLAS environment > variable. Can anyone tell me what is going on? What has changed > between 0.4.9 and 0.5.2 to make this happen? > > If it helps, I have a 64 bit Linux machine and Python 2.3.4 > > Cheers, > Cory > From ldavid at MIT.EDU Thu Mar 15 12:36:59 2007 From: ldavid at MIT.EDU (Lawrence David) Date: Thu, 15 Mar 2007 12:36:59 -0400 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? Message-ID: <0118D1DC-BB70-44F3-8B6A-1329EA267CDF@mit.edu> Hi there, I'm pulling the little hair I have out trying to install SciPy on my G4 PPC running OS 10.4.9. 
I've followed the instructions on the SciPy.org website: http:// www.scipy.org/Installing_SciPy/Mac_OS_X. I've got gcc 4.0.1 and gfortran (gnu95 -> 4.3). Running build: "python setup.py build_src build_clib -- fcompiler=gnu95 build_ext --fcompiler=gnu95 build", yields a long error message that terminates with a litany of "Undefined symbols" and this last message: collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -L/sw/lib build/ temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/fftpack/ _fftpackmodule.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/ zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o build/ temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.8.0/4.3.0 -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.3-fat-2.5/scipy/fftpack/ _fftpack.so" failed with exit status 1 I've noticed several other folks have had similar error messages on this board, but none of the posted solutions worked for me. If anyone could offer any suggestions on what to try, I'd love to hear them! (I've even tried installing the SuperPack binary, which lies through its teeth when it says that it's installed correctly; import scipy still fails. Perhaps my problem is that I'm using gfortran? Can anyone tell me how to revert to g77?) Much thanks!! - lawrence http://www.stinkpot.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Thu Mar 15 12:49:42 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 15 Mar 2007 17:49:42 +0100 Subject: [SciPy-user] Vectorize vs Map Message-ID: Dear All, Probably another newbie question: I like quite a lot the vectorize() command which allows me to skip iterations on functions, but the map() command on a list performs a similar task if I am not mistaken. Is there any reason to favour one above the other or is it just a matter of taste? Kind Regards Lorenzo From robert.kern at gmail.com Thu Mar 15 12:58:52 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Mar 2007 11:58:52 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: <0118D1DC-BB70-44F3-8B6A-1329EA267CDF@mit.edu> References: <0118D1DC-BB70-44F3-8B6A-1329EA267CDF@mit.edu> Message-ID: <45F97B4C.4080404@gmail.com> Lawrence David wrote: > Hi there, > > I'm pulling the little hair I have out trying to install SciPy on my G4 > PPC running OS 10.4.9. > > I've followed the instructions on the SciPy.org > website: http://www.scipy.org/Installing_SciPy/Mac_OS_X. I've got gcc > 4.0.1 and gfortran (gnu95 -> 4.3). 
> > Running build: "python setup.py build_src build_clib --fcompiler=gnu95 > build_ext --fcompiler=gnu95 build", yields a long error message that > terminates with a litany of "Undefined symbols" and this last message: > > collect2: ld returned 1 exit status > error: Command "/usr/local/bin/gfortran -L/sw/lib > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o > -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin8.8.0/4.3.0 > -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o > build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with > exit status 1 > > I've noticed several other folks have had similar error messages on this > board, but none of the posted solutions worked for me. If anyone could > offer any suggestions on what to try, I'd love to hear them! It looks like you have LDFLAGS defined (thus, the -L/sw/lib). That *replaces* all of the Python-specific linker flags that distutils provides. > (I've even tried installing the SuperPack binary, which lies through its > teeth when it says that it's installed correctly; import scipy still > fails. Perhaps my problem is that I'm using gfortran? Can anyone tell > me how to revert to g77?) You can't with a Universal Python. It needs gcc 4, which requires gfortran, not g77. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Thu Mar 15 12:58:56 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 15 Mar 2007 09:58:56 -0700 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: References: Message-ID: <45F97B50.9050109@ee.byu.edu> Lorenzo Isella wrote: >Dear All, >Probably another newbie question: I like quite a lot the vectorize() >command which allows me to skip iterations on functions, but the map() >command on a list performs a similar task if I am not mistaken. >Is there any reason to favour one above the other or is it just a >matter of taste? > > map only works with sequences (i.e. 1-d arrays) and does not do broadcasting for multiple-valued functions. vectorize works with multiple dimensional arrays and also does broadcasting. I have not done any timings but I suspect there are lots of cases where map is probably faster because it does less. -Travis From robert.kern at gmail.com Thu Mar 15 13:04:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Mar 2007 12:04:07 -0500 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: References: Message-ID: <45F97C87.8080506@gmail.com> Lorenzo Isella wrote: > Dear All, > Probably another newbie question: I like quite a lot the vectorize() > command which allows me to skip iterations on functions, but the map() > command on a list performs a similar task if I am not mistaken. > Is there any reason to favour one above the other or is it just a > matter of taste? vectorize() takes a Python function and turns it into a ufunc. ufuncs do a lot more than map() does. They can take multidimensional arrays. n-ary ufuncs can take multiple inputs and broadcast them against each other. 
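A small illustration of that broadcasting behaviour (the scalar function and the array shapes here are arbitrary, not taken from the thread):

---------------------------------------------------
import numpy as np

def clipped_diff(a, b):
    # an ordinary Python function of two scalars
    return max(a - b, 0)

vdiff = np.vectorize(clipped_diff)

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,)
print(vdiff(col, row))             # inputs broadcast to a (3, 4) result
---------------------------------------------------

The two inputs are broadcast against each other by the usual rules, exactly as they would be for a built-in binary ufunc such as numpy.add.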
ufuncs have methods like .inner() and .reduce() which are quite powerful. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Thu Mar 15 13:15:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Mar 2007 11:15:34 -0600 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: <45F97C87.8080506@gmail.com> References: <45F97C87.8080506@gmail.com> Message-ID: On 3/15/07, Robert Kern wrote: > Lorenzo Isella wrote: > > Dear All, > > Probably another newbie question: I like quite a lot the vectorize() > > command which allows me to skip iterations on functions, but the map() > > command on a list performs a similar task if I am not mistaken. > > Is there any reason to favour one above the other or is it just a > > matter of taste? > > vectorize() takes a Python function and turns it into a ufunc. ufuncs do a lot > more than map() does. They can take multidimensional arrays. n-ary ufuncs can > take multiple inputs and broadcast them against each other. ufuncs have methods > like .inner() and .reduce() which are quite powerful. Mmh, isn't that what 'frompyfunc' does instead? vectorize doesn't seem to produce a true ufunc. Perhaps I'm just misunderstanding something: In [10]: def foo(x):return x ....: In [11]: vfoo = N.vectorize(foo) In [12]: vfoo.outer? Object `vfoo.outer` not found. In [13]: type vfoo -------> type(vfoo) Out[13]: In [14]: ufoo = N.frompyfunc(foo,1,1) In [15]: type ufoo -------> type(ufoo) Out[15]: In [16]: ufoo.outer? Type: builtin_function_or_method Base Class: Namespace: Interactive Docstring: I'm actually not exactly sure why the two do exist, to be honest. I realize frompyfunc has (probably by necessity) a more complicated API, but it does return a true ufunc, which vectorize() doesn't. Insight welcome. Cheers, f From robert.kern at gmail.com Thu Mar 15 13:17:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Mar 2007 12:17:24 -0500 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: References: <45F97C87.8080506@gmail.com> Message-ID: <45F97FA4.6050902@gmail.com> Fernando Perez wrote: > On 3/15/07, Robert Kern wrote: >> Lorenzo Isella wrote: >>> Dear All, >>> Probably another newbie question: I like quite a lot the vectorize() >>> command which allows me to skip iterations on functions, but the map() >>> command on a list performs a similar task if I am not mistaken. >>> Is there any reason to favour one above the other or is it just a >>> matter of taste? >> vectorize() takes a Python function and turns it into a ufunc. ufuncs do a lot >> more than map() does. They can take multidimensional arrays. n-ary ufuncs can >> take multiple inputs and broadcast them against each other. ufuncs have methods >> like .inner() and .reduce() which are quite powerful. > > Mmh, isn't that what 'frompyfunc' does instead? vectorize doesn't > seem to produce a true ufunc. Perhaps I'm just misunderstanding > something: No, just my poor memory. I don't use either. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cscheid at sci.utah.edu Thu Mar 15 14:04:24 2007 From: cscheid at sci.utah.edu (Carlos Scheidegger) Date: Thu, 15 Mar 2007 12:04:24 -0600 Subject: [SciPy-user] should vectorize honor array/matrix distinctions? 
Message-ID: <45F98AA8.30408@sci.utah.edu> Hi, should scipy.vectorize honor array/matrix distinctions? http://www.scipy.org/NumPy_for_Matlab_Users claims "[returning an array when given a matrix] shouldn't happen with NumPy functions (if it does it's a bug)". However, this is what I get on my amd64 ubuntu edgy box: $ python Python 2.4.4c1 (#2, Oct 11 2006, 20:00:03) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> import numpy >>> scipy.__version__ '0.5.1' >>> numpy.__version__ '1.0rc1' >>> import math >>> def v(x): return int(math.log(abs(x) * 10.0, 10.0)) ... >>> m = scipy.matrix([[1.0, 5.0], [5.0, 10.0]]) >>> a = scipy.array([[1.0, 5.0], [5.0, 10.0]]) >>> vv = scipy.vectorize(v) >>> vv(m) array([[1, 1], [1, 2]]) >>> vv(a) array([[1, 1], [1, 2]]) >>> type(m) >>> type(a) >>> type(vv(m)) >>> type(vv(a)) Is this expected behavior? I know there are trivial workarounds, but I just wanted to clarify. FWIW, I'm using the universe numpy package. Thank you very much for your time, -carlos From ldavid at MIT.EDU Thu Mar 15 15:07:26 2007 From: ldavid at MIT.EDU (Lawrence David) Date: Thu, 15 Mar 2007 15:07:26 -0400 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: <45F97B4C.4080404@gmail.com> References: <0118D1DC-BB70-44F3-8B6A-1329EA267CDF@mit.edu> <45F97B4C.4080404@gmail.com> Message-ID: <77114BB9-AB85-4BB2-85F5-DE324676EC3C@mit.edu> Hot darn Batman - you nailed it!! Much thanks, - lawrence http://www.stinkpot.org On Mar 15, 2007, at 12:58 PM, Robert Kern wrote: > Lawrence David wrote: >> Hi there, >> >> I'm pulling the little hair I have out trying to install SciPy on >> my G4 >> PPC running OS 10.4.9. >> >> I've followed the instructions on the SciPy.org >> website: http://www.scipy.org/Installing_SciPy/Mac_OS_X. I've got >> gcc >> 4.0.1 and gfortran (gnu95 -> 4.3). >> >> Running build: "python setup.py build_src build_clib -- >> fcompiler=gnu95 >> build_ext --fcompiler=gnu95 build", yields a long error message that >> terminates with a litany of "Undefined symbols" and this last >> message: >> >> collect2: ld returned 1 exit status >> error: Command "/usr/local/bin/gfortran -L/sw/lib >> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/ >> fftpack/_fftpackmodule.o >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o >> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ >> fortranobject.o >> -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin8.8.0/4.3.0 >> -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o >> build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with >> exit status 1 >> >> I've noticed several other folks have had similar error messages >> on this >> board, but none of the posted solutions worked for me. If anyone >> could >> offer any suggestions on what to try, I'd love to hear them! > > It looks like you have LDFLAGS defined (thus, the -L/sw/lib). That > *replaces* > all of the Python-specific linker flags that distutils provides. > >> (I've even tried installing the SuperPack binary, which lies >> through its >> teeth when it says that it's installed correctly; import scipy still >> fails. Perhaps my problem is that I'm using gfortran? Can anyone >> tell >> me how to revert to g77?) 
> > You can't with a Universal Python. It needs gcc 4, which requires > gfortran, not g77. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Fri Mar 16 05:37:21 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 16 Mar 2007 10:37:21 +0100 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: <45F97FA4.6050902@gmail.com> References: <45F97C87.8080506@gmail.com> <45F97FA4.6050902@gmail.com> Message-ID: <80c99e790703160237n1367c895pba2ba31c9a3bf9d@mail.gmail.com> here is a simple timing on my machine. --------------------------------------------------- import scipy as S from timeit import Timer def f(x): return S.sqrt(S.absolute(x)**2) x = S.array(S.rand(1000)) fv = S.vectorize(f) fu = S.frompyfunc(f,1,1) def test(): #f(x) #fv(x) fu(x) if __name__ == '__main__': t = Timer('test()', 'from __main__ import test') n = 100 print "%.2f usec/pass" % (1e6*t.timeit(number=n)/n) --------------------------------------------------- I get: 229.84 usec/pass for f(x) 119410.40 usec/pass for fv(x) 114513.80 usec/pass for fu(x) vectorize and frompyfunc create functions roughly 500 times slower than the one using ndarrays arithmetics (even if it cannot operate on lists, just ndarrays). lorenzo. On 3/15/07, Robert Kern wrote: > > Fernando Perez wrote: > > On 3/15/07, Robert Kern wrote: > >> Lorenzo Isella wrote: > >>> Dear All, > >>> Probably another newbie question: I like quite a lot the vectorize() > >>> command which allows me to skip iterations on functions, but the map() > >>> command on a list performs a similar task if I am not mistaken. > >>> Is there any reason to favour one above the other or is it just a > >>> matter of taste? > >> vectorize() takes a Python function and turns it into a ufunc. ufuncs > do a lot > >> more than map() does. They can take multidimensional arrays. n-ary > ufuncs can > >> take multiple inputs and broadcast them against each other. ufuncs have > methods > >> like .inner() and .reduce() which are quite powerful. > > > > Mmh, isn't that what 'frompyfunc' does instead? vectorize doesn't > > seem to produce a true ufunc. Perhaps I'm just misunderstanding > > something: > > No, just my poor memory. I don't use either. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgarcia at olfac.univ-lyon1.fr Fri Mar 16 05:40:47 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Fri, 16 Mar 2007 10:40:47 +0100 Subject: [SciPy-user] Problem with weave blitz win32 Message-ID: <45FA661F.9020004@olfac.univ-lyon1.fr> Hi list, I have problem running weave on windowsXP. 
Version : Python 2.5 MiGW 3.4.2 Scipy 0.5.2 Numpy 1.0.1 array3d.py give me : numpy: [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] Pure Inline: img[ 0][ 0]= 0 1 2 3 img[ 0][ 1]= 4 5 6 7 img[ 0][ 2]= 8 9 10 11 img[ 1][ 0]= 12 13 14 15 img[ 1][ 1]= 16 17 18 19 img[ 1][ 2]= 20 21 22 23 Blitz Inline: g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:5: warning: ignoring #pragma warning g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:6: warning: ignoring #pragma warning In file included from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/applics.h:400, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vector.h:449, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array.h:32, from g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:9: G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/mathfunc.h: In static member function `static double blitz::_bz_expm1::apply(P_numtype1)': G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/mathfunc.h:1353: error: `::expm1' has not been declared In file included from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/et.h:27, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, from G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array.h:32, from g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:9: G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/funcs.h: In static member function `static T_numtype1 blitz::Fn_expm1::apply(T_numtype1)': G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/funcs.h:113: error: `::expm1' has not been declared g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp: In function `PyObject* file_to_py(FILE*, char*, char*)': g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:404: warning: unused variable 'py_obj' g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp: In function `blitz::Array convert_to_blitz(PyArrayObject*, const char*) [with T = int, int N = 3]': g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:708: instantiated from here g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:657: warning: unused variable 'stride_acc' Traceback (most recent call last): File "array3d.py", line 105, in main() File "array3d.py", line 101, in main blitz_inline(arr) File "array3d.py", line 89, in blitz_inline weave.inline(code, ['arr'], type_converters=converters.blitz) File "G:\Python25\Lib\site-packages\scipy\weave\inline_tools.py", line 339, in inline **kw) File "G:\Python25\Lib\site-packages\scipy\weave\inline_tools.py", line 447, in compile_function verbose=verbose, **kw) 
File "G:\Python25\Lib\site-packages\scipy\weave\ext_tools.py", line 365, in compile verbose = verbose, **kw) File "G:\Python25\Lib\site-packages\scipy\weave\build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "G:\Python25\Lib\site-packages\numpy\distutils\core.py", line 174, in setup return old_setup(**new_attr) File "G:\Python25\lib\distutils\core.py", line 168, in setup raise SystemExit, "error: " + str(msg) distutils.errors.CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IG:\Python25\lib\site-packages\scipy\weave -IG:\Python25\lib\site-packages\scipy\weave\scxx -IG:\Python25\lib\site-packages\scipy\weave\blitz -IG:\Python25\lib\site-packages\numpy\core\include -IG:\Python25\include -IG:\Python25\PC -c g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp -o g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Release\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.o" failed with exit status 1 And weave.test() give me : Found 0 tests for scipy.weave.c_spec Found 2 tests for scipy.weave.blitz_tools building extensions here: g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\m7 Found 1 tests for scipy.weave.ext_tools Found 9 tests for scipy.weave.build_tools Found 0 tests for scipy.weave.inline_tools Found 1 tests for scipy.weave.ast_tools Warning: FAILURE importing tests for G:\Python25\Lib\site-packages\scipy\weave\tests\test_wx_spec.py:16: ImportError: No module named wxPython (in ) Found 3 tests for scipy.weave.standard_array_spec Found 74 tests for scipy.weave.size_check Found 26 tests for scipy.weave.catalog Found 16 tests for scipy.weave.slice_handler Found 0 tests for __main__ ...warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations .....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ....................................F..F..................................................................removing 'g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test' (and everything under it) error removing g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test: g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test: Le r?pertoire n'est pas vide .removing 'g:\docume~1\sgarcia\locals~1\temp\tmpimmvs6cat_test' (and everything under it) ................. 
====================================================================== FAIL: check_1d_3 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 168, in check_1d_3 self.generic_1d('a[-11:]') File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 135, in generic_1d self.generic_wrap(a,expr) File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 127, in generic_wrap self.generic_test(a,expr,desired) File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 123, in generic_test assert_array_equal(actual,desired, expr) File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line 223, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not equal a[-11:] (mismatch 100.0%) x: array([1]) y: array([10]) ====================================================================== FAIL: check_1d_6 (scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 174, in check_1d_6 self.generic_1d('a[:-11]') File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 135, in generic_1d self.generic_wrap(a,expr) File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 127, in generic_wrap self.generic_test(a,expr,desired) File "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", line 123, in generic_test assert_array_equal(actual,desired, expr) File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line 223, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not equal a[:-11] (mismatch 100.0%) x: array([9]) y: array([0]) ---------------------------------------------------------------------- Ran 132 tests in 2.266s FAILED (failures=2) >Exit code: 0 And of course my main problem is that some of my code that work on linux debian does'nt work on win32. (But it used to work on the old enthon distribution version!) Any idea ? Thank Sam -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logisique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nauss at lcrs.de Fri Mar 16 06:16:55 2007 From: nauss at lcrs.de (Thomas Nauss) Date: Fri, 16 Mar 2007 11:16:55 +0100 Subject: [SciPy-user] Sort 3-D arrays by two columns In-Reply-To: References: <45F79B78.4070806@lcrs.de> Message-ID: <45FA6E97.5060505@lcrs.de> Thank's a lot! That helps. Thomas Tom Denniston wrote: > I think you want lexsort. It takes a tuple of columns. > > On 3/14/07, Bill Baxter wrote: >> Does lexsort do what you need? >> >> Also, have you seen the numpy examples list? 
It's a good place to >> look for this sort of thing: >> http://www.scipy.org/Numpy_Example_List >> http://www.scipy.org/Numpy_Example_List_With_Doc >> >> --bb >> >> On 3/14/07, Thomas Nauss wrote: >>> Dear Experts, >>> I want plot some satellite datasets using matplotlib with basemap. For >>> that task, I need a 3-D array (or three 1-D arrays) containing latitude, >>> longitude, and corresponding data values. Since basemap needs latitude >>> and longitude values in increasing order, I'm looking for a function to >>> sort the 3-D array by the two lat/lon columns (i.e. first by latitude, >>> then by longitude but conserving the value combinations). >>> Thanks, >>> Thomas >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- _____________________________________________ Dr. Thomas Nauss Philipps-University of Marburg Department of Geography Laboratory for Climatology and Remote Sensing Deutschhausstr. 10 D-35032 Marburg fon: ++49(6421)2824252 fax: ++49(6421)2828950 mobile:++49(163)1462714 mail: nauss at lcrs.de web: http://www.lcrs.de From ryanlists at gmail.com Fri Mar 16 10:18:16 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 16 Mar 2007 09:18:16 -0500 Subject: [SciPy-user] install from source problem Message-ID: I am trying to install from source and am running into some problems. The first problem (that I don't know if it is really a problem) is that the svn check out seems to go well, but at the very end I get this message: svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' svn: REPORT of '/svn/numpy/!svn/vcc/default': 200 OK (http://svn.scipy.org) Is that an issue? I am running svn version 1.3.1 if that matters. This may be a problem because the svn check out only has the folders f2py fft random under numpy, so that when I try "python setup.py build" I get: Traceback (most recent call last): File "setup.py", line 89, in ? setup_package() File "setup.py", line 59, in setup_package from numpy.distutils.core import setup ImportError: No module named numpy.distutils.core Am I doing something wrong? Thanks, Ryan From stefan at sun.ac.za Fri Mar 16 10:43:55 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 16 Mar 2007 16:43:55 +0200 Subject: [SciPy-user] install from source problem In-Reply-To: References: Message-ID: <20070316144355.GL6168@mentat.za.net> Hi Ryan On Fri, Mar 16, 2007 at 09:18:16AM -0500, Ryan Krauss wrote: > I am trying to install from source and am running into some problems. > The first problem (that I don't know if it is really a problem) is > that the svn check out seems to go well, but at the very end I get > this message: > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': 200 OK > (http://svn.scipy.org) Sounds like a proxy problem. Try https://svn.scipy.org instead of http://svn.scipy.org and see if that helps? 
Cheers St?fan From ryanlists at gmail.com Fri Mar 16 11:02:41 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 16 Mar 2007 10:02:41 -0500 Subject: [SciPy-user] install from source problem In-Reply-To: References: Message-ID: Thank you Stefan, that did it. On 3/16/07, Ryan Krauss wrote: > I am trying to install from source and am running into some problems. > The first problem (that I don't know if it is really a problem) is > that the svn check out seems to go well, but at the very end I get > this message: > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': 200 OK (http://svn.scipy.org) > > Is that an issue? I am running svn version 1.3.1 if that matters. > > This may be a problem because the svn check out only has the folders > f2py fft random > under numpy, so that when I try "python setup.py build" I get: > Traceback (most recent call last): > File "setup.py", line 89, in ? > setup_package() > File "setup.py", line 59, in setup_package > from numpy.distutils.core import setup > ImportError: No module named numpy.distutils.core > > Am I doing something wrong? > > Thanks, > > Ryan > From ryanlists at gmail.com Fri Mar 16 11:09:59 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 16 Mar 2007 10:09:59 -0500 Subject: [SciPy-user] clapack and cblas modules are empty In-Reply-To: <4570D4D8.30806@gmail.com> References: <1165001752.5095.20.camel@jrpirone-desktop> <4570D4D8.30806@gmail.com> Message-ID: I think I have this exact situation and don't understand the final answer. Do I need to be concerned with the warning that my clapack and cblas modules are empty? ryan at ubuntu:~$ ldd /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so linux-gate.so.1 => (0xffffe000) liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 (0xb7945000) libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb73a8000) libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb7381000) libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb735f000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7355000) libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7226000) /lib/ld-linux.so.2 (0x80000000) ryan at ubuntu:~$ locate *clapack.so /usr/lib/python2.4/site-packages/scipy/lib/lapack/clapack.so /usr/lib/python2.4/site-packages/scipy/linalg/clapack.so ryan at ubuntu:/usr/lib/python2.4/site-packages/scipy/lib/lapack$ ls -al total 704 drwxr-xr-x 3 root root 4096 2007-03-16 09:50 . drwxr-xr-x 4 root root 4096 2007-03-16 09:50 .. -rwxr-xr-x 1 root root 13022 2007-03-16 09:49 atlas_version.so -rwxr-xr-x 1 root root 103466 2007-03-16 09:49 calc_lwork.so -rwxr-xr-x 1 root root 48398 2007-03-16 09:49 clapack.so -rwxr-xr-x 1 root root 494047 2007-03-16 09:49 flapack.so . . . What does it all mean? Thanks, Ryan On 12/1/06, Robert Kern wrote: > jrpirone wrote: > > Hello, > > > > I've installed scipy and numpy from Andrew Straw's repository, all other > > libraries are from the standard Ubuntu repositories (6.10). When I > > execute scipy.test(), the following warnings appear: > > > > **************************************************************** > > WARNING: clapack module is empty > > ----------- > > See scipy/INSTALL.txt for troubleshooting. > > Notes: > > * If atlas library is not found by numpy/distutils/system_info.py, > > then scipy uses flapack instead of clapack. 
> > **************************************************************** > > > > **************************************************************** > > WARNING: cblas module is empty > > ----------- > > See scipy/INSTALL.txt for troubleshooting. > > Notes: > > * If atlas library is not found by numpy/distutils/system_info.py, > > then scipy uses fblas instead of cblas. > > **************************************************************** > > > > Running the command: > > ldd /usr/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > > > > Gives me: > > linux-gate.so.1 => (0xffffe000) > > liblapack.so.3 => /usr/lib/atlas/sse2/liblapack.so.3 > > (0xb7958000) > > libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb7387000) > > libg2c.so.0 => /usr/lib/libg2c.so.0 (0xb735f000) > > libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7339000) > > libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb732e000) > > libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb71fa000) > > /lib/ld-linux.so.2 (0x80000000) > > > > Is there a problem with the libraries I have installed? Should I be > > worried about these warnings, or am I just being compulsive? > > They're not *necessary* but they are nice to have. It demonstrates that scipy > was not compiled with ATLAS. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hirani at cs.uiuc.edu Fri Mar 16 11:10:09 2007 From: hirani at cs.uiuc.edu (Anil N. Hirani) Date: Fri, 16 Mar 2007 10:10:09 -0500 Subject: [SciPy-user] Question about test failures Message-ID: <899620A2-9F3A-4CE0-B089-F0D26DD0F6E5@cs.uiuc.edu> I built and installed scipy from the svn respository (svn co http:// svn.scipy.org/svn/scipy/trunk scipy) on March 15, on a Mac OS X 10.4.8 with python2.5. Running the scipy test(1,10) gave 3 faiilures. What is curious is that even though the dot product fails in the test (see second and third FAIL below), the same expression gives the right result when I typed it in : >>> dot([3j,-4,3-4j],[2,3,1]) (-9+2j) Can someone please let me know the possible reasons, fixes and consequences of the test failures listed below. 
Regards Anil Hirani ====================================================================== FAIL: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/sw/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 80, in _check_case self._check_level(k_label, expected, matdict[k]) File "/sw/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 63, in _check_level decimal = 5) File "/sw/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/sw/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal test sparse; file /sw/lib/python2.5/site-packages/scipy/io/tests/./ data/testsparse_6.5.1_GLNX86.mat, variable testsparse (mismatch 46.6666666667%) x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, 2.05531309e-320, 2.56123631e-320], [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,... y: array([[ 1., 2., 3., 4., 5.], [ 2., 0., 0., 0., 0.], [ 3., 0., 0., 0., 0.]]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.5/site-packages/scipy/lib/blas/tests/ test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/sw/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.998711109161377+1.0175286878527778e-36j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.5/site-packages/scipy/linalg/tests/ test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/sw/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.998711109161377+1.3368407079941133e-36j) DESIRED: (-9+2j) From oliphant at ee.byu.edu Fri Mar 16 12:13:45 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 16 Mar 2007 09:13:45 -0700 Subject: [SciPy-user] Vectorize vs Map In-Reply-To: <80c99e790703160237n1367c895pba2ba31c9a3bf9d@mail.gmail.com> References: <45F97C87.8080506@gmail.com> <45F97FA4.6050902@gmail.com> <80c99e790703160237n1367c895pba2ba31c9a3bf9d@mail.gmail.com> Message-ID: <45FAC239.9030803@ee.byu.edu> lorenzo bolla wrote: > here is a simple timing on my machine. 
> > --------------------------------------------------- > > import scipy as S > from timeit import Timer > > def f(x): > return S.sqrt(S.absolute(x)**2) > > x = S.array(S.rand(1000)) > fv = S.vectorize(f) > fu = S.frompyfunc(f,1,1) > > def test(): > #f(x) > #fv(x) > fu(x) > > if __name__ == '__main__': > t = Timer('test()', 'from __main__ import test') > n = 100 > print "%.2f usec/pass" % (1e6*t.timeit(number=n)/n) > > > --------------------------------------------------- > > I get: > 229.84 usec/pass for f(x) > 119410.40 usec/pass for fv(x) > 114513.80 usec/pass for fu(x) > > vectorize and frompyfunc create functions roughly 500 times slower > than the one using ndarrays arithmetics (even if it cannot operate on > lists, just ndarrays). > There is nothing new here. It is not surprising at all. From pyfunc creates a ufunc out of a python function. The python function is called at each element-by-element calculation. In this case, the element-by-element calculation is also using ufuncs to compute the result (ufuncs are a very slow way to compute a single scalar operation). Because a Python function is called, the ufunc uses object arrays as well. All of these intermediate things means the result will be much slower. Vectorize and frompyfunc are only useful when you want a quick ufunc that you can't figure out how to vectorize yourself. If you can vectorize it, it is always better to do it than to use vectorize. -Travis From alexander.borghgraef.rma at gmail.com Fri Mar 16 12:42:42 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Fri, 16 Mar 2007 17:42:42 +0100 Subject: [SciPy-user] Gaussian pyramid using ndimage Message-ID: <9e8c52a20703160942j3d1b4ca5ubbfab6dabe39f7c0@mail.gmail.com> Hi all, I've been looking into the ndimage module functionality, and I was wondering whether there were any downsampling filters implemented within it, like used in gaussian pyramid implementations. Obviously, convolution followed by downsampling would do the trick, but that requires an intermediate storing of a full-sized image, which isn't all that memory-efficient. Any suggestions? -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From bldrake at adaptcs.com Fri Mar 16 12:45:06 2007 From: bldrake at adaptcs.com (Barry Drake) Date: Fri, 16 Mar 2007 09:45:06 -0700 (PDT) Subject: [SciPy-user] Structured matrices In-Reply-To: <45F81DA2.2000201@cse.ucsc.edu> Message-ID: <363507.29865.qm@web408.biz.mail.mud.yahoo.com> Anand, Since the matrix is not symmetric, how can it be a covariance matrix? My area is signal processing and I haven't seen a non symmetric covariance matrix in my work. Regards, Barry L Drake Anand Patil wrote: Nils, Barry, That matrix is also the covariance of Brownian motion evaluated at times [9, 7, 5, 3, 1]. The inverse matrix bears a strong resemblance to the numerical second derivative operator. This makes sense because the second derivative in x of min(x, \xi) is a delta function concentrated at \xi, but there's probably a more important connection that I don't know about. Maybe this helps explain the strange results? I found some related refs, but haven't looked at them: http://citeseer.ist.psu.edu/347057.html, http://adsabs.harvard.edu/abs/2003EAEJA.....2001X . Cheers, Anand -------------------------- Nils, Does this matrix come from a particular application? I'm working on algorithms for the non-negative matrix factorization (NMF). With this matrix as input I'm getting some very strange results. 
So I'm curious about potential applications. Cheers! Barry L. Drake GA Tech Nils Wagner wrote: Hi all, I was wondering if the matrix family (see below) has a special name ? And/or is there a way to construct this matrix via special matrices (like Hankel, Toeplitz, etc.) ? [[ 3. 1.] [ 1. 1.]] [[ 5. 3. 1.] [ 3. 3. 1.] [ 1. 1. 1.]] [[ 7. 5. 3. 1.] [ 5. 5. 3. 1.] [ 3. 3. 3. 1.] [ 1. 1. 1. 1.]] [[ 9. 7. 5. 3. 1.] [ 7. 7. 5. 3. 1.] [ 5. 5. 5. 3. 1.] [ 3. 3. 3. 3. 1.] [ 1. 1. 1. 1. 1.]] Nils _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Fri Mar 16 12:54:32 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 16 Mar 2007 11:54:32 -0500 Subject: [SciPy-user] Gaussian pyramid using ndimage In-Reply-To: <9e8c52a20703160942j3d1b4ca5ubbfab6dabe39f7c0@mail.gmail.com> References: <9e8c52a20703160942j3d1b4ca5ubbfab6dabe39f7c0@mail.gmail.com> Message-ID: I *think* that all of the filters in ndimage proceed by pre-filtering and then doing whatever task is required, so even if there was a downsampling filter (maybe 'zoom' would work), it wouldn't be memory- efficient in the way you were hoping. Also, the spline filters used by ndimage for pre-filtering seem somewhat broken: http://projects.scipy.org/scipy/scipy/ticket/213 Zach On Mar 16, 2007, at 11:42 AM, Alexander Borghgraef wrote: > Hi all, > > I've been looking into the ndimage module functionality, and I was > wondering whether there were any downsampling filters > implemented within it, like used in gaussian pyramid > implementations. Obviously, convolution followed by downsampling would > do the trick, but that requires an intermediate storing of a full- > sized image, which isn't all that memory-efficient. Any suggestions? > > -- > Alex Borghgraef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Fri Mar 16 14:02:28 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 16 Mar 2007 12:02:28 -0600 Subject: [SciPy-user] Structured matrices In-Reply-To: <363507.29865.qm@web408.biz.mail.mud.yahoo.com> References: <45F81DA2.2000201@cse.ucsc.edu> <363507.29865.qm@web408.biz.mail.mud.yahoo.com> Message-ID: On 3/16/07, Barry Drake wrote: > Anand, > Since the matrix is not symmetric, how can it be a covariance matrix? Why do you say it's not symmetric? Using Anne's beautiful one-liner: In [223]: ss = lambda s: minimum.outer(r_[s:-1:-2],r_[s:-1:-2]) Checking for s=9: In [224]: ss(9) Out[224]: array([[9, 7, 5, 3, 1], [7, 7, 5, 3, 1], [5, 5, 5, 3, 1], [3, 3, 3, 3, 1], [1, 1, 1, 1, 1]]) Structurally it seems pretty obvious they must be symmetric always, and it's also straightforward to do an explicit check over a number of sizes: In [225]: sszero = lambda s: abs(ss(s)-ss(s).T).max() In [226]: [sszero(n) for n in range(3,45,2)] Out[226]: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Am I missing something? Cheers, f From nvf at uwm.edu Fri Mar 16 16:16:33 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Fri, 16 Mar 2007 15:16:33 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? 
Message-ID: > Message: 5 > Date: Thu, 15 Mar 2007 15:07:26 -0400 > From: Lawrence David > Subject: Re: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: > _fftpack.so problems? > To: SciPy Users List > Message-ID: <77114BB9-AB85-4BB2-85F5-DE324676EC3C at mit.edu> > Content-Type: text/plain; charset="us-ascii" > > Hot darn Batman - you nailed it!! > > Much thanks, > - lawrence > > http://www.stinkpot.org > > > On Mar 15, 2007, at 12:58 PM, Robert Kern wrote: > > > Lawrence David wrote: > >> Hi there, > >> > >> I'm pulling the little hair I have out trying to install SciPy on > >> my G4 > >> PPC running OS 10.4.9. > >> > >> I've followed the instructions on the SciPy.org > >> website: http://www.scipy.org/Installing_SciPy/Mac_OS_X. I've got > >> gcc > >> 4.0.1 and gfortran (gnu95 -> 4.3). > >> > >> Running build: "python setup.py build_src build_clib -- > >> fcompiler=gnu95 > >> build_ext --fcompiler=gnu95 build", yields a long error message that > >> terminates with a litany of "Undefined symbols" and this last > >> message: > >> > >> collect2: ld returned 1 exit status > >> error: Command "/usr/local/bin/gfortran -L/sw/lib > >> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/ > >> fftpack/_fftpackmodule.o > >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o > >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o > >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o > >> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o > >> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ > >> fortranobject.o > >> -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin8.8.0/4.3.0 > >> -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o > >> build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with > >> exit status 1 > >> > >> I've noticed several other folks have had similar error messages > >> on this > >> board, but none of the posted solutions worked for me. If anyone > >> could > >> offer any suggestions on what to try, I'd love to hear them! > > > > It looks like you have LDFLAGS defined (thus, the -L/sw/lib). That > > *replaces* > > all of the Python-specific linker flags that distutils provides. The symptoms sound exactly like mine, except that in my case LDFLAGS is definitely not defined, so the solution does not apply to me. The big thread http://groups.google.com/group/comp.lang.python/browse_thread/thread/1fadaffda3a768a7/9a76e6cb3482d4a7?lnk=st&q=distutils+build+flags+problems&rnum=14&hl=en#9a76e6cb3482d4a7 says more or less the same thing. nvf at dirac:~$ echo $LD $LDG_DIRECTORY $LDG_LOCATION $LD_LIBRARY_PATH $LDG_INSTALL_LOG $LDG_SOFTWARE_LOCATION Is there anything else that could cause missing flags? Scipy builds for the Apple-supplied Python 2.3, but does not build for MacPython 2.4 or 2.5. I eventually got it to build using "python setup.py build_src build_clib --fcompiler=gnu95 build_ext -lpython --fcompiler=gnu95 build", but unsurprisingly, it doesn't work. I think it's picking up the system python's libpython: nvf at dirac:~$ python -c "import scipy; scipy.test(10,10)" Fatal Python error: Interpreter not initialized (version mismatch?) 
Abort trap nvf at dirac:~$ locate libpython /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython.dylib /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.3.dylib /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.dylib /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config/libpython2.4.a /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config/libpython2.5.a /usr/lib/libpython.dylib /usr/lib/libpython2.3.dylib /usr/lib/libpython2.dylib Those libpython.dylibs just simlink to the libpython2.3.dylib. I would have thought that the MacPython installer would provide dynamic libraries in addition to the static libraries. I would appreciate any help on this, as my previous pleas have gone unanswered. If there's any other information I can provide, just let me know. Many thanks, Nick From robert.kern at gmail.com Fri Mar 16 16:31:35 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Mar 2007 15:31:35 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: References: Message-ID: <45FAFEA7.2010403@gmail.com> Nick Fotopoulos wrote: >> Message: 5 >> Date: Thu, 15 Mar 2007 15:07:26 -0400 >> From: Lawrence David >> Subject: Re: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: >> _fftpack.so problems? >> To: SciPy Users List >> Message-ID: <77114BB9-AB85-4BB2-85F5-DE324676EC3C at mit.edu> >> Content-Type: text/plain; charset="us-ascii" >> >> Hot darn Batman - you nailed it!! >> >> Much thanks, >> - lawrence >> >> http://www.stinkpot.org >> >> >> On Mar 15, 2007, at 12:58 PM, Robert Kern wrote: >> >>> Lawrence David wrote: >>>> Hi there, >>>> >>>> I'm pulling the little hair I have out trying to install SciPy on >>>> my G4 >>>> PPC running OS 10.4.9. >>>> >>>> I've followed the instructions on the SciPy.org >>>> website: http://www.scipy.org/Installing_SciPy/Mac_OS_X. I've got >>>> gcc >>>> 4.0.1 and gfortran (gnu95 -> 4.3). >>>> >>>> Running build: "python setup.py build_src build_clib -- >>>> fcompiler=gnu95 >>>> build_ext --fcompiler=gnu95 build", yields a long error message that >>>> terminates with a litany of "Undefined symbols" and this last >>>> message: >>>> >>>> collect2: ld returned 1 exit status >>>> error: Command "/usr/local/bin/gfortran -L/sw/lib >>>> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/ >>>> fftpack/_fftpackmodule.o >>>> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o >>>> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o >>>> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o >>>> build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o >>>> build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/ >>>> fortranobject.o >>>> -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple-darwin8.8.0/4.3.0 >>>> -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o >>>> build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with >>>> exit status 1 >>>> >>>> I've noticed several other folks have had similar error messages >>>> on this >>>> board, but none of the posted solutions worked for me. If anyone >>>> could >>>> offer any suggestions on what to try, I'd love to hear them! >>> It looks like you have LDFLAGS defined (thus, the -L/sw/lib). That >>> *replaces* >>> all of the Python-specific linker flags that distutils provides. > > The symptoms sound exactly like mine, except that in my case LDFLAGS > is definitely not defined, so the solution does not apply to me. 
The > big thread http://groups.google.com/group/comp.lang.python/browse_thread/thread/1fadaffda3a768a7/9a76e6cb3482d4a7?lnk=st&q=distutils+build+flags+problems&rnum=14&hl=en#9a76e6cb3482d4a7 > says more or less the same thing. Except in that case, LDFLAGS was the problem. > nvf at dirac:~$ echo $LD > $LDG_DIRECTORY $LDG_LOCATION $LD_LIBRARY_PATH > $LDG_INSTALL_LOG $LDG_SOFTWARE_LOCATION > > Is there anything else that could cause missing flags? Scipy builds for the > Apple-supplied Python 2.3, but does not build for MacPython 2.4 or > 2.5. > > I eventually got it to build using "python setup.py build_src > build_clib --fcompiler=gnu95 build_ext -lpython --fcompiler=gnu95 > build", but unsurprisingly, it doesn't work. I think it's picking up > the system python's libpython: Yes, because you added -lpython which is not how you need to link against the Python libraries for framework builds of Python. > nvf at dirac:~$ python -c "import scipy; scipy.test(10,10)" > Fatal Python error: Interpreter not initialized (version mismatch?) > Abort trap > nvf at dirac:~$ locate libpython > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython.dylib > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.3.dylib > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.dylib > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config/libpython2.4.a > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config/libpython2.5.a > /usr/lib/libpython.dylib > /usr/lib/libpython2.3.dylib > /usr/lib/libpython2.dylib > > Those libpython.dylibs just simlink to the libpython2.3.dylib. I > would have thought that the MacPython installer would provide dynamic > libraries in addition to the static libraries. No, everything should be contained in /Library/Frameworks/Python.framework . In such a case, the "shared library" is actually the file /Library/Frameworks/Python.framework/Python . It gets linked when "-framework Python" is part of the link command, as distutils does provided it is not interfered with. > I would appreciate any help on this, as my previous pleas have gone > unanswered. If there's any other information I can provide, just let > me know. Can you supply the output from an unsuccessful build again? Don't do -lpython this time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bldrake at adaptcs.com Fri Mar 16 16:40:28 2007 From: bldrake at adaptcs.com (Barry Drake) Date: Fri, 16 Mar 2007 13:40:28 -0700 (PDT) Subject: [SciPy-user] Structured matrices In-Reply-To: Message-ID: <173515.38726.qm@web409.biz.mail.mud.yahoo.com> It is symmetric. I was relying on someone else looking at this and didn't double check it first. I'll be more careful next time. Thanks, Barry Fernando Perez wrote: On 3/16/07, Barry Drake wrote: > Anand, > Since the matrix is not symmetric, how can it be a covariance matrix? Why do you say it's not symmetric? 
Using Anne's beautiful one-liner: In [223]: ss = lambda s: minimum.outer(r_[s:-1:-2],r_[s:-1:-2]) Checking for s=9: In [224]: ss(9) Out[224]: array([[9, 7, 5, 3, 1], [7, 7, 5, 3, 1], [5, 5, 5, 3, 1], [3, 3, 3, 3, 1], [1, 1, 1, 1, 1]]) Structurally it seems pretty obvious they must be symmetric always, and it's also straightforward to do an explicit check over a number of sizes: In [225]: sszero = lambda s: abs(ss(s)-ss(s).T).max() In [226]: [sszero(n) for n in range(3,45,2)] Out[226]: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Am I missing something? Cheers, f _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From hakan.jakobsson at gmail.com Fri Mar 16 22:05:51 2007 From: hakan.jakobsson at gmail.com (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Sat, 17 Mar 2007 03:05:51 +0100 Subject: [SciPy-user] FEM Message-ID: <45FB4CFF.5050901@gmail.com> Hi list, Python/scipy newbie here. I'm currently writing my master's thesis in computational mathematics - discontinuous galerkin methods to be precise. I'v been doing all the numerics for my thesis in Matlab but stumbled upon Python and scipy by chance. As a little experiment I translated one of my solvers into Python using numpy, scipy and the pysparse packages, and I was very impressed with the effort/performance ratio. Now, to the point.. If you're doing finite element analysis, what are you're experiences using Python? Do you use Python as your main development tool? In conjunction with C, Fortran? Is there any drawbacks with using Python for this type of work? Anything else? I hope I'm not asking to much here, but it would be really interesting to know a bit about what the situation's like. I think Python and scipy is really great, and now I'm trying to convince everyone else... Please, share with me your experiences. /H?kan J From wbaxter at gmail.com Fri Mar 16 22:25:17 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 17 Mar 2007 11:25:17 +0900 Subject: [SciPy-user] FEM In-Reply-To: <45FB4CFF.5050901@gmail.com> References: <45FB4CFF.5050901@gmail.com> Message-ID: Howdy, I don't do FEA, but the things I do are not so different. You might find the matlab comparison web page useful: http://www.scipy.org/NumPy_for_Matlab_Users Since I wrote a lot that page, I won't repeat the pros and cons arguments here in detail, but rather refer you to that page. Here's a quick summary of what I see to be the highlights: * Python/Numpy/Scipy are much better when it comes to the language itself and software development aspects, integrating with native code in C/C++/Fortran, and applying spot-optimizations. * Matlab is better (for now) when it comes to graphical debugging. * Matlab is better (for now) when it comes to finding free code to solve standard mathematical problem X. * Matlab is better when it comes to profiling. But keep on evangelizing. As far as I see it, the advantages of Python are intrinsic (it's based on a good language with broad community), whereas Matlab's are temporary and derive primarily from having a large head start. Finally, especially for academia I think the licensing terms are a very important consideration. --bb On 3/17/07, H?kan Jakobsson wrote: > Hi list, > Python/scipy newbie here. I'm currently writing my master's thesis in > computational mathematics - discontinuous galerkin methods to be > precise. 
I'v been doing all the numerics for my thesis in Matlab but > stumbled upon Python and scipy by chance. As a little experiment I > translated one of my solvers into Python using numpy, scipy and the > pysparse packages, and I was very impressed with the effort/performance > ratio. Now, to the point.. > > If you're doing finite element analysis, what are you're experiences > using Python? Do you use Python as your main development tool? In > conjunction with C, Fortran? Is there any drawbacks with using Python > for this type of work? Anything else? > > I hope I'm not asking to much here, but it would be really interesting > to know a bit about what the situation's like. I think Python and scipy > is really great, and now I'm trying to convince everyone else... > > Please, share with me your experiences. > > /H?kan J > From josh8912 at yahoo.com Sat Mar 17 01:10:30 2007 From: josh8912 at yahoo.com (JJ) Date: Fri, 16 Mar 2007 22:10:30 -0700 (PDT) Subject: [SciPy-user] use of Jacobian function in leastsq Message-ID: <508387.65137.qm@web54007.mail.yahoo.com> Hello: This is a question based on previous discussions by Brendan Simons and Travis (RE: optimize.leastsq -> fjac and the covariance redux). I am wondering why use of the Jacobian function in the leastsq function does not seem to work. There is probably a very simple answer. I am trying to use a Jacobian function for a different problem and am using this as a test. The code is as follows, and you can see that C is not equal to C2: from scipy import * from scipy.optimize import leastsq x = array([0., 1., 2., 3., 4., 5.]).astype('d') coeffs = [4., 3., 5., 2.] yErrs = array([0.1, 0.2, -0.1, 0.05, 0, -.02]).astype('d') m = len(x) n = len(coeffs) def evalY(p, x): """a simple cubic""" return x**3 * p[3] + x**2 * p[2] + x * p[1] + p[0] def J(p, y, x): """the jacobian of a cubic (not actually a function of y, p since resid is linear in p) """ print '\n in J' result = zeros([m,n], 'd') result[:, 0] = ones(m, 'd') result[:, 1] = x result[:, 2] = x**2 result[:, 3] = x**3 print result return result def resid(p, y, x): return y - evalY(p, x) y_true = evalY(coeffs, x) y_meas = y_true + yErrs p0 = [3., 2., 4., 1.] pMin, C, infodict, ier, mesg = leastsq(resid, p0, args=(y_meas, x), full_output=True) pMin2, C2, infodict2, ier2, mesg2 = leastsq(resid, p0, Dfun=J, args=(y_meas, x), full_output=True) #check against know algorithms for computing C JAtPMin = J(pMin, None, x) Q_true, R_true = linalg.qr(JAtPMin) C_true = linalg.inv(dot(transpose(JAtPMin), JAtPMin)) C_true2 = linalg.inv(dot(transpose(R_true), R_true)) print 'pMin' print pMin print 'pMin2' print pMin2 print '\nC' print C print '\nC2' print C2 print '\nC_true' print C_true print '\nC_true2' print C_true2 Thanks, JJ ____________________________________________________________________________________ Looking for earth-friendly autos? Browse Top Cars by "Green Rating" at Yahoo! Autos' Green Center. http://autos.yahoo.com/green_center/ From ryanlists at gmail.com Sat Mar 17 11:08:41 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 17 Mar 2007 10:08:41 -0500 Subject: [SciPy-user] FEM In-Reply-To: References: <45FB4CFF.5050901@gmail.com> Message-ID: I think the question is a good one and very appropriate when you consider the potential risk you are taking and the amount of time it could take to really learn all you need to know to make a truly informed decision. I switched from Matlab to Scipy/Numpy/Matplotlib/Mayavi/IPython half way through my Ph.D. work and it was a great decision. 
But, I didn't do any FEA in Python. F2py makes using native FORTRAN code very easy. Google for Python FEA/FEM. I know I have run across some packages, but I have never looked at it in detail. I have never been able to quantify it (others have had some success), but in my experience, Python code is much easier to write and debug than Matlab code. As a simple example, if I need to operate on every item in a vector (and can't come up with a good vectorized solution), in Matlab I need to write for i in length(vector), item = vector(i); {do something with item} end where in Python it is just for item in vector: {do something with item} That is a really simple example, but it shows to areas where Matlab enables me to introduce bugs: 1. If I have nested for loops, I might screw up the indices (i,j,k,...) 2. It lets me overwrite the imaginary number i I lost countless hours of productivity chasing down those inds of bugs. (You only make the i as index mistake once or twice). And don't get me started about making sure you remember the stinking semicolons.... FWIW, Ryan On 3/16/07, Bill Baxter wrote: > Howdy, I don't do FEA, but the things I do are not so different. > You might find the matlab comparison web page useful: > http://www.scipy.org/NumPy_for_Matlab_Users > Since I wrote a lot that page, I won't repeat the pros and cons > arguments here in detail, but rather refer you to that page. > > Here's a quick summary of what I see to be the highlights: > * Python/Numpy/Scipy are much better when it comes to the language > itself and software development aspects, integrating with native code > in C/C++/Fortran, and applying spot-optimizations. > > * Matlab is better (for now) when it comes to graphical debugging. > * Matlab is better (for now) when it comes to finding free code to > solve standard mathematical problem X. > * Matlab is better when it comes to profiling. > > But keep on evangelizing. As far as I see it, the advantages of > Python are intrinsic (it's based on a good language with broad > community), whereas Matlab's are temporary and derive primarily from > having a large head start. > > Finally, especially for academia I think the licensing terms are a > very important consideration. > > --bb > > On 3/17/07, H?kan Jakobsson wrote: > > Hi list, > > Python/scipy newbie here. I'm currently writing my master's thesis in > > computational mathematics - discontinuous galerkin methods to be > > precise. I'v been doing all the numerics for my thesis in Matlab but > > stumbled upon Python and scipy by chance. As a little experiment I > > translated one of my solvers into Python using numpy, scipy and the > > pysparse packages, and I was very impressed with the effort/performance > > ratio. Now, to the point.. > > > > If you're doing finite element analysis, what are you're experiences > > using Python? Do you use Python as your main development tool? In > > conjunction with C, Fortran? Is there any drawbacks with using Python > > for this type of work? Anything else? > > > > I hope I'm not asking to much here, but it would be really interesting > > to know a bit about what the situation's like. I think Python and scipy > > is really great, and now I'm trying to convince everyone else... > > > > Please, share with me your experiences. 
> > > > /H?kan J > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nvf at uwm.edu Sat Mar 17 11:35:02 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Sat, 17 Mar 2007 10:35:02 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? Message-ID: On 3/17/07, scipy-user-request at scipy.org wrote: > Message: 3 > Date: Fri, 16 Mar 2007 15:31:35 -0500 > From: Robert Kern > Subject: Re: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: > _fftpack.so problems? > To: SciPy Users List > Message-ID: <45FAFEA7.2010403 at gmail.com> > Content-Type: text/plain; charset=UTF-8 > Nick Fotopoulos wrote: > > I eventually got it to build using "python setup.py build_src > > build_clib --fcompiler=gnu95 build_ext -lpython --fcompiler=gnu95 > > build", but unsurprisingly, it doesn't work. I think it's picking up > > the system python's libpython: > > Yes, because you added -lpython which is not how you need to link against the > Python libraries for framework builds of Python. I was guessing. Thanks for the correction. > > nvf at dirac:~$ python -c "import scipy; scipy.test(10,10)" > > Fatal Python error: Interpreter not initialized (version mismatch?) > > Abort trap > > nvf at dirac:~$ locate libpython > > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython.dylib > > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.3.dylib > > /Developer/SDKs/MacOSX10.4u.sdk/usr/lib/libpython2.dylib > > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config/libpython2.4.a > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/config/libpython2.5.a > > /usr/lib/libpython.dylib > > /usr/lib/libpython2.3.dylib > > /usr/lib/libpython2.dylib > > > > Those libpython.dylibs just simlink to the libpython2.3.dylib. I > > would have thought that the MacPython installer would provide dynamic > > libraries in addition to the static libraries. > > No, everything should be contained in /Library/Frameworks/Python.framework . In > such a case, the "shared library" is actually the file > /Library/Frameworks/Python.framework/Python . It gets linked when "-framework > Python" is part of the link command, as distutils does provided it is not > interfered with. Huh, I wouldn't have guessed this on my own. > Can you supply the output from an unsuccessful build again? Don't do -lpython > this time. collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -Wall -bundle build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o -L/opt/lscsoft/non-lsc/lib -L/usr/local/lib/gcc/i386-apple-darwin8.8.1/4.3.0 -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with exit status 1 Many thanks for the information and for taking a shot at this. Take care, Nick From hasslerjc at comcast.net Sat Mar 17 11:43:05 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 17 Mar 2007 11:43:05 -0400 Subject: [SciPy-user] FEM In-Reply-To: References: <45FB4CFF.5050901@gmail.com> Message-ID: <45FC0C89.8010104@comcast.net> An HTML attachment was scrubbed... 
URL: From dschult at colgate.edu Sat Mar 17 13:00:44 2007 From: dschult at colgate.edu (Dan Schult) Date: Sat, 17 Mar 2007 13:00:44 -0400 Subject: [SciPy-user] ld unknown flag macosx_version_min Message-ID: I've got a Mac OSx 10.4.8 machine and am compiling scipy according to the instructions on the webpage. I've got gcc 4.0.0 gfortran 4.3.0 fftw3.0 and svn versions of numpy and scipy. My python is version 2.5. Building numpy goes smoothly, but when I try scipy I have an ld error. It seems to be using a linker option I am not familiar with and I can't find it in the package anywhere. Could it be coming from the python 2.5 binary I downloaded? Anyway, how do I turn it off or get an ld that accepts this option? The option is macosx_version_min Thanks for your help! Dan Schult python setup.py build_src build_clib --fcompiler=gnu95 build_ext -- fcompiler=gnu95 build ... Lots of stuff as usual, ending with ... running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'scipy.fftpack._fftpack' extension compiling C sources C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -Wall compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/ src.macosx-10.3-ppc-2.5 -I/Library/Frameworks/Python.framework/ Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/ Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' /usr/local/bin/gfortran -Wall -undefined dynamic_lookup -bundle build/ temp.macosx-10.3-ppc-2.5/build/src.macosx-10.3-ppc-2.5/Lib/fftpack/ _fftpackmodule.o build/temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/ zfft.o build/temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/drfft.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.3-ppc-2.5/build/src.macosx-10.3-ppc-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.8.0/4.3.0 -Lbuild/temp.macosx-10.3-ppc-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.3-ppc-2.5/scipy/fftpack/_fftpack.so /usr/bin/ld: unknown flag: -macosx_version_min collect2: ld returned 1 exit status /usr/bin/ld: unknown flag: -macosx_version_min collect2: ld returned 1 exit status error: Command "/usr/local/bin/gfortran -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-ppc-2.5/build/ src.macosx-10.3-ppc-2.5/Lib/fftpack/_fftpackmodule.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/zfft.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/drfft.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.3-ppc-2.5/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.3-ppc-2.5/build/src.macosx-10.3-ppc-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.8.0/4.3.0 -Lbuild/temp.macosx-10.3-ppc-2.5 -ldfftpack -lfftw3 -lgfortran -o build/lib.macosx-10.3-ppc-2.5/scipy/fftpack/ _fftpack.so" failed with exit status 1 From robert.kern at gmail.com Sat Mar 17 15:41:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Mar 2007 14:41:33 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? 
In-Reply-To: References: Message-ID: <45FC446D.7000409@gmail.com> Nick Fotopoulos wrote: > > collect2: ld returned 1 exit status > error: Command "/usr/local/bin/gfortran -Wall -bundle > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o > -L/opt/lscsoft/non-lsc/lib > -L/usr/local/lib/gcc/i386-apple-darwin8.8.1/4.3.0 > -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o > build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with > exit status 1 > > Many thanks for the information and for taking a shot at this. Where is this coming from if not LDFLAGS? -L/opt/lscsoft/non-lsc/lib -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Mar 17 16:27:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Mar 2007 15:27:39 -0500 Subject: [SciPy-user] ld unknown flag macosx_version_min In-Reply-To: References: Message-ID: <45FC4F3B.6050302@gmail.com> Dan Schult wrote: > I've got a Mac OSx 10.4.8 machine and am compiling > scipy according to the instructions on the webpage. I've > got gcc 4.0.0 gfortran 4.3.0 fftw3.0 and svn versions of numpy > and scipy. My python is version 2.5. > Building numpy goes smoothly, but when I try scipy I have an ld error. > > It seems to be using a linker option I am not familiar with and I can't > find it in the package anywhere. Could it be coming from the > python 2.5 binary I downloaded? Anyway, how do I turn it off or get > an ld that accepts this option? > The option is macosx_version_min I have gcc 4.0.1 and gfortran 4.3.0 installed on my system, and I do not see this problem. Can you try upgrading to the latest version of Xcode (which should have gcc 4.0.1)? It's not coming from Python but, I suspect, gfortran. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lxander.m at gmail.com Sun Mar 18 08:18:19 2007 From: lxander.m at gmail.com (Alexander Michael) Date: Sun, 18 Mar 2007 08:18:19 -0400 Subject: [SciPy-user] FEM In-Reply-To: <45FB4CFF.5050901@gmail.com> References: <45FB4CFF.5050901@gmail.com> Message-ID: <525f23e80703180518q75557644ta39da20e32828100@mail.gmail.com> On 3/16/07, H?kan Jakobsson wrote: > Hi list, > Python/scipy newbie here. I'm currently writing my master's thesis in > computational mathematics - discontinuous galerkin methods to be > precise. I'v been doing all the numerics for my thesis in Matlab but > stumbled upon Python and scipy by chance. As a little experiment I > translated one of my solvers into Python using numpy, scipy and the > pysparse packages, and I was very impressed with the effort/performance > ratio. Now, to the point.. > > If you're doing finite element analysis, what are you're experiences > using Python? Do you use Python as your main development tool? In > conjunction with C, Fortran? Is there any drawbacks with using Python > for this type of work? Anything else? 
> > I hope I'm not asking to much here, but it would be really interesting > to know a bit about what the situation's like. I think Python and scipy > is really great, and now I'm trying to convince everyone else... > > Please, share with me your experiences. > > /H?kan J I haven't built any FEM work in Python, but it looks like others have given it a go (I remember seeing [1] and [2], for instance). If you are building matrix-less solvers (i.e. you're not explicitly forming the large sparse matrix for the whole system), then I think Python and NumPy could potentially take you a long way. If you're solver requires forming the complete sparse system matrix, then you'll need something more than NumPy for the sparse matrix representation. PySparse looks like a reasonable (but I haven't used it) candidate (). It provides iterative solvers and access to SuperLU (I personally like UMFPACK as well, but the matrix solver is secondary if this is not the focus of your research). I've used PETSc and C++ in my pre-Python days for building FEM, but it looks like some people have shoe-horned it into a Python-centric environment (). They also provide access to ParMETIS, which may be helpful depending on your goals. Honestly, though, if you are being productive in your current computational environment and Matlab meets your peformance and architectural requirements, then my *guess* would be that a switch to Python will cost you a month of progress. If Matlab won't scale to the size of problems you wish to solve, then you will need to move closer to the metal and Python over something like SuperLU or PETSc (if you need parallelization) will save you from working in C/C++ (and provide a better development environment). If your goal is to build flexible "end-user software," then I don't think Matlab will cut it architecturally and you will need to switch to something else anyway. [1] [2] From acorriga at gmu.edu Sun Mar 18 09:38:02 2007 From: acorriga at gmu.edu (Andrew Corrigan) Date: Sun, 18 Mar 2007 09:38:02 -0400 Subject: [SciPy-user] spsolve not working Message-ID: <45FD40BA.1020107@gmu.edu> I have the same problem mentioned below with scipy 0.5.2. I try to call spsolve and get an error message. This only happens when running in 64-bit Ubuntu, but doesn't happen in Windows XP with Enthought's Python. Can someone please help? Thanks, Andrew Traceback (most recent call last): File "/home/acorrigan/workspace/meshless/src/physics/potential/test1.py", line 75, in ? 
main() File "/home/acorrigan/workspace/meshless/src/physics/potential/test1.py", line 69, in main s_f = GeneralizedRBFInterpolant(points, values, operators, RadialKernel(ArbitrarySupport(Gaussian(1.0),1.0)), FixedGrid) File "/home/acorrigan/workspace/meshless/src/interpolation/interpolant/generalized_rbf.py", line 55, in __init__ self.resolve(values) File "/home/acorrigan/workspace/meshless/src/interpolation/interpolant/generalized_rbf.py", line 64, in resolve self.coefficients = linsolve.spsolve(self.A, values.swapaxes(0,1)).swapaxes(0,1) #.tocsr() File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 77, in spsolve mat, csc = _toCS_superLU( A ) File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 24, in _toCS_superLU mat = A.tocsc() File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2684, in tocsc return self.tocsr(nzmax).tocsc() File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2673, in tocsr data[k : k+len(row)] = self.data[i] ValueError: setting an array element with a sequence. Nils Wagner iam.uni-stuttgart.de> writes: > > Hi all, > > I would like to solve > > K_dyn x = f, > > where K_dyn is a sparse matrix. UMFPACK is not installed and I am using > the latest svn version. > > >>> K_dyn > <71987x71987 sparse matrix of type '' > with 3083884 stored elements (space for 3083884) > in Compressed Sparse Column format> > >>> f > <71987x1 sparse matrix of type '' > with 52 stored elements (space for 52) > in Compressed Sparse Column format> > >>> x = spsolve(K_dyn, f) > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/local/lib64/python2.4/site-packages/scipy/linsolve/linsolve.py", > line 75, in spsolve > b = asarray(b, dtype=data.dtype) > File "/usr/local/lib64/python2.4/site-packages/numpy/core/numeric.py", > line 132, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: setting an array element with a sequence. > > Is this a bug ? > > Nils > From jelle.feringa at ezct.net Sun Mar 18 10:49:18 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Sun, 18 Mar 2007 15:49:18 +0100 Subject: [SciPy-user] vector field integration Message-ID: <001401c7696c$9cf0a6f0$c000a8c0@JELLE> Has anyone worked on vector field integration in a numpy/scipy context? I'm evaluating my options to do so, but I have little experience in this, so that makes it hard to decipher which libraries might be of good use, from those that won't. Any pointers would be really appreciated! Cheers, -jelle From nwagner at iam.uni-stuttgart.de Sun Mar 18 12:02:29 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 18 Mar 2007 17:02:29 +0100 Subject: [SciPy-user] spsolve not working In-Reply-To: <45FD40BA.1020107@gmu.edu> References: <45FD40BA.1020107@gmu.edu> Message-ID: On Sun, 18 Mar 2007 09:38:02 -0400 Andrew Corrigan wrote: > I have the same problem mentioned below with scipy >0.5.2. I try to call > spsolve and get an error message. This only happens >when running in > 64-bit Ubuntu, but doesn't happen in Windows XP with >Enthought's > Python. Can someone please help? Fixed in svn (IIRC). Nils From lechtlr at yahoo.com Sun Mar 18 12:13:36 2007 From: lechtlr at yahoo.com (lechtlr) Date: Sun, 18 Mar 2007 09:13:36 -0700 (PDT) Subject: [SciPy-user] use of Jacobian function in leastsq Message-ID: <28602.56536.qm@web57911.mail.re3.yahoo.com> JJ: If you define the derivatives of the Jacobian function in columns instead of rows, it'll work better. 
Try the attached script. -Lex from scipy import * from scipy.optimize import leastsq x = array([0., 1., 2., 3., 4., 5.]).astype('d') coeffs = [4., 3., 5., 2.] yErrs = array([0.1, 0.2, -0.1, 0.05, 0,-.02]).astype('d') m = len(x) n = len(coeffs) def evalY(p, x): """a simple cubic""" return x**3 * p[3] + x**2 * p[2] + x * p[1] + p[0] def J(p, y, x): """the jacobian of a cubic (not actually a function of y, p since resid is linear in p) """ print '\n in J' result = zeros([n,m], 'd') result[0, :] = -ones(m, 'd') result[1, :] = -x result[2, :] = -x**2 result[3, :] = -x**3 print result return result def resid(p, y, x): return y - evalY(p, x) y_true = evalY(coeffs, x) y_meas = y_true + yErrs p0 = [1., 1., 1., 1.] pMin, C, infodict, ier, mesg = leastsq(resid, p0, args=(y_meas, x), full_output=True) pMin2, C2, infodict2, ier2, mesg2 = leastsq(resid, p0, Dfun=J, col_deriv=1, args=(y_meas, x), full_output=True) #check against known algorithms for computing C JAtPMin = J(pMin, None, x) Q_true, R_true = linalg.qr(JAtPMin) C_true = linalg.inv(dot(JAtPMin, transpose(JAtPMin))) C_true2 = linalg.inv(dot(R_true, transpose(R_true))) print 'pMin' print pMin print 'pMin2' print pMin2 print '\nC' print C print '\nC2' print C2 print '\nC_true' print C_true print '\nC_true2' print C_true2 --------------------------------- Don't be flakey. Get Yahoo! Mail for Mobile and always stay connected to friends. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nvf at uwm.edu Sun Mar 18 14:06:24 2007 From: nvf at uwm.edu (Nick Fotopoulos) Date: Sun, 18 Mar 2007 13:06:24 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? Message-ID: On 3/18/07, scipy-user-request at scipy.org wrote: > Message: 2 > Date: Sat, 17 Mar 2007 14:41:33 -0500 > From: Robert Kern > Subject: Re: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: > _fftpack.so problems? > To: SciPy Users List > Message-ID: <45FC446D.7000409 at gmail.com> > Content-Type: text/plain; charset=UTF-8 > > Nick Fotopoulos wrote: > > > > > collect2: ld returned 1 exit status > > error: Command "/usr/local/bin/gfortran -Wall -bundle > > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/fftpack/_fftpackmodule.o > > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfft.o > > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/drfft.o > > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zrfft.o > > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/src/zfftnd.o > > build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o > > -L/opt/lscsoft/non-lsc/lib > > -L/usr/local/lib/gcc/i386-apple-darwin8.8.1/4.3.0 > > -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lgfortran -o > > build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with > > exit status 1 > > > > Many thanks for the information and for taking a shot at this. > > Where is this coming from if not LDFLAGS? > > -L/opt/lscsoft/non-lsc/lib nvf at dirac:~/temp/scipy-svn$ cat site.cfg [fftw3] library_dirs = /opt/lscsoft/non-lsc/lib fftw3_libs = fftw3 include_dirs = /opt/lscsoft/non-lsc/include From acorriga at gmu.edu Sun Mar 18 15:27:04 2007 From: acorriga at gmu.edu (Andrew Corrigan) Date: Sun, 18 Mar 2007 19:27:04 +0000 (UTC) Subject: [SciPy-user] spsolve not working References: <45FD40BA.1020107@gmu.edu> Message-ID: Thanks. I'll give that a try. Nils Wagner iam.uni-stuttgart.de> writes: > Fixed in svn (IIRC). 
> > Nils > From sgarcia at olfac.univ-lyon1.fr Mon Mar 19 05:00:28 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Mon, 19 Mar 2007 10:00:28 +0100 Subject: [SciPy-user] Problem with weave blitz win32 In-Reply-To: <45FA661F.9020004@olfac.univ-lyon1.fr> References: <45FA661F.9020004@olfac.univ-lyon1.fr> Message-ID: <45FE512C.1020208@olfac.univ-lyon1.fr> Hi list, does anyone have experience a scipy.weave with blitz on winXP ? My still have the problem running the example array3d.py ... And I have few knowledge about compilation ... Any help would be wonderfull... Sam Samuel GARCIA wrote: > Hi list, > > I have problem running weave on windowsXP. > > Version : > Python 2.5 > MiGW 3.4.2 > Scipy 0.5.2 > Numpy 1.0.1 > > > array3d.py give me : > > numpy: > [[[ 0 1 2 3] > [ 4 5 6 7] > [ 8 9 10 11]] > > [[12 13 14 15] > [16 17 18 19] > [20 21 22 23]]] > Pure Inline: > > img[ 0][ 0]= 0 1 2 3 > img[ 0][ 1]= 4 5 6 7 > img[ 0][ 2]= 8 9 10 11 > img[ 1][ 0]= 12 13 14 15 > img[ 1][ 1]= 16 17 18 19 > img[ 1][ 2]= 20 21 22 23 > Blitz Inline: > > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:5: > warning: ignoring #pragma warning > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:6: > warning: ignoring #pragma warning > In file included from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/applics.h:400, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecexpr.h:32, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecpick.cc:16, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vecpick.h:293, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/vector.h:449, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/tinyvec.h:430, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array-impl.h:44, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:9: > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/mathfunc.h: In > static member function `static double > blitz::_bz_expm1::apply(P_numtype1)': > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/mathfunc.h:1353: > error: `::expm1' has not been declared > In file included from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/funcs.h:29, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/newet.h:29, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array/et.h:27, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array-impl.h:2515, > from > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:9: > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/funcs.h: In > static member function `static T_numtype1 > blitz::Fn_expm1::apply(T_numtype1)': > G:/Python25/lib/site-packages/scipy/weave/blitz/blitz/funcs.h:113: > error: `::expm1' has not been declared > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp: > In function `PyObject* file_to_py(FILE*, char*, char*)': > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:404: > warning: unused variable 'py_obj' > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp: 
> In function `blitz::Array > convert_to_blitz(PyArrayObject*, const char*) [with T = int, int N = 3]': > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:708: > instantiated from here > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp:657: > warning: unused variable 'stride_acc' > Traceback (most recent call last): > File "array3d.py", line 105, in > main() > File "array3d.py", line 101, in main > blitz_inline(arr) > File "array3d.py", line 89, in blitz_inline > weave.inline(code, ['arr'], type_converters=converters.blitz) > File "G:\Python25\Lib\site-packages\scipy\weave\inline_tools.py", > line 339, in inline > **kw) > File "G:\Python25\Lib\site-packages\scipy\weave\inline_tools.py", > line 447, in compile_function > verbose=verbose, **kw) > File "G:\Python25\Lib\site-packages\scipy\weave\ext_tools.py", line > 365, in compile > verbose = verbose, **kw) > File "G:\Python25\Lib\site-packages\scipy\weave\build_tools.py", > line 269, in build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "G:\Python25\Lib\site-packages\numpy\distutils\core.py", line > 174, in setup > return old_setup(**new_attr) > File "G:\Python25\lib\distutils\core.py", line 168, in setup > raise SystemExit, "error: " + str(msg) > distutils.errors.CompileError: error: Command "g++ -mno-cygwin -O2 > -Wall -IG:\Python25\lib\site-packages\scipy\weave > -IG:\Python25\lib\site-packages\scipy\weave\scxx > -IG:\Python25\lib\site-packages\scipy\weave\blitz > -IG:\Python25\lib\site-packages\numpy\core\include > -IG:\Python25\include -IG:\Python25\PC -c > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.cpp > -o > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Release\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\sc_49e94d1bdd1ad16917064c910093194f0.o" > failed with exit status 1 > > > And weave.test() give me : > > > Found 0 tests for scipy.weave.c_spec > Found 2 tests for scipy.weave.blitz_tools > building extensions here: > g:\docume~1\sgarcia\locals~1\temp\sgarcia\python25_compiled\m7 > Found 1 tests for scipy.weave.ext_tools > Found 9 tests for scipy.weave.build_tools > Found 0 tests for scipy.weave.inline_tools > Found 1 tests for scipy.weave.ast_tools > Warning: FAILURE importing tests for from '...ckages\\scipy\\weave\\wx_spec.pyc'> > G:\Python25\Lib\site-packages\scipy\weave\tests\test_wx_spec.py:16: > ImportError: No module named wxPython (in ) > Found 3 tests for scipy.weave.standard_array_spec > Found 74 tests for scipy.weave.size_check > Found 26 tests for scipy.weave.catalog > Found 16 tests for scipy.weave.slice_handler > Found 0 tests for __main__ > ...warning: specified build_dir '_bad_path_' does not exist or is not > writable. Trying default locations > .....warning: specified build_dir '_bad_path_' does not exist or is > not writable. Trying default locations > ....................................F..F..................................................................removing > 'g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test' (and everything > under it) > error removing g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test: > g:\docume~1\sgarcia\locals~1\temp\tmpga71mmcat_test: Le r?pertoire > n'est pas vide > .removing 'g:\docume~1\sgarcia\locals~1\temp\tmpimmvs6cat_test' (and > everything under it) > ................. 
> ====================================================================== > FAIL: check_1d_3 > (scipy.weave.tests.test_size_check.test_dummy_array_indexing) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 168, in check_1d_3 > self.generic_1d('a[-11:]') > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 135, in generic_1d > self.generic_wrap(a,expr) > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 127, in generic_wrap > self.generic_test(a,expr,desired) > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 123, in generic_test > assert_array_equal(actual,desired, expr) > File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line > 223, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line > 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not equal > a[-11:] > (mismatch 100.0%) > x: array([1]) > y: array([10]) > > ====================================================================== > FAIL: check_1d_6 > (scipy.weave.tests.test_size_check.test_dummy_array_indexing) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 174, in check_1d_6 > self.generic_1d('a[:-11]') > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 135, in generic_1d > self.generic_wrap(a,expr) > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 127, in generic_wrap > self.generic_test(a,expr,desired) > File > "G:\Python25\Lib\site-packages\scipy\weave\tests\test_size_check.py", > line 123, in generic_test > assert_array_equal(actual,desired, expr) > File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line > 223, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "G:\Python25\lib\site-packages\numpy\testing\utils.py", line > 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not equal > a[:-11] > (mismatch 100.0%) > x: array([9]) > y: array([0]) > > ---------------------------------------------------------------------- > Ran 132 tests in 2.266s > > FAILED (failures=2) > >Exit code: 0 > > > And of course my main problem is that some of my code that work on > linux debian does'nt work on win32. > (But it used to work on the old enthon distribution version!) > > Any idea ? > > Thank > > Sam > > > > -- > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Samuel Garcia > Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. > CNRS - UMR5020 - Universite Claude Bernard LYON 1 > Equipe logisique et technique > 50, avenue Tony Garnier > 69366 LYON Cedex 07 > FRANCE > T?l : 04 37 28 74 64 > Fax : 04 37 28 76 01 > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. 
CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logisique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nogradi at gmail.com Mon Mar 19 06:28:13 2007 From: nogradi at gmail.com (Daniel Nogradi) Date: Mon, 19 Mar 2007 11:28:13 +0100 Subject: [SciPy-user] from maple to scipy/numpy Message-ID: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> [I sent this message to c.l.p and was told to come here] I'm just getting started with numpy/scipy and first would like to get a view of what it can do and what it can't. The main idea is to move from maple to python and the first thing that poped up is the fact that maple is very convenient for both formal manipulations and exact integer calculations. For instance if a matrix has integer entries and the eigenvalues are integers maple can find these exactly. Can numpy/scipy do this? Or the eigenvalues will always be floating point numbers? A more general question, to what extent can numpy/scipy do symbolic (formal) manipulations? (I know this is rather vague, but for those of you who are familiar with maple/mathematica I guess it's clear.) From bruno.chazelas at ias.u-psud.fr Mon Mar 19 06:37:58 2007 From: bruno.chazelas at ias.u-psud.fr (bruno) Date: Mon, 19 Mar 2007 11:37:58 +0100 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> Message-ID: <45FE6806.4040003@ias.u-psud.fr> Hello, though I am rather new to python/scipy I think it is more focalized on numerical calculation than on symbolical calculation. A more straight forward replacement for Maple would be Maxima I think. http://maxima.sourceforge.net/ Bruno Daniel Nogradi a ?crit : > [I sent this message to c.l.p and was told to come here] > > I'm just getting started with numpy/scipy and first would like to get > a view of what it can do and what it can't. The main idea is to move > from maple to python and the first thing that poped up is the fact > that maple is very convenient for both formal manipulations and exact > integer calculations. For instance if a matrix has integer entries and > the eigenvalues are integers maple can find these exactly. > > Can numpy/scipy do this? Or the eigenvalues will always be floating > point numbers? > > A more general question, to what extent can numpy/scipy do symbolic > (formal) manipulations? (I know this is rather vague, but for those of > you who are familiar with maple/mathematica I guess it's clear.) > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jan at aims.ac.za Mon Mar 19 06:42:38 2007 From: jan at aims.ac.za (Jan Groenewald) Date: Mon, 19 Mar 2007 12:42:38 +0200 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <45FE6806.4040003@ias.u-psud.fr> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> Message-ID: <20070319104238.GK8808@aims.ac.za> Hi On Mon, Mar 19, 2007 at 11:37:58AM +0100, bruno wrote: > though I am rather new to python/scipy I think it is more focalized on > numerical calculation than on symbolical calculation. A more straight > forward replacement for Maple would be Maxima I think. 
> > http://maxima.sourceforge.net/ SAGE is probably what you need. It integrates maxima, Gap, NLT, Pari-GP, with python and scipy. http://modular.math.washington.edu/sage/ To install on debian-based systems (binary): http://modular.math.washington.edu/SAGEbin/linux_32bit/sage-2.3-debian-i686-Linux.tar.gz Or elsewhere: http://modular.math.washington.edu/sage/download.html Their blurb: SAGE is free open source math software that supports research and teaching in algebra, geometry, number theory, cryptography, numerical computation, and related areas. Both the SAGE development model and the technology in SAGE itself is distinguished by an extremely strong emphasis on openness, community, cooperation, and collaboration: we are building the car, not reinventing the wheel. Our overall goal is to create a viable free open source alternative to commercial mathematics software. SAGE includes the following core software: Group theory and combinatorics GAP, NetworkX Symbolic computation and Calculus Maxima Commutative algebra Singular Number theory PARI, MWRANK, NTL Graphics Matplotlib Numerical methods GSL, Numpy Mainstream programming language Python Interactive shell IPython Graphical User Interface The SAGE Notebook Versioned Source Tracking Mercurial HG cheers, Jan -- .~. /V\ Jan Groenewald /( )\ www.aims.ac.za ^^-^^ From cimrman3 at ntc.zcu.cz Mon Mar 19 06:44:15 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 19 Mar 2007 11:44:15 +0100 Subject: [SciPy-user] FEM In-Reply-To: <45FB4CFF.5050901@gmail.com> References: <45FB4CFF.5050901@gmail.com> Message-ID: <45FE697F.8090308@ntc.zcu.cz> H?kan Jakobsson wrote: > Hi list, > Python/scipy newbie here. I'm currently writing my master's thesis in > computational mathematics - discontinuous galerkin methods to be > precise. I'v been doing all the numerics for my thesis in Matlab but > stumbled upon Python and scipy by chance. As a little experiment I > translated one of my solvers into Python using numpy, scipy and the > pysparse packages, and I was very impressed with the effort/performance > ratio. Now, to the point.. > > If you're doing finite element analysis, what are you're experiences > using Python? Do you use Python as your main development tool? In > conjunction with C, Fortran? Is there any drawbacks with using Python > for this type of work? Anything else? > > I hope I'm not asking to much here, but it would be really interesting > to know a bit about what the situation's like. I think Python and scipy > is really great, and now I'm trying to convince everyone else... > > Please, share with me your experiences. I have switched to Python + scipy + matplotlib + ... combo two years ago and never regretted that decision. You can see the result at http://ui505p06-mbs.ntc.zcu.cz/sfe/SFE. Before I coded a lot in matlab (a heart simulation FE code, with inverse problem solution capability), writing critical parts as C mex-files. It was rather painful comparing to ease of SWIG or f2py or ctypes or (plug your favourite wrapper generator here). The pass-by-value (all function arguments are immutable) concept of matlab was one of the main obstacles to get a decent speed - we had to use dirty tricks to override this. The combination Python + C (or fortran or...) is a really powerful concept. Moreover numpy and scipy advances in big paces, so if you miss something now, there is a great chance you will not miss it in a few weeks... cheers, r. 
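To make that point concrete, here is a minimal ctypes sketch of calling a small C routine on a NumPy array without any copying; the shared library name "libmysolver.so" and its scale() function are made-up examples for illustration, not anything from this thread:

import ctypes
import numpy as np

# Assumed C signature (hypothetical): void scale(double *x, int n, double a)
lib = ctypes.CDLL("./libmysolver.so")
lib.scale.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int, ctypes.c_double]
lib.scale.restype = None

x = np.arange(10, dtype=np.float64)
# The array's own buffer is handed to C, so the routine can modify x in place;
# no copy of the data is made, unlike a pass-by-value mex interface.
lib.scale(x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), x.size, 2.0)

For Fortran code, f2py plays the same role and generates the wrapper automatically.
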
From david at ar.media.kyoto-u.ac.jp Mon Mar 19 06:47:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 Mar 2007 19:47:42 +0900 Subject: [SciPy-user] FEM In-Reply-To: <45FE697F.8090308@ntc.zcu.cz> References: <45FB4CFF.5050901@gmail.com> <45FE697F.8090308@ntc.zcu.cz> Message-ID: <45FE6A4E.1080206@ar.media.kyoto-u.ac.jp> Robert Cimrman wrote: > > Before I coded a lot in matlab (a heart simulation FE code, with inverse > problem solution capability), writing critical parts as C mex-files. It > was rather painful comparing to ease of SWIG or f2py or ctypes or (plug > your favourite wrapper generator here). The pass-by-value (all function > arguments are immutable) concept of matlab was one of the main obstacles > to get a decent speed - we had to use dirty tricks to override this. To be exact, matlab uses COW to pass arguments efficiently. Otherwise, I agree entirely, interfacing matlab with C is really a painful experience: there is no way to do it efficiently and nicely at the same time, the C-API of matlab being far from complete for anything non trivial. That was one of my main reason to try (and stay with afterwards) python. David From nauss at lcrs.de Mon Mar 19 06:52:51 2007 From: nauss at lcrs.de (Thomas Nauss) Date: Mon, 19 Mar 2007 11:52:51 +0100 Subject: [SciPy-user] Next neighbor interpolation inside regular gridded arrays Message-ID: <45FE6B83.10902@lcrs.de> Hi everyone, I have projected raw swath satellite data to map coordinates and stored the data values in a 2D target array that has the geometry of the output dataset already. I have initialized the target array using Numeric.zeros and since not all fields of the target array had corresponding fields in the input dataset, some fields of the target array have still the value 0. Is there a function which performs a next neighbor interpolation for each array field with a specified value? Cheers, Thomas From nogradi at gmail.com Mon Mar 19 06:54:13 2007 From: nogradi at gmail.com (Daniel Nogradi) Date: Mon, 19 Mar 2007 11:54:13 +0100 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <20070319104238.GK8808@aims.ac.za> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> Message-ID: <5f56302b0703190354ub83a380u319c38ec31dd4c9d@mail.gmail.com> > > though I am rather new to python/scipy I think it is more focalized on > > numerical calculation than on symbolical calculation. A more straight > > forward replacement for Maple would be Maxima I think. > > > > http://maxima.sourceforge.net/ > > SAGE is probably what you need. It integrates maxima, Gap, NLT, > Pari-GP, with python and scipy. > > http://modular.math.washington.edu/sage/ > > To install on debian-based systems (binary): > http://modular.math.washington.edu/SAGEbin/linux_32bit/sage-2.3-debian-i686-Linux.tar.gz > > Or elsewhere: > http://modular.math.washington.edu/sage/download.html Thanks very much for the advice. These two packages (sage and maxima) seem all right for symbolic manipulation indeed. Concerning integer arithmetics for example for matrix eigenvalues aren't they a bit of an overkill? Can't numpy/scipy do that by itself? From nielsellegaard at gmail.com Mon Mar 19 05:32:13 2007 From: nielsellegaard at gmail.com (Niels L. 
Ellegaard) Date: Mon, 19 Mar 2007 10:32:13 +0100 Subject: [SciPy-user] FEM References: <45FB4CFF.5050901@gmail.com> Message-ID: <87aby9lgte.fsf@gmail.com> H?kan Jakobsson writes: > If you're doing finite element analysis, what are you're experiences > using Python? Do you use Python as your main development tool? In > conjunction with C, Fortran? Is there any drawbacks with using Python > for this type of work? Anything else? You may wish to have a look at fenics. It's a FEM engine with a python frontend which I believe is based on numpy. I must admit that I haven't tried the program, but judging from the webpage it is pretty cool :) http://www.fenics.org/ Niels From bryanv at enthought.com Mon Mar 19 09:23:10 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Mon, 19 Mar 2007 08:23:10 -0500 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <20070319104238.GK8808@aims.ac.za> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> Message-ID: <45FE8EBE.1040002@enthought.com> SAGE looks pretty cool, I would just add PyDX http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html as a package for differential geometry. Jan Groenewald wrote: > SAGE includes the following core software: > > Group theory and combinatorics GAP, NetworkX > Symbolic computation and Calculus Maxima > Commutative algebra Singular > Number theory PARI, MWRANK, NTL > Graphics Matplotlib > Numerical methods GSL, Numpy > Mainstream programming language Python > Interactive shell IPython > Graphical User Interface The SAGE Notebook > Versioned Source Tracking Mercurial HG > > cheers, > Jan From hakan.jakobsson at gmail.com Mon Mar 19 10:51:24 2007 From: hakan.jakobsson at gmail.com (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Mon, 19 Mar 2007 15:51:24 +0100 Subject: [SciPy-user] FEM In-Reply-To: <45FE697F.8090308@ntc.zcu.cz> References: <45FB4CFF.5050901@gmail.com> <45FE697F.8090308@ntc.zcu.cz> Message-ID: <687bb3e80703190751l582e407epaf31d0d685fff50d@mail.gmail.com> Thanks everyone for your replies, they have been most helpful. I've studied the pros and cons for python, and I'm fairly sure I'll be using python more and more. Your codes, Robert, seems very interesting. I've downloaded the slides from your website and I'll be looking through them. They seem to adresss many of the questions I'd like answered. Furthermore, It's comforting to see that people out there are actually writing research codes using python. I'm sure I'll be posting more questions here in the future. Again thanks, and I certainly will keep on evangelizing. /Hakan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andrew.Reid at nist.gov Mon Mar 19 11:03:08 2007 From: Andrew.Reid at nist.gov (Andrew Reid) Date: Mon, 19 Mar 2007 11:03:08 -0400 Subject: [SciPy-user] FEM In-Reply-To: <687bb3e80703190751l582e407epaf31d0d685fff50d@mail.gmail.com> References: <45FB4CFF.5050901@gmail.com> <45FE697F.8090308@ntc.zcu.cz> <687bb3e80703190751l582e407epaf31d0d685fff50d@mail.gmail.com> Message-ID: <20070319150308.GD14247@smithers.nist.gov> On Mon, Mar 19, 2007 at 03:51:24PM +0100, H?kan Jakobsson wrote: > Thanks everyone for your replies, they have been most helpful. I've studied > the pros and cons for python, and I'm fairly sure I'll be using python more > and more. But wait! It's not over yet! I'm part of a team working on an FEM project at NIST which uses Python quite heavily. 
Our brief is to address Materials Science users, so we have a lot of custom tools for adapting our mesh to user-provided microstructural image, and we also have an extensible interface for adding new types of user-provided custom constitutive rules. The actual FEM part is pretty standard. Our numerical work and solvers are all in C++, and we don't actually use SciPy/NumPy, we just use SWIG to get to the numerical code. The flexibility of Python has been crucial in working through our API and mesh-construction tools -- it has the right level of easy scriptability, and the right sort of object-orientation, for what we need. The project page is at , and there's a relatively recent talk (in PDF form) at , which very strongly resembles the talk I gave at SciPy2006. -- A. -- Dr. Andrew C. E. Reid, Guest Researcher Center for Theoretical and Computational Materials Science National Institute of Standards and Technology, Mail Stop 8910 Gaithersburg MD 20899 USA andrew.reid at nist.gov From dschult at colgate.edu Mon Mar 19 12:19:25 2007 From: dschult at colgate.edu (Dan Schult) Date: Mon, 19 Mar 2007 12:19:25 -0400 Subject: [SciPy-user] ld unknown flag macosx_version_min In-Reply-To: <45FC4F3B.6050302@gmail.com> References: <45FC4F3B.6050302@gmail.com> Message-ID: <3B3E8AF7-A043-4FD6-8EAC-673B200DF199@colgate.edu> On Mar 17, 2007, at 4:27 PM, Robert Kern wrote: > Dan Schult wrote: >> I've got a Mac OSx 10.4.8 machine and am compiling >> scipy according to the instructions on the webpage. I've >> got gcc 4.0.0 gfortran 4.3.0 fftw3.0 and svn versions of numpy >> and scipy. My python is version 2.5. >> Building numpy goes smoothly, but when I try scipy I have an ld >> error. > > I have gcc 4.0.1 and gfortran 4.3.0 installed on my system, and I > do not see > this problem. Can you try upgrading to the latest version of Xcode > (which should > have gcc 4.0.1)? It's not coming from Python but, I suspect, gfortran. > Robert Kern Indeed-- after an almost 1Gig download of XCode 2.4.1 to get gcc 4.0.1 it compiled and runs. :) The tests give me 4 failed, but these already have been noted on the email list. I assume if I had compiled gfortran with gcc4.0 it would have worked the first time. I always seem to have trouble when using precompiled binaries. Thanks very much Robert for the helpful tip! From robert.kern at gmail.com Mon Mar 19 12:43:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 Mar 2007 11:43:14 -0500 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <5f56302b0703190354ub83a380u319c38ec31dd4c9d@mail.gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <5f56302b0703190354ub83a380u319c38ec31dd4c9d@mail.gmail.com> Message-ID: <45FEBDA2.9070902@gmail.com> Daniel Nogradi wrote: > Concerning integer arithmetics for example for matrix eigenvalues > aren't they a bit of an overkill? Can't numpy/scipy do that by itself? I guess they could if someone were to implement such algorithms for us. The focus of contributions has largely been for floating point algorithms, so we use LAPACK to do most of our linear algebra. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ggellner at uoguelph.ca Mon Mar 19 12:55:34 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 19 Mar 2007 12:55:34 -0400 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <45FE8EBE.1040002@enthought.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> Message-ID: <20070319165534.GG31245@encolpuis> I second PyDX it is great, I use it mostly for the automatic derivatives, very nice if you don't need compiled speed (it is still quite fast anyway). Gabriel On Mon, Mar 19, 2007 at 08:23:10AM -0500, Bryan Van de Ven wrote: > > SAGE looks pretty cool, I would just add PyDX > > http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html > > as a package for differential geometry. > > > Jan Groenewald wrote: > > SAGE includes the following core software: > > > > Group theory and combinatorics GAP, NetworkX > > Symbolic computation and Calculus Maxima > > Commutative algebra Singular > > Number theory PARI, MWRANK, NTL > > Graphics Matplotlib > > Numerical methods GSL, Numpy > > Mainstream programming language Python > > Interactive shell IPython > > Graphical User Interface The SAGE Notebook > > Versioned Source Tracking Mercurial HG > > > > cheers, > > Jan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From simon at arrowtheory.com Mon Mar 19 13:31:34 2007 From: simon at arrowtheory.com (Simon Burton) Date: Mon, 19 Mar 2007 10:31:34 -0700 Subject: [SciPy-user] from maple to scipy/numpy In-Reply-To: <20070319165534.GG31245@encolpuis> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> Message-ID: <20070319103134.f71de165.simon@arrowtheory.com> On Mon, 19 Mar 2007 12:55:34 -0400 Gabriel Gellner wrote: > > I second PyDX it is great, I use it mostly for the automatic > derivatives, very nice if you don't need compiled speed (it is still > quite fast anyway). > > Gabriel wow, a user! (*blushing*). Simon. From stefan at sun.ac.za Mon Mar 19 16:24:43 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 19 Mar 2007 20:24:43 +0000 Subject: [SciPy-user] Next neighbor interpolation inside regular griddedarrays In-Reply-To: <45FE6B83.10902@lcrs.de> References: <45FE6B83.10902@lcrs.de> Message-ID: Hi Thomas On Mon, 19 Mar 2007 11:52:51 +0100, Thomas Nauss wrote: > Hi everyone, > I have projected raw swath satellite data to map coordinates and stored > the data values in a 2D target array that has the geometry of the output > dataset already. I have initialized the target array using Numeric.zeros > and since not all fields of the target array had corresponding fields in > the input dataset, some fields of the target array have still the value 0. > Is there a function which performs a next neighbor interpolation for > each array field with a specified value? Normally, the easiest way to work around this problem is to perform the process in reverse. I.e. for every coordinate in the output frame, do the inverse mapping to find the coordinate in the input frame, and, using interpolation, calculate the value. If you are interested, I have code available that does all sorts of warping, or you can take a look at scipy's ndimage.map_coordinates. 
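For illustration only (this is a generic sketch, not the warping code mentioned above, and the affine inverse mapping here is made up), the reverse approach with map_coordinates looks roughly like this:

import numpy
from scipy import ndimage

src = numpy.random.rand(100, 100)        # toy input image
rows, cols = numpy.mgrid[0:80, 0:80]     # coordinates of every pixel in the output frame
inv_rows = 0.9*rows + 5.0                # hypothetical inverse mapping: output -> input
inv_cols = 1.1*cols + 2.0
out = ndimage.map_coordinates(src, [inv_rows, inv_cols], order=1, mode='nearest')

order=1 gives bilinear interpolation; order=0 gives a plain nearest-neighbour lookup.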
Cheers St?fan From joishi at amnh.org Mon Mar 19 18:09:11 2007 From: joishi at amnh.org (J Oishi) Date: Mon, 19 Mar 2007 18:09:11 -0400 Subject: [SciPy-user] wavevector arrays In-Reply-To: References: Message-ID: <9F48FF87-3B7A-4763-9186-68AC32AF577A@amnh.org> Hi, I am performing a number of 3D FFTs using scipy, and I had a question as to how to setup 3 3D arrays of wavevectors, much like one might do with mgrid. However, because the FFT returns wavevectors running from 0,...,k_c, -k_c, ..., -1 (where k_c is the nyquist wavenumber) I'm not sure how to get mgrid to do this. In 2D, I could create two 1D kx and ky arrays and use meshgrid (for an even number of points): import numpy as N kx = 2*N.pi/Lx * N.concatenate(N.arange(0,nx/2-1),N.arange(-nx/2, 0)) ky = 2*N.pi/Ly * N.concatenate(N.arange(0,ny/2-1),N.arange(-ny/2, 0)) kk = meshgrid(kx,ky) where kk would have a shape (2,ny,nx). However, this (to my knowledge) doesn't generalize to 3D. I'm a total python/scipy neophyte, so any help you could provide would be quite appreciated. thanks, jeff From steve at shrogers.com Mon Mar 19 23:16:07 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Mon, 19 Mar 2007 21:16:07 -0600 Subject: [SciPy-user] John Backus R.I.P. Message-ID: <45FF51F7.8090009@shrogers.com> John Backus, the father of Fortran, `has died `_. From topengineer at gmail.com Tue Mar 20 03:20:45 2007 From: topengineer at gmail.com (Hui Chang Moon) Date: Tue, 20 Mar 2007 16:20:45 +0900 Subject: [SciPy-user] I can't find unit step function. Message-ID: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> Hello, users, Scipy has many convenient functions, so I can't find the function that I am willing to use. So, anybody help me find the unit step function. Thank you~. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.warde.farley at utoronto.ca Tue Mar 20 03:38:05 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 20 Mar 2007 03:38:05 -0400 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> Message-ID: <1174376285.28167.6.camel@rodimus> On Tue, 2007-03-20 at 16:20 +0900, Hui Chang Moon wrote: > Hello, users, > > Scipy has many convenient functions, so I can't find the function that > I am willing to use. > So, anybody help me find the unit step function. That might be just a little too basic for a built-in. But you can easily define it. def step(x): if x == 0: return 0.5 return x > 0 and 1 or 0 (Your idea of what constitutes 'the' unit step function may vary) Cheers, David From peridot.faceted at gmail.com Tue Mar 20 03:45:12 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 20 Mar 2007 03:45:12 -0400 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: <1174376285.28167.6.camel@rodimus> References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> Message-ID: On 20/03/07, David Warde-Farley wrote: > On Tue, 2007-03-20 at 16:20 +0900, Hui Chang Moon wrote: > > Hello, users, > > > > Scipy has many convenient functions, so I can't find the function that > > I am willing to use. > > So, anybody help me find the unit step function. > > That might be just a little too basic for a built-in. But you can easily > define it. 
> > def step(x): > if x == 0: return 0.5 > return x > 0 and 1 or 0 You might prefer that it be vectorized: In [7]: def step(x): ...: return asarray(x>=0,dtype=float) ...: In [8]: step(linspace(-1,1,9)) Out[8]: array([ 0., 0., 0., 0., 1., 1., 1., 1., 1.]) Though if you prefer step(0)==0.5 it's a bit of a pain. Anne From topengineer at gmail.com Tue Mar 20 03:54:35 2007 From: topengineer at gmail.com (Hui Chang Moon) Date: Tue, 20 Mar 2007 16:54:35 +0900 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> Message-ID: <296323b50703200054k4398ab0dje2d11012384bc242@mail.gmail.com> Thank you for your kindness~! Have a nice day~!! 2007/3/20, Anne Archibald : > > On 20/03/07, David Warde-Farley wrote: > > On Tue, 2007-03-20 at 16:20 +0900, Hui Chang Moon wrote: > > > Hello, users, > > > > > > Scipy has many convenient functions, so I can't find the function that > > > I am willing to use. > > > So, anybody help me find the unit step function. > > > > That might be just a little too basic for a built-in. But you can easily > > define it. > > > > def step(x): > > if x == 0: return 0.5 > > return x > 0 and 1 or 0 > > You might prefer that it be vectorized: > > In [7]: def step(x): > ...: return asarray(x>=0,dtype=float) > ...: > > In [8]: step(linspace(-1,1,9)) > Out[8]: array([ 0., 0., 0., 0., 1., 1., 1., 1., 1.]) > > Though if you prefer step(0)==0.5 it's a bit of a pain. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nauss at lcrs.de Tue Mar 20 04:19:01 2007 From: nauss at lcrs.de (Thomas Nauss) Date: Tue, 20 Mar 2007 09:19:01 +0100 Subject: [SciPy-user] Next neighbor interpolation inside regular griddedarrays In-Reply-To: References: <45FE6B83.10902@lcrs.de> Message-ID: <45FF98F5.8050401@lcrs.de> Hi St?fan, thank's for your response. I have checked map_coordinates but it does not feature next neighbor interpolation and as I understood ndimage requires equally-spaced data. I think the equally-spaced data is the main problem since the raw satellite data doesn't have it (neither lat nor long is continuously increasing along cols or rows). That's why I didn't use the inverse way because for that I had to go through the entire array for every output pixel to check the closest lat/long combination. I also tried PyNGL.natgrid to get an equally spaced input grid but it takes much to long (> 1e6 pixels) . I could of course warp the raw data to an equally-spaced (non-interpolated) lat/long coordination system so that you can access the value of a specific lat/long coordinate directly through the row/col number and perform an inverse projection afterwards - this should have a slightly better accuracy in the end. Do you have any other idea? Cheers Thomas Stefan van der Walt wrote: > Hi Thomas > > On Mon, 19 Mar 2007 11:52:51 +0100, Thomas Nauss wrote: >> Hi everyone, >> I have projected raw swath satellite data to map coordinates and stored >> the data values in a 2D target array that has the geometry of the output >> dataset already. I have initialized the target array using Numeric.zeros >> and since not all fields of the target array had corresponding fields in >> the input dataset, some fields of the target array have still the value 0. 
>> Is there a function which performs a next neighbor interpolation for >> each array field with a specified value? > > Normally, the easiest way to work around this problem is to perform the > process in reverse. I.e. for every coordinate in the output frame, do the > inverse mapping to find the coordinate in the input frame, and, using > interpolation, calculate the value. > > If you are interested, I have code available that does all sorts of warping, > or you can take a look at scipy's ndimage.map_coordinates. > > Cheers > St?fan > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From david.warde.farley at utoronto.ca Tue Mar 20 04:29:34 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 20 Mar 2007 04:29:34 -0400 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> Message-ID: <1174379374.5075.5.camel@rodimus> On Tue, 2007-03-20 at 03:45 -0400, Anne Archibald wrote: > In [7]: def step(x): > ...: return asarray(x>=0,dtype=float) > ...: > > In [8]: step(linspace(-1,1,9)) > Out[8]: array([ 0., 0., 0., 0., 1., 1., 1., 1., 1.]) > > Though if you prefer step(0)==0.5 it's a bit of a pain. For the sake of completeness and utter boredom, here's the vectorized version with step(0) == 0.5, and it's still a one-liner ;) def step(x): return asarray(x>0,dtype=float)+0.5*asarray(x==0,dtype=float) In [9]: step(arange(-5,5)) Out[9]: array([ 0. , 0. , 0. , 0. , 0. , 0.5, 1. , 1. , 1. , 1. ]) Cheers, David From raphael.langella at steria.com Tue Mar 20 07:08:14 2007 From: raphael.langella at steria.com (raphael langella) Date: Tue, 20 Mar 2007 12:08:14 +0100 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: <8feab0f2.b0f28fea@steria.com> I'm trying to build numpy and scipy on Solaris 8. The BLAS FAQ on netlib.org suggests using optimized BLAS librairies provided by computer vendor, like the SUN Performance Library. This library is supposed to provide enhanced and optimized version of BLAS and LAPACK. I happen to have Forte 7 installed, so I first tried to build against this library (libsunperf.a). I tried several versions and different compilation options, but I always get undefined symbols. Is this supported? As anyone ever succeeded in using this library to compile scipy and numpy? I also did some unsuccessful tests with Atlas. But, I think I'm now very close to have it working with the standard BLAS/LAPACK (from netlib.org). I have to use GNU ld (unresolved symbols when using Sun's ld). Numpy builds and tests without any error. I've got a few errors and failures with scipy (see attachment for the complete test report). Are these errors critical? I don't understand where they come from and how to correct them. Thanks for your help. Rapha?l Langella -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: scipy.test.txt URL: From hakan.jakobsson at gmail.com Tue Mar 20 07:27:18 2007 From: hakan.jakobsson at gmail.com (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Tue, 20 Mar 2007 12:27:18 +0100 Subject: [SciPy-user] FEM In-Reply-To: <20070319150308.GD14247@smithers.nist.gov> References: <45FB4CFF.5050901@gmail.com> <45FE697F.8090308@ntc.zcu.cz> <687bb3e80703190751l582e407epaf31d0d685fff50d@mail.gmail.com> <20070319150308.GD14247@smithers.nist.gov> Message-ID: <687bb3e80703200427m2477eccdwa3dd8b6722c311f4@mail.gmail.com> I see that now :-) Thanks for the input and the links to your project. Part of my plan is to look into SWIG and some of the other wrapper generators out there in the future. This does help a lot. /H?kan J On 3/19/07, Andrew Reid wrote: > > On Mon, Mar 19, 2007 at 03:51:24PM +0100, H?kan Jakobsson wrote: > > Thanks everyone for your replies, they have been most helpful. I've > studied > > the pros and cons for python, and I'm fairly sure I'll be using python > more > > and more. > > But wait! It's not over yet! > > I'm part of a team working on an FEM project at NIST which > uses Python quite heavily. Our brief is to address Materials > Science users, so we have a lot of custom tools for adapting > our mesh to user-provided microstructural image, and we also > have an extensible interface for adding new types of > user-provided custom constitutive rules. The actual FEM > part is pretty standard. > > Our numerical work and solvers are all in C++, > and we don't actually use SciPy/NumPy, we just use SWIG > to get to the numerical code. > > The flexibility of Python has been crucial in working through > our API and mesh-construction tools -- it has the right level of > easy scriptability, and the right sort of object-orientation, > for what we need. > > The project page is at , > and there's a relatively recent talk (in PDF form) at > < > http://www.cacr.caltech.edu/projects/danse/talks/kickoff/22-Reid/danse_oof.pdf > >, > which very strongly resembles the talk I gave at SciPy2006. > > -- A. > -- > Dr. Andrew C. E. Reid, Guest Researcher > Center for Theoretical and Computational Materials Science > National Institute of Standards and Technology, Mail Stop 8910 > Gaithersburg MD 20899 USA > andrew.reid at nist.gov > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericpellegrini at yahoo.fr Tue Mar 20 08:13:23 2007 From: ericpellegrini at yahoo.fr (Pellegrini Eric) Date: Tue, 20 Mar 2007 13:13:23 +0100 (CET) Subject: [SciPy-user] Problem to understand kstest Message-ID: <281487.627.qm@web23405.mail.ird.yahoo.com> Hi everybody, I have a set of points for which I would like to know if it follows a normal distribution. To do so, I would like to perform the KS test. Here is an example: x = range(1000) mean = scipy.stats.mean(x) std = scipy.stats.std(x) stats.kstest(x,'norm',args=(mean,std)) is it right ? If yes, how to set up the significance level and what is its default value ? is it a double-sided test ? Thank you ery much Eric Pellegrini --------------------------------- D?couvrez une nouvelle fa?on d'obtenir des r?ponses ? toutes vos questions ! Profitez des connaissances, des opinions et des exp?riences des internautes sur Yahoo! Questions/R?ponses. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From franckm at aims.ac.za Tue Mar 20 08:31:35 2007 From: franckm at aims.ac.za (Franck Kalala Mutombo) Date: Tue, 20 Mar 2007 14:31:35 +0200 Subject: [SciPy-user] animate solitons Message-ID: <45FFD427.3090900@aims.ac.za> hi, I have solved the sine-Gordon equation (which is a partial differential equation) and I found my solitons (the solution to that equation); it is easy to plot, but what I need is to animate it over a certain range. Do you know any python module which can allow me to animate my solitons? ps: a soliton is a solution of a partial differential equation; it is a plane wave moving at a certain speed, so I want to animate it as it moves. -- Franck African Institute for Mathematical Sciences -- www.aims.ac.za From ryanlists at gmail.com Tue Mar 20 08:32:19 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 20 Mar 2007 07:32:19 -0500 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: <1174379374.5075.5.camel@rodimus> References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> <1174379374.5075.5.camel@rodimus> Message-ID: Sorry to contribute to this discussion out of boredom and curiosity, but what about: y=where(t>0,1.0,0) (assuming t already exists as some time vector i.e. t=arange(-1,5,0.01)) Since where is built in, you could argue that step is built in as well :0 Ryan On 3/20/07, David Warde-Farley wrote: > On Tue, 2007-03-20 at 03:45 -0400, Anne Archibald wrote: > > > In [7]: def step(x): > > ...: return asarray(x>=0,dtype=float) > > ...: > > > > In [8]: step(linspace(-1,1,9)) > > Out[8]: array([ 0., 0., 0., 0., 1., 1., 1., 1., 1.]) > > > > Though if you prefer step(0)==0.5 it's a bit of a pain. > > For the sake of completeness and utter boredom, here's the vectorized > version with step(0) == 0.5, and it's still a one-liner ;) > > def step(x): > return asarray(x>0,dtype=float)+0.5*asarray(x==0,dtype=float) > > In [9]: step(arange(-5,5)) > Out[9]: array([ 0. , 0. , 0. , 0. , 0. , 0.5, 1. , 1. , 1. , > 1. ]) > > Cheers, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bruno.chazelas at ias.u-psud.fr Tue Mar 20 08:52:10 2007 From: bruno.chazelas at ias.u-psud.fr (bruno) Date: Tue, 20 Mar 2007 13:52:10 +0100 Subject: [SciPy-user] animate solitons In-Reply-To: <45FFD427.3090900@aims.ac.za> References: <45FFD427.3090900@aims.ac.za> Message-ID: <45FFD8FA.2080301@ias.u-psud.fr> Hello, If you are looking for graphic animation, you can do that with matplotlib http://www.scipy.org/Cookbook/Matplotlib/Animations Bruno Franck Kalala Mutombo a écrit : > hi, > > I have solved the sine-Gordon equation (which is a partial differential > equation) and I found my solitons (the solution to that equation); it is > easy to plot, but what I need is to animate it over a certain range. Do > you know any python module which can allow me to animate my solitons? > > ps: a soliton is a solution of a partial differential equation; it is a plane wave > moving at a certain speed, so I want to animate it as it moves. > > > From takanobu.amano at gmail.com Tue Mar 20 09:06:15 2007 From: takanobu.amano at gmail.com (Takanobu Amano) Date: Tue, 20 Mar 2007 22:06:15 +0900 Subject: [SciPy-user] wavevector arrays In-Reply-To: <9F48FF87-3B7A-4763-9186-68AC32AF577A@amnh.org> References: <9F48FF87-3B7A-4763-9186-68AC32AF577A@amnh.org> Message-ID: fftpack.fftshift may be useful for you.
kx_, ky_, kz_ = mgrid[-nx/2:+nx/2+1,-ny/2:+ny/2+1,-nz/2:+nz/2+1] kx = fftpack.fftshift(kx_) ky = fftpack.fftshift(ky_) kz = fftpack.fftshift(kz_) I'm not sure if this is what you need. Takanobu 2007/3/20, J Oishi : > Hi, > > I am performing a number of 3D FFTs using scipy, and I had a question > as to how to setup 3 3D arrays of wavevectors, much like one might do > with mgrid. However, because the FFT returns wavevectors running from > 0,...,k_c, -k_c, ..., -1 (where k_c is the nyquist wavenumber) I'm > not sure how to get mgrid to do this. > > In 2D, I could create two 1D kx and ky arrays and use meshgrid (for > an even number of points): > > import numpy as N > > kx = 2*N.pi/Lx * N.concatenate(N.arange(0,nx/2-1),N.arange(-nx/2, 0)) > ky = 2*N.pi/Ly * N.concatenate(N.arange(0,ny/2-1),N.arange(-ny/2, 0)) > > kk = meshgrid(kx,ky) > > where kk would have a shape (2,ny,nx). However, this (to my > knowledge) doesn't generalize to 3D. > > I'm a total python/scipy neophyte, so any help you could provide > would be quite appreciated. > > thanks, > > jeff > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From eike.welk at gmx.net Tue Mar 20 12:54:45 2007 From: eike.welk at gmx.net (Eike Welk) Date: Tue, 20 Mar 2007 17:54:45 +0100 Subject: [SciPy-user] Next neighbor interpolation inside regular gridded arrays In-Reply-To: <45FE6B83.10902@lcrs.de> References: <45FE6B83.10902@lcrs.de> Message-ID: <200703201754.45322.eike.welk@gmx.net> The Matplotlib cookbook mentions two libs that might be usefull for you: http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data Regards Eike. From cookedm at physics.mcmaster.ca Tue Mar 20 13:56:38 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 20 Mar 2007 13:56:38 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <8feab0f2.b0f28fea@steria.com> References: <8feab0f2.b0f28fea@steria.com> Message-ID: <20070320175638.GA29831@arbutus.physics.mcmaster.ca> On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella wrote: > I'm trying to build numpy and scipy on Solaris 8. > The BLAS FAQ on netlib.org suggests using optimized BLAS librairies > provided by computer vendor, like the SUN Performance Library. This > library is supposed to provide enhanced and optimized version of BLAS > and LAPACK. I happen to have Forte 7 installed, so I first tried to > build against this library (libsunperf.a). > I tried several versions and different compilation options, but I always > get undefined symbols. Is this supported? As anyone ever succeeded in > using this library to compile scipy and numpy? Haven't heard about anybody else trying it. What are the undefined symbols? One thing I can think of that I've run into before on a Alpha was that the Compaq Math Library only had LAPACK version 2, not version 3 (which had been already been out for several years when our group got the Alpha...). > I also did some unsuccessful tests with Atlas. But, I think I'm now very > close to have it working with the standard BLAS/LAPACK (from > netlib.org). I have to use GNU ld (unresolved symbols when using Sun's > ld). Numpy builds and tests without any error. I've got a few errors and > failures with scipy (see attachment for the complete test report). Are > these errors critical? I don't understand where they come from and how > to correct them. > Thanks for your help. 
> > Rapha?l Langella > calc-gen3-ci:/home/user1/ctcils/poladmin/rla/root $ python -c "import scipy; scipy.test(level=1)" > Found 1 tests for scipy.cluster.vq > Found 128 tests for scipy.linalg.fblas > Found 397 tests for scipy.ndimage > Found 10 tests for scipy.integrate.quadpack > Found 98 tests for scipy.stats.stats > Found 53 tests for scipy.linalg.decomp > Found 3 tests for scipy.integrate.quadrature > Found 96 tests for scipy.sparse.sparse > Found 20 tests for scipy.fftpack.pseudo_diffs > Found 6 tests for scipy.optimize.optimize > Found 6 tests for scipy.interpolate.fitpack > Found 6 tests for scipy.interpolate > Found 70 tests for scipy.stats.distributions > Found 10 tests for scipy.stats.morestats > Found 4 tests for scipy.linalg.lapack > Found 18 tests for scipy.fftpack.basic > Warning: FAILURE importing tests for > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) UMFPACK not built? > Warning: FAILURE importing tests for > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/optimize/lbfgsb.py:30: ImportError: ld.so.1: python: fatal: relocation error: file /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/optimize/_lbfgsb.so: symbol etime_: referenced symbol not found (in ?) This one looks like the Sun Fortran compiler doesn't have etime. Not serious unless you use optimize.fmin_l_bfgs_b. > Warning: FAILURE importing tests for > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/io/tests/test_mio.py:14: ImportError: cannot import name loadmat (in ?) Odd. loadmat comes from scipy.io.mio, and it's a pure python routine. Did you remove the build/ directory before trying again? Distutils can get confused if it's interrupted (it doesn't do dependency tracking that well). > Warning: The maximum number of subdivisions (50) has been achieved. > If increasing the limit yields no improvement it is advised to analyze > the integrand in order to determine the difficulties. If the position of a > local difficulty can be determined (singularity, discontinuity) one will > probably gain from splitting up the interval and calling the integrator > on the subranges. Perhaps a special-purpose integrator should be used. [clip] I don't know why this is happening; it doesn't occur on other platforms. > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** This is OK. > ====================================================================== > ERROR: check_normalize (scipy.sparse.tests.test_sparse.test_coo) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 658, in check_normalize > assert(zip(nrow,ncol,ndata) == sorted(zip(row,col,data))) #should sort by rows, then cols > NameError: global name 'sorted' is not defined Fixed in svn; a 2.4ism snuck in. 
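If you need to run the current scipy release under Python 2.3 before that fix reaches you, the 2.4 builtin is easy to emulate -- an untested sketch:

def sorted23(seq):
    # minimal stand-in for the Python 2.4 builtin sorted(): copy, sort, return a new list
    tmp = list(seq)
    tmp.sort()
    return tmp

and the failing assertion can then call sorted23(zip(row, col, data)) instead.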
> ====================================================================== > FAIL: check_double_integral (scipy.integrate.tests.test_quadpack.test_quad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/integrate/tests/test_quadpack.py", line 95, in check_double_integral > 5/6.0 * (b**3.0-a**3.0)) > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad > assert abs(value-tabledValue) < err, (value, tabledValue, err) > AssertionError: (4036.4928418995069, 5.8333333333333339, 1835.6069980919819) > > ====================================================================== > FAIL: check_triple_integral (scipy.integrate.tests.test_quadpack.test_quad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/integrate/tests/test_quadpack.py", line 106, in check_triple_integral > 8/3.0 * (b**4.0 - a**4.0)) > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad > assert abs(value-tabledValue) < err, (value, tabledValue, err) > AssertionError: (8.6265941452283596e-09, 40.0, 3.0164375235041808e-09) These look like the same problem as the warnings before about subdivisions. Something funny. > ====================================================================== > FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr > assert_array_almost_equal(w,exact_w) > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal > header='Arrays are not almost equal') > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 33.3333333333%) > x: array([-0.66992444, 0.48769474, 9.18222618], dtype=float32) > y: array([-0.66992434, 0.48769389, 9.18223045]) > ====================================================================== > FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange > assert_array_almost_equal(w,exact_w[rslice]) > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal > header='Arrays are not almost equal') > File "/home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 33.3333333333%) > x: array([-0.66992444, 0.48769474, 9.18222618], dtype=float32) > y: array([-0.66992434, 0.48769389, 9.18223045]) Huh, I'd say the Sun library isn't accurate enough. Sacrificing accuracy for speed, no doubt. 
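For reference, single precision cannot do much better than errors at roughly the 1e-6 level whatever LAPACK is underneath; a quick way to see the slack (illustrative only, using numpy.linalg rather than the scipy wrappers):

import numpy

A = numpy.array([[5., 2., 3.],
                 [2., 6., 1.],
                 [3., 1., 7.]])
w64 = numpy.linalg.eigvalsh(A)                        # double precision reference
w32 = numpy.linalg.eigvalsh(A.astype(numpy.float32))  # single precision
print abs(w32 - w64) / abs(w64)   # relative errors, typically a few times float32 eps (~1e-7)

which gives a feel for how much of the mismatch above is inherent to float32 and how much is the library.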
> ---------------------------------------------------------------------- > Ran 1543 tests in 12.138s > > FAILED (failures=4, errors=2) > calc-gen3-ci:/home/user1/ctcils/poladmin/rla/root $ -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From emilia12 at mail.bg Tue Mar 20 15:03:54 2007 From: emilia12 at mail.bg (emilia12 at mail.bg) Date: Tue, 20 Mar 2007 21:03:54 +0200 Subject: [SciPy-user] problem with the determinant of matrix Message-ID: <1174417434.e88e400550ce2@mail.bg> Hi list, i have a problem with the determinant of matrix. Following the example ("http://www.scipy.org/SciPy_Tutorial"): #code from numpy import matrix from scipy.linalg import inv, det, eig A=matrix([[1,1,1],[4,4,3],[7,8,5]]) # 3 lines 3 rows b = matrix([1,2,1]).transpose() # 3 lines 1 rows. print det(A) # We can check, whether the matrix is regular #/code the python crashes with "Exit code: -1073741795" versions: Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32 NumPy 1.0.1 for Python 2.4 SciPy 0.5.2 for Python 2.4 so, any ideas why this happens? e. ----------------------------- SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? ??????????. http://www.bgscena.com/ From aisaac at american.edu Tue Mar 20 15:44:25 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Mar 2007 15:44:25 -0400 Subject: [SciPy-user] problem with the determinant of matrix In-Reply-To: <1174417434.e88e400550ce2@mail.bg> References: <1174417434.e88e400550ce2@mail.bg> Message-ID: On Tue, 20 Mar 2007, emilia12 at mail.bg apparently wrote: > i have a problem with the determinant of matrix. Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import matrix >>> from numpy.linalg import det >>> A=matrix([[1,1,1],[4,4,3],[7,8,5]]) >>> det(A) 0.99999999999999956 >>> fwiw, Alan Isaac From peridot.faceted at gmail.com Tue Mar 20 16:54:20 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 20 Mar 2007 16:54:20 -0400 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> <1174379374.5075.5.camel@rodimus> Message-ID: On 20/03/07, Ryan Krauss wrote: > Sorry to contribute to this discussion out of boredom and curiosity, > but what about: > > y=where(t>0,1.0,0) Or y=(1+sign(t))/2. for conciseness and the 0.5 both... Anne From david.warde.farley at utoronto.ca Tue Mar 20 16:57:14 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 20 Mar 2007 16:57:14 -0400 Subject: [SciPy-user] I can't find unit step function. In-Reply-To: References: <296323b50703200020m2a33d3d8kc0bd0f0c8a9cac3e@mail.gmail.com> <1174376285.28167.6.camel@rodimus> <1174379374.5075.5.camel@rodimus> Message-ID: <86BC2FBA-BEAC-4571-846D-B46A90A6A6C1@utoronto.ca> On 20-Mar-07, at 4:54 PM, Anne Archibald wrote: > On 20/03/07, Ryan Krauss wrote: >> Sorry to contribute to this discussion out of boredom and curiosity, >> but what about: >> >> y=where(t>0,1.0,0) > > Or > y=(1+sign(t))/2. > for conciseness and the 0.5 both... > > Anne Bravo! 
I think we can consider this dead horse adequately flogged ;) David From ckkart at hoc.net Tue Mar 20 23:15:12 2007 From: ckkart at hoc.net (Christian) Date: Wed, 21 Mar 2007 12:15:12 +0900 Subject: [SciPy-user] odr thread safe? Message-ID: Hi, I noticed that I cannot run odr more than once at a time using python threads. Is that true? Debugging yields: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1294730336 (LWP 9312)] 0xb6fe0934 in fcn_callback (n=0xb2d3e9ec, m=0xb2d3e9e8, np=0xb2d3e9e4, nq=0xb2d3e9e0, ldn=0xb2d3e9ec, ldm=0xb2d3e9e8, ldnp=0xb2d3e9e4, beta=0x90e0350, xplusd=0xb19b4e58, ifixb=0x90e6580, ifixx=0x90e65d8, ldfix=0xb2d3e9c4, ideval=0xb701a098, f=0xb19d76e8, fjacb=0xb19d8ec0, fjacd=0xb19b2008, istop=0xb2d3e288) at ./peak_o_mat/odr/__odrpack.c:70 70 Py_INCREF(odr_global.pyBeta); Can someone advise me how to change the wrapper to be thread safe? Christian From raphael.langella at steria.com Wed Mar 21 04:59:04 2007 From: raphael.langella at steria.com (raphael langella) Date: Wed, 21 Mar 2007 09:59:04 +0100 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: <96b8d3f1.d3f196b8@steria.com> ---- Messages d?origine ---- De: "David M. Cooke" Date: mardi, mars 20, 2007 6:56 pm Objet: Re: [SciPy-user] building numpy/scipy on Solaris > On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella wrote: > > I'm trying to build numpy and scipy on Solaris 8. > > The BLAS FAQ on netlib.org suggests using optimized BLAS librairies > > provided by computer vendor, like the SUN Performance Library. This > > library is supposed to provide enhanced and optimized version of > BLAS> and LAPACK. I happen to have Forte 7 installed, so I first > tried to > > build against this library (libsunperf.a). > > I tried several versions and different compilation options, but I > always> get undefined symbols. Is this supported? As anyone ever > succeeded in > > using this library to compile scipy and numpy? > > Haven't heard about anybody else trying it. What are the undefined > symbols? One thing I can think of that I've run into before on a Alpha > was that the Compaq Math Library only had LAPACK version 2, not > version3 (which had been already been out for several years when > our group got > the Alpha...). It's supposed to support LAPACK v3.0 and BLAS1, 2 & 3 (http://developers.sun.com/sunstudio/perflib_index.html). It gives me this when I import numpy : ImportError: ld.so.1: python: fatal: relocation error: file /usr/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol __getenv_: referenced symbol not found > > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site- > packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) > > UMFPACK not built? right. I'm not sure if my users will need it, but I'll try building it anyway. > > Warning: FAILURE importing tests for 'scipy.optimize.zeros' from '...kages/scipy/optimize/zeros.pyc'> > > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site- > packages/scipy/optimize/lbfgsb.py:30: ImportError: ld.so.1: python: > fatal: relocation error: file > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/scipy/optimize/_lbfgsb.so: symbol etime_: referenced symbol not found (in ?) > > This one looks like the Sun Fortran compiler doesn't have etime. Not > serious unless you use optimize.fmin_l_bfgs_b. Well, let's say it's not critical for now... 
> > > Warning: FAILURE importing tests for '...site-packages/scipy/io/mio.pyc'> > > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site- > packages/scipy/io/tests/test_mio.py:14: ImportError: cannot import > name loadmat (in ?) > > Odd. loadmat comes from scipy.io.mio, and it's a pure python routine. > > Did you remove the build/ directory before trying again? Distutils can > get confused if it's interrupted (it doesn't do dependency tracking > thatwell). Yes I did. > > NameError: global name 'sorted' is not defined > > Fixed in svn; a 2.4ism snuck in. I'll try it. > Huh, I'd say the Sun library isn't accurate enough. Sacrificing > accuracyfor speed, no doubt. OK, but I didn't compile scipy and numpy against libsunperf, I used netlib.org versions of blas and lapack. On the other hand, I did build lapack against libsunperf (that's the default in make.inc.SUN4SOL2), so I'm not sure which blas library is used in the end. Thanks a lot for your support From cookedm at physics.mcmaster.ca Wed Mar 21 06:23:03 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 21 Mar 2007 06:23:03 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <96b8d3f1.d3f196b8@steria.com> References: <96b8d3f1.d3f196b8@steria.com> Message-ID: <20070321102303.GA32399@arbutus.physics.mcmaster.ca> On Wed, Mar 21, 2007 at 09:59:04AM +0100, raphael langella wrote: > > On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella wrote: > > > I'm trying to build numpy and scipy on Solaris 8. > > > The BLAS FAQ on netlib.org suggests using optimized BLAS librairies > > > provided by computer vendor, like the SUN Performance Library. This > > > library is supposed to provide enhanced and optimized version of > > BLAS> and LAPACK. I happen to have Forte 7 installed, so I first > > tried to > > > build against this library (libsunperf.a). > > It's supposed to support LAPACK v3.0 and BLAS1, 2 & 3 > (http://developers.sun.com/sunstudio/perflib_index.html). > It gives me this when I import numpy : > ImportError: ld.so.1: python: fatal: relocation error: file > /usr/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol > __getenv_: referenced symbol not found Odd. lapack_lite doesn't use getenv anywhere, so it must be the sunperf library (and http://docs.sun.com/source/819-3692/plug_optimizing.html shows that it does look at environment variables). I'm guessing there's some link-time option that it needs. > > > Warning: FAILURE importing tests for > '...site-packages/scipy/io/mio.pyc'> > > > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site- > > packages/scipy/io/tests/test_mio.py:14: ImportError: cannot import > > name loadmat (in ?) > > > > Odd. loadmat comes from scipy.io.mio, and it's a pure python routine. > > > > Did you remove the build/ directory before trying again? Distutils can > > get confused if it's interrupted (it doesn't do dependency tracking > > thatwell). > > Yes I did. > > > > Huh, I'd say the Sun library isn't accurate enough. Sacrificing > > accuracyfor speed, no doubt. > > OK, but I didn't compile scipy and numpy against libsunperf, I used > netlib.org versions of blas and lapack. On the other hand, I did build > lapack against libsunperf (that's the default in make.inc.SUN4SOL2), so > I'm not sure which blas library is used in the end. Hmm, then I'm going to blame the compiler :) Maybe it's doing some optimization that it shouldn't? -fast, maybe? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From gruben at bigpond.net.au Wed Mar 21 08:17:01 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Wed, 21 Mar 2007 23:17:01 +1100 Subject: [SciPy-user] animate solitons In-Reply-To: <45FFD427.3090900@aims.ac.za> References: <45FFD427.3090900@aims.ac.za> Message-ID: <4601223D.5080305@bigpond.net.au> If it is in 1D, take a look at vpython. You can create a curve and as you modify it, vpython will take care of the animation for you. Gary R. Franck Kalala Mutombo wrote: > hi, > > I have solved the sine-cordon equation(which is a partial differential > equation) and I found my solitons(the solution to that equation), it is > easy to plot it. but want I need is to animate it in a certain range. do > you know any python module which can allows me to animate my solitons? > > ps: solitons a solution of a partial differential, it is a plane wave > moving a certain speed, so I want to animate it as is moving. From ryanlists at gmail.com Wed Mar 21 08:51:58 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 21 Mar 2007 07:51:58 -0500 Subject: [SciPy-user] animate solitons In-Reply-To: <4601223D.5080305@bigpond.net.au> References: <45FFD427.3090900@aims.ac.za> <4601223D.5080305@bigpond.net.au> Message-ID: I have some success using Mayavi to animate solutions of PDE's for the mode shapes of a beam. On 3/21/07, Gary Ruben wrote: > If it is in 1D, take a look at vpython. You can create a curve and as > you modify it, vpython will take care of the animation for you. > > Gary R. > > Franck Kalala Mutombo wrote: > > hi, > > > > I have solved the sine-cordon equation(which is a partial differential > > equation) and I found my solitons(the solution to that equation), it is > > easy to plot it. but want I need is to animate it in a certain range. do > > you know any python module which can allows me to animate my solitons? > > > > ps: solitons a solution of a partial differential, it is a plane wave > > moving a certain speed, so I want to animate it as is moving. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From raphael.langella at steria.com Wed Mar 21 09:27:56 2007 From: raphael.langella at steria.com (raphael langella) Date: Wed, 21 Mar 2007 14:27:56 +0100 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: ---- Messages d?origine ---- De: "David M. Cooke" Date: mercredi, mars 21, 2007 11:23 am Objet: Re: [SciPy-user] building numpy/scipy on Solaris > On Wed, Mar 21, 2007 at 09:59:04AM +0100, raphael langella wrote: > > > On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella wrote: > > > > I'm trying to build numpy and scipy on Solaris 8. > > > > The BLAS FAQ on netlib.org suggests using optimized BLAS > librairies> > > provided by computer vendor, like the SUN > Performance Library. This > > > > library is supposed to provide enhanced and optimized version > of > > > BLAS> and LAPACK. I happen to have Forte 7 installed, so I > first > > > tried to > > > > build against this library (libsunperf.a). > > > > It's supposed to support LAPACK v3.0 and BLAS1, 2 & 3 > > (http://developers.sun.com/sunstudio/perflib_index.html). > > It gives me this when I import numpy : > > ImportError: ld.so.1: python: fatal: relocation error: file > > /usr/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol > > __getenv_: referenced symbol not found > > Odd. 
lapack_lite doesn't use getenv anywhere, so it must be the > sunperflibrary (and http://docs.sun.com/source/819- > 3692/plug_optimizing.htmlshows that it does look at environment > variables). I'm guessing there's > some link-time option that it needs. ok. I'll forget about the other errors I have and will focus on libsunperf for now. It makes sense to me to use vendor provided optimized BLAS/LAPACK libraries. I compiled with these options : python setup.py config_fc --f77flags='-xlic_lib=sunperf -xarch=v9b' install and now, I've got this error when importing numpy : ImportError: ld.so.1: python: fatal: relocation error: file /usr/local/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol dgeev_: referenced symbol not found Any idea? From bryanv at enthought.com Wed Mar 21 11:54:36 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Wed, 21 Mar 2007 10:54:36 -0500 Subject: [SciPy-user] animate solitons In-Reply-To: References: <45FFD427.3090900@aims.ac.za> <4601223D.5080305@bigpond.net.au> Message-ID: <4601553C.7030809@enthought.com> You can also do this sort of thing in Chaco if you only need 2D capabilities like line/scatter or colormap plots. Here is a simple example of a simulated updating data stream: https://svn.enthought.com/enthought/browser/trunk/src/lib/enthought/chaco2/examples/advanced/data_stream.py Ryan Krauss wrote: > I have some success using Mayavi to animate solutions of PDE's for the > mode shapes of a beam. > > On 3/21/07, Gary Ruben wrote: >> If it is in 1D, take a look at vpython. You can create a curve and as >> you modify it, vpython will take care of the animation for you. >> >> Gary R. >> >> Franck Kalala Mutombo wrote: >>> hi, >>> >>> I have solved the sine-cordon equation(which is a partial differential >>> equation) and I found my solitons(the solution to that equation), it is >>> easy to plot it. but want I need is to animate it in a certain range. do >>> you know any python module which can allows me to animate my solitons? >>> >>> ps: solitons a solution of a partial differential, it is a plane wave >>> moving a certain speed, so I want to animate it as is moving. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Wed Mar 21 12:40:00 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 21 Mar 2007 12:40:00 -0400 Subject: [SciPy-user] animate solitons In-Reply-To: <4601553C.7030809@enthought.com> References: <45FFD427.3090900@aims.ac.za> <4601223D.5080305@bigpond.net.au> <4601553C.7030809@enthought.com> Message-ID: Most of these solutions seem to be designed to allow animations to be updated on the fly as new data comes in. Undoubtedly valuable, but more sophisticated than one needs, often. Is there a tidy package that lets you take, say, a 2D array and treat one axis as x-values and the other axis as time, and produce an animation? It would presumably loop, and be user-controllable (pause, reverse, frame-forward, frame-back, etc.) 
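Not a ready-made package, but the looping part is only a few lines in the spirit of the Cookbook animation recipe mentioned earlier in the thread -- a bare-bones sketch (matplotlib assumed, no pause/reverse controls):

import numpy
import pylab

# rows of `data` are successive time steps of a 1-D profile (a moving sech^2 pulse here)
x = numpy.linspace(-10.0, 10.0, 200)
t = numpy.linspace(0.0, 5.0, 50)
data = 1.0 / numpy.cosh(x[numpy.newaxis, :] - 2.0*t[:, numpy.newaxis])**2

pylab.ion()
line, = pylab.plot(x, data[0])
pylab.ylim(0.0, 1.1)
for frame in data[1:]:
    line.set_ydata(frame)   # update the curve in place
    pylab.draw()            # redraw for each time step

The pause/reverse/frame-step controls would still have to be wired up by hand, e.g. with a GUI timer and key bindings.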
Anne From strawman at astraw.com Wed Mar 21 12:58:41 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 21 Mar 2007 09:58:41 -0700 Subject: [SciPy-user] problem with the determinant of matrix In-Reply-To: <1174417434.e88e400550ce2@mail.bg> References: <1174417434.e88e400550ce2@mail.bg> Message-ID: <46016441.5070208@astraw.com> Could this be the issue that the SciPy build on the website was built with SSE2 support and you don't have SSE2? Do you have an older processor (other than Intel Pentium 3 or AMD something-or-other)? Not using Windows much, I don't keep track of the details, but there historically has been the occasional issue along these lines, and I'm not sure it's been addressed. -Andrew emilia12 at mail.bg wrote: > Hi list, > i have a problem with the determinant of matrix. Following > the example ("http://www.scipy.org/SciPy_Tutorial"): > #code > from numpy import matrix > from scipy.linalg import inv, det, eig > > A=matrix([[1,1,1],[4,4,3],[7,8,5]]) # 3 lines 3 rows > b = matrix([1,2,1]).transpose() # 3 lines 1 rows. > > print det(A) # We can check, whether the matrix is > regular > #/code > > the python crashes with "Exit code: -1073741795" > > versions: > Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit > (Intel)] on win32 > NumPy 1.0.1 for Python 2.4 > SciPy 0.5.2 for Python 2.4 > > so, any ideas why this happens? > > e. > > > > > > ----------------------------- > > SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? ??????????. > http://www.bgscena.com/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From strawman at astraw.com Wed Mar 21 13:01:46 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 21 Mar 2007 10:01:46 -0700 Subject: [SciPy-user] problem with the determinant of matrix In-Reply-To: <46016441.5070208@astraw.com> References: <1174417434.e88e400550ce2@mail.bg> <46016441.5070208@astraw.com> Message-ID: <460164FA.8010101@astraw.com> Andrew Straw wrote: > Could this be the issue that the SciPy build on the website was built > with SSE2 support and you don't have SSE2? Do you have an older > processor (other than Intel Pentium 3 or AMD something-or-other)? Not > ^^^^^ older > using Windows much, I don't keep track of the details, but there > historically has been the occasional issue along these lines, and I'm > not sure it's been addressed. > > -Andrew > > emilia12 at mail.bg wrote: > >> Hi list, >> i have a problem with the determinant of matrix. Following >> the example ("http://www.scipy.org/SciPy_Tutorial"): >> #code >> from numpy import matrix >> from scipy.linalg import inv, det, eig >> >> A=matrix([[1,1,1],[4,4,3],[7,8,5]]) # 3 lines 3 rows >> b = matrix([1,2,1]).transpose() # 3 lines 1 rows. >> >> print det(A) # We can check, whether the matrix is >> regular >> #/code >> >> the python crashes with "Exit code: -1073741795" >> >> versions: >> Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit >> (Intel)] on win32 >> NumPy 1.0.1 for Python 2.4 >> SciPy 0.5.2 for Python 2.4 >> >> so, any ideas why this happens? >> >> e. >> >> >> >> >> >> ----------------------------- >> >> SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? ??????????. 
>> http://www.bgscena.com/ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bryanv at enthought.com Wed Mar 21 13:07:19 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Wed, 21 Mar 2007 12:07:19 -0500 Subject: [SciPy-user] animate solitons In-Reply-To: References: <45FFD427.3090900@aims.ac.za> <4601223D.5080305@bigpond.net.au> <4601553C.7030809@enthought.com> Message-ID: <46016647.5000401@enthought.com> Hi Anne, That chaco example I posted was made specifically in response to a question about realtime DAQ, so it does generate some random data "on the fly" to simulate a realtime data stream. However, it would be pretty trivial to have it, say, step through the rows of a preset 2d array each timer tick instead. As for something straight out of the box, I'm not sure.. the only thing that comes to mind is VPython, possibly. Anne Archibald wrote: > Most of these solutions seem to be designed to allow animations to be > updated on the fly as new data comes in. Undoubtedly valuable, but > more sophisticated than one needs, often. Is there a tidy package > that lets you take, say, a 2D array and treat one axis as x-values and > the other axis as time, and produce an animation? It would presumably > loop, and be user-controllable (pause, reverse, frame-forward, > frame-back, etc.) > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pwang at enthought.com Wed Mar 21 13:21:32 2007 From: pwang at enthought.com (Peter Wang) Date: Wed, 21 Mar 2007 12:21:32 -0500 Subject: [SciPy-user] animate solitons In-Reply-To: <46016647.5000401@enthought.com> References: <45FFD427.3090900@aims.ac.za> <4601223D.5080305@bigpond.net.au> <4601553C.7030809@enthought.com> <46016647.5000401@enthought.com> Message-ID: <9B72195F-2AC6-4F09-AB69-E80398163234@enthought.com> On Mar 21, 2007, at 12:07 PM, Bryan Van de Ven wrote: > That chaco example I posted was made specifically in response to a > question > about realtime DAQ, so it does generate some random data "on the > fly" to > simulate a realtime data stream. However, it would be pretty > trivial to have it, > say, step through the rows of a preset 2d array each timer tick > instead. Actually, the updating_plot demos in https://svn.enthought.com/ enthought/browser/trunk/src/lib/enthought/chaco2/examples/ updating_plot do precisely this - on each timer increment, they modify the plot's data to reflect a slightly different subsection of the total data. > Is there a tidy package > that lets you take, say, a 2D array and treat one axis as x-values and > the other axis as time, and produce an animation? You might also be interested in the scalar image function inspector: https://svn.enthought.com/enthought/browser/trunk/src/lib/enthought/ chaco2/advanced/scalar_image_function_inspector This interactively shows cross-cuts in X and Y through a 2D scalar data volume. 
You can see a screenshot at: http://code.enthought.com/chaco/gallery/ scalar_image_function_inspector.png Much of the complexity in the code for this demo is in hooking up many of the interactive elements; if all that was desired was animation in a single dimension as the user moved a slider, then the code would be much simplified. -peter From robert.kern at gmail.com Wed Mar 21 13:42:12 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Mar 2007 12:42:12 -0500 Subject: [SciPy-user] odr thread safe? In-Reply-To: References: Message-ID: <46016E74.9060309@gmail.com> Christian wrote: > Hi, > I noticed that I cannot run odr more than once at a time using python threads. > Is that true? Debugging yields: > > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread -1294730336 (LWP 9312)] > 0xb6fe0934 in fcn_callback (n=0xb2d3e9ec, m=0xb2d3e9e8, np=0xb2d3e9e4, > nq=0xb2d3e9e0, > ldn=0xb2d3e9ec, ldm=0xb2d3e9e8, ldnp=0xb2d3e9e4, beta=0x90e0350, > xplusd=0xb19b4e58, > ifixb=0x90e6580, ifixx=0x90e65d8, ldfix=0xb2d3e9c4, ideval=0xb701a098, > f=0xb19d76e8, > fjacb=0xb19d8ec0, fjacd=0xb19b2008, istop=0xb2d3e288) at > ./peak_o_mat/odr/__odrpack.c:70 > 70 Py_INCREF(odr_global.pyBeta); > > Can someone advise me how to change the wrapper to be thread safe? Hmm. I'm not sure how this is happening. Calls to extension functions should be holding the global interpreter lock unless if it is explicitly released, and I'm pretty sure that I'm not. It's possible that the GIL gets released when the Python callbacks execute; I'd have to check. In that case, we would need a lock around the call to __odrpack.odr(). But yeah, there's a difficult problem with calling a FORTRAN subroutine multiple concurrent times. We need to pass function pointer callbacks to the FORTRAN subroutine. These need to be real C functions. They can't have state, so the state needs to be global. C++ methods don't work. Consequently, the Python functions that the C callbacks will actually call are stored in odr_global. Looking at the Python C API docs, a possibility is to use PyThreadState_GetDict() to get a thread-specific dictionary. If threads aren't being used, then you only get NULL, though, so another mechanism will have to exist for the unthreaded case. I think. I could be misinterpreting the documentation. http://docs.python.org/api/threads.html Let me know what you come up with! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emilia12 at mail.bg Wed Mar 21 14:29:00 2007 From: emilia12 at mail.bg (emilia12 at mail.bg) Date: Wed, 21 Mar 2007 20:29:00 +0200 Subject: [SciPy-user] problem with the determinant of matrix In-Reply-To: <460164FA.8010101@astraw.com> References: <1174417434.e88e400550ce2@mail.bg> <46016441.5070208@astraw.com> <460164FA.8010101@astraw.com> Message-ID: <1174501740.6d53e1d4a465a@mail.bg> Hi Andrew, my computer is PIII on 1000MHz but i tried the same at my work (PIV 2core ... etc) and it didn't crash but it reported the following error: RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 and then I changed the 2dn line: from scipy.linalg import inv, det, eig with: import numpy and: print numpy.linalg.det(A) works OK so probably there is a problem with numpy and scipi ? regards e. 
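For what it's worth, the "module compiled against version 1000002 of C-API" message usually means the scipy extension modules were built against a different numpy than the one now installed, and rebuilding or reinstalling scipy against the installed numpy normally makes it go away. A quick check of the pair actually being imported:

import numpy, scipy
print "numpy", numpy.__version__
print "scipy", scipy.__version__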
????? ?? ????? ?? Andrew Straw : > Andrew Straw wrote: > > Could this be the issue that the SciPy build on the > website was built > > with SSE2 support and you don't have SSE2? Do you have > an older > > processor (other than Intel Pentium 3 or AMD > something-or-other)? Not > > > ^^^^^ older > > using Windows much, I don't keep track of the details, > but there > > historically has been the occasional issue along these > lines, and I'm > > not sure it's been addressed. > > > > -Andrew > > > > emilia12 at mail.bg wrote: > > > >> Hi list, > >> i have a problem with the determinant of matrix. > Following > >> the example ("http://www.scipy.org/SciPy_Tutorial"): > >> #code > >> from numpy import matrix > >> from scipy.linalg import inv, det, eig > >> > >> A=matrix([[1,1,1],[4,4,3],[7,8,5]]) # 3 lines 3 rows > >> b = matrix([1,2,1]).transpose() # 3 lines 1 rows. > >> > >> print det(A) # We can check, whether the matrix is > >> regular > >> #/code > >> > >> the python crashes with "Exit code: -1073741795" > >> > >> versions: > >> Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 > 32 bit > >> (Intel)] on win32 > >> NumPy 1.0.1 for Python 2.4 > >> SciPy 0.5.2 for Python 2.4 > >> > >> so, any ideas why this happens? > >> > >> e. > >> > >> > >> > >> > >> > >> ----------------------------- > >> > >> SCENA - ???????????? ????????? ???????? ?? ??????? > ??????????? ? ??????????. > >> http://www.bgscena.com/ > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > >> > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ----------------------------- SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? ??????????. http://www.bgscena.com/ From oliphant at ee.byu.edu Wed Mar 21 16:46:04 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 21 Mar 2007 13:46:04 -0700 Subject: [SciPy-user] odr thread safe? In-Reply-To: <46016E74.9060309@gmail.com> References: <46016E74.9060309@gmail.com> Message-ID: <4601998C.7010404@ee.byu.edu> Robert Kern wrote: >Hmm. I'm not sure how this is happening. Calls to extension functions should be >holding the global interpreter lock unless if it is explicitly released, and I'm >pretty sure that I'm not. It's possible that the GIL gets released when the >Python callbacks execute; I'd have to check. In that case, we would need a lock >around the call to __odrpack.odr(). > >But yeah, there's a difficult problem with calling a FORTRAN subroutine multiple >concurrent times. We need to pass function pointer callbacks to the FORTRAN >subroutine. These need to be real C functions. They can't have state, so the >state needs to be global. C++ methods don't work. Consequently, the Python >functions that the C callbacks will actually call are stored in odr_global. > >Looking at the Python C API docs, a possibility is to use >PyThreadState_GetDict() to get a thread-specific dictionary. If threads aren't >being used, then you only get NULL, though, so another mechanism will have to >exist for the unthreaded case. I think. I could be misinterpreting the >documentation. 
> > In NumPy for the ufuncs, we use PyThreadState_GetDict and then if that is NULL use PyEval_GetBuiltins so that we always get a per-thread dictionary to store information in. Something like that could be done. -Travis > http://docs.python.org/api/threads.html > >Let me know what you come up with! > > > From cookedm at physics.mcmaster.ca Wed Mar 21 19:57:55 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 21 Mar 2007 19:57:55 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: References: Message-ID: <20070321235755.GA4496@arbutus.physics.mcmaster.ca> On Wed, Mar 21, 2007 at 02:27:56PM +0100, raphael langella wrote: > ---- Messages d?origine ---- > De: "David M. Cooke" > Date: mercredi, mars 21, 2007 11:23 am > Objet: Re: [SciPy-user] building numpy/scipy on Solaris > > > On Wed, Mar 21, 2007 at 09:59:04AM +0100, raphael langella wrote: > > > > On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella wrote: > > > > > I'm trying to build numpy and scipy on Solaris 8. > > > > > The BLAS FAQ on netlib.org suggests using optimized BLAS > > librairies> > > provided by computer vendor, like the SUN > > Performance Library. This > > > > > library is supposed to provide enhanced and optimized version > > of > > > > BLAS> and LAPACK. I happen to have Forte 7 installed, so I > > first > > > > tried to > > > > > build against this library (libsunperf.a). > > > > > > It's supposed to support LAPACK v3.0 and BLAS1, 2 & 3 > > > (http://developers.sun.com/sunstudio/perflib_index.html). > > > It gives me this when I import numpy : > > > ImportError: ld.so.1: python: fatal: relocation error: file > > > /usr/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol > > > __getenv_: referenced symbol not found > > > > Odd. lapack_lite doesn't use getenv anywhere, so it must be the > > sunperflibrary (and http://docs.sun.com/source/819- > > 3692/plug_optimizing.htmlshows that it does look at environment > > variables). I'm guessing there's > > some link-time option that it needs. > > ok. I'll forget about the other errors I have and will focus on > libsunperf for now. It makes sense to me to use vendor provided > optimized BLAS/LAPACK libraries. > > I compiled with these options : > python setup.py config_fc --f77flags='-xlic_lib=sunperf -xarch=v9b' install > > and now, I've got this error when importing numpy : > ImportError: ld.so.1: python: fatal: relocation error: file > /usr/local/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: > symbol dgeev_: referenced symbol not found It looks like the C versions of the Fortran routines in sunperf don't have trailing underscores (they're not just the Fortran routine, they're C wrappers). Try CPPFLAGS='-DNO_APPEND_FORTRAN' python setup.py config_fc --f77flags='-xlic_lib=sunperf -xarch=v9b' install so that lapack_litemodule.c uses dgeev instead of dgeev_. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jim at well.com Thu Mar 22 00:25:55 2007 From: jim at well.com (jim stockford) Date: Wed, 21 Mar 2007 20:25:55 -0800 Subject: [SciPy-user] call for speakers (bayPIGgies) Message-ID: <292db2ef401c498af6222c387abd0b2f@well.com> The bayPIGgies group is (always) looking for speakers. Our email discussion reflects a lot of interest in SciPy. If any of you are interested in presenting SciPy issues, you'll have an enthusiastic audience. 
Let me know, I'll let you know what speaking entails (not a lot, at minimum). hopefully, jim at well.com From raphael.langella at steria.com Thu Mar 22 04:31:04 2007 From: raphael.langella at steria.com (raphael langella) Date: Thu, 22 Mar 2007 09:31:04 +0100 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: ---- Messages d?origine ---- De: "David M. Cooke" Date: jeudi, mars 22, 2007 0:57 am Objet: Re: [SciPy-user] building numpy/scipy on Solaris > On Wed, Mar 21, 2007 at 02:27:56PM +0100, raphael langella wrote: > > ---- Messages d?origine ---- > > De: "David M. Cooke" > > Date: mercredi, mars 21, 2007 11:23 am > > Objet: Re: [SciPy-user] building numpy/scipy on Solaris > > > > > On Wed, Mar 21, 2007 at 09:59:04AM +0100, raphael langella wrote: > > > > > On Tue, Mar 20, 2007 at 12:08:14PM +0100, raphael langella > wrote:> > > > > I'm trying to build numpy and scipy on Solaris 8. > > > > > > The BLAS FAQ on netlib.org suggests using optimized BLAS > > > librairies> > > provided by computer vendor, like the SUN > > > Performance Library. This > > > > > > library is supposed to provide enhanced and optimized > version > > > of > > > > > BLAS> and LAPACK. I happen to have Forte 7 installed, so I > > > first > > > > > tried to > > > > > > build against this library (libsunperf.a). > > > > > > > > It's supposed to support LAPACK v3.0 and BLAS1, 2 & 3 > > > > (http://developers.sun.com/sunstudio/perflib_index.html). > > > > It gives me this when I import numpy : > > > > ImportError: ld.so.1: python: fatal: relocation error: file > > > > /usr/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: > symbol> > > __getenv_: referenced symbol not found > > > > > > Odd. lapack_lite doesn't use getenv anywhere, so it must be the > > > sunperflibrary (and http://docs.sun.com/source/819- > > > 3692/plug_optimizing.htmlshows that it does look at environment > > > variables). I'm guessing there's > > > some link-time option that it needs. > > > > ok. I'll forget about the other errors I have and will focus on > > libsunperf for now. It makes sense to me to use vendor provided > > optimized BLAS/LAPACK libraries. > > > > I compiled with these options : > > python setup.py config_fc --f77flags='-xlic_lib=sunperf - > xarch=v9b' install > > > > and now, I've got this error when importing numpy : > > ImportError: ld.so.1: python: fatal: relocation error: file > > /usr/local/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: > > symbol dgeev_: referenced symbol not found > > It looks like the C versions of the Fortran routines in sunperf don't > have trailing underscores (they're not just the Fortran routine, > they'reC wrappers). Try > > CPPFLAGS='-DNO_APPEND_FORTRAN' python setup.py config_fc -- > f77flags='-xlic_lib=sunperf -xarch=v9b' install > > so that lapack_litemodule.c uses dgeev instead of dgeev_. Well, now it does uses dgeev, but it's still not found : ImportError: ld.so.1: python: fatal: relocation error: file /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: symbol dgeev: referenced symbol not found So, the function is just absent from the library? I set BLAS and LAPACK to libsunperf.a. Should I use a dynamic version instead? From cookedm at physics.mcmaster.ca Thu Mar 22 04:41:01 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 22 Mar 2007 04:41:01 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: References: Message-ID: <20070322084101.GA5508@arbutus.physics.mcmaster.ca> On Thu, Mar 22, 2007 at 09:31:04AM +0100, raphael langella wrote: > > > > > > I compiled with these options : > > > python setup.py config_fc --f77flags='-xlic_lib=sunperf - > > xarch=v9b' install > > > > > > and now, I've got this error when importing numpy : > > > ImportError: ld.so.1: python: fatal: relocation error: file > > > /usr/local/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: > > > symbol dgeev_: referenced symbol not found > > > > It looks like the C versions of the Fortran routines in sunperf don't > > have trailing underscores (they're not just the Fortran routine, > > they'reC wrappers). Try > > > > CPPFLAGS='-DNO_APPEND_FORTRAN' python setup.py config_fc -- > > f77flags='-xlic_lib=sunperf -xarch=v9b' install > > > > so that lapack_litemodule.c uses dgeev instead of dgeev_. > > Well, now it does uses dgeev, but it's still not found : > ImportError: ld.so.1: python: fatal: relocation error: file > /home/user1/ctcils/poladmin/rla/root/lib/python2.3/site-packages/numpy/linalg/lapack_lite.so: > symbol dgeev: referenced symbol not found > > So, the function is just absent from the library? > I set BLAS and LAPACK to libsunperf.a. Should I use a dynamic version > instead? Ahh, just realized. We can do this completely with the C compiler; the -xlic_lib=sunperf -xarch=v9b is only being used for Fortran. Try CFLAGS='-xlic_lib=sunperf -xarch=v9b' CPPFLAGS='-DNO_APPEND_FORTRAN' python setup.py install Since sunperf has C bindings, we don't need the Fortran compiler at all for Numpy (and I don't think it was being used in the first place). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From raphael.langella at steria.com Thu Mar 22 05:39:42 2007 From: raphael.langella at steria.com (raphael langella) Date: Thu, 22 Mar 2007 10:39:42 +0100 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: ---- Messages d?origine ---- De: "David M. Cooke" Date: jeudi, mars 22, 2007 9:41 am Objet: Re: [SciPy-user] building numpy/scipy on Solaris > Ahh, just realized. We can do this completely with the C compiler; the > -xlic_lib=sunperf -xarch=v9b is only being used for Fortran. Try > > CFLAGS='-xlic_lib=sunperf -xarch=v9b' CPPFLAGS='- > DNO_APPEND_FORTRAN' python setup.py install > > Since sunperf has C bindings, we don't need the Fortran compiler at > allfor Numpy (and I don't think it was being used in the first place). well, options for C don't have the same syntax as Fortran, so I used '-mcpu=v9 -lsunperf' and got : ImportError: ld.so.1: python: fatal: relocation error: file /Produits/sun/forte/7/SUNWspro/lib/libsunperf.so.4: symbol __f95_sign: referenced symbol not found but then, I realized I was using the default libsunperf (v8). As I'm building on sparc III, I want v9b optimizations, so I did : LD_LIBRARY_PATH=/Produits/sun/forte/7/SUNWspro/lib/v9b:$LD_LIBRARY_PATH and got : ld: fatal: file /Produits/sun/forte/7/SUNWspro/lib/v9b/libsunperf.so: wrong ELF class: ELFCLASS64 but I'm building 64 bits code, am I not? I added the -m64 option and got : /Produits/publics/sparc.SunOS.5.8/python/2.3.4/include/python2.3/pyport.h:554:2: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." Damn, it looks like my python was compiled in 32 bits. 
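Before fighting the build flags any further, it is worth confirming how the interpreter itself was built; Python can report this directly (the output shown is simply what a 32-bit ELF build would print):

    import platform

    print platform.architecture()    # e.g. ('32bit', 'ELF') for a 32-bit interpreter

If that says 32bit, a 64-bit (v9b) libsunperf cannot be loaded into its extension modules without rebuilding Python itself.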
Well, if I have to keep to 32 bits, so be it. Going back a few lines up, I have an __f95_sign symbol not found (removing -mcpu=v9 doesn't change anything). From bryanv at enthought.com Thu Mar 22 10:00:57 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Thu, 22 Mar 2007 09:00:57 -0500 Subject: [SciPy-user] call for speakers (bayPIGgies) In-Reply-To: <292db2ef401c498af6222c387abd0b2f@well.com> References: <292db2ef401c498af6222c387abd0b2f@well.com> Message-ID: <46028C19.3080106@enthought.com> Hi Jim, My name is Bryan Van de Ven and I work for Enthought. I have actually been meaning to contact BayPiggies about the possibility of speaking since I will be coming out to the Bay Area later this Summer (July/August?). I didn't imagine talking about SciPy directly, but about some combination of Traits (auto GUI creation, easy MVC framework), Chaco (2D interactive plotting), and maybe a little about mlab/mayavi (3D visualization). I'd certainly show some examples that used SciPy, though. My schedule is flexible so if there is interest a short talk on these topics I can certainly make my trip coincide with one of your meetings. Please let me know! Best Regards, Bryan Van de Ven jim stockford wrote: > > The bayPIGgies group is (always) looking for speakers. > Our email discussion reflects a lot of interest in SciPy. > If any of you are interested in presenting SciPy issues, > you'll have an enthusiastic audience. > > Let me know, I'll let you know what speaking entails (not > a lot, at minimum). > > hopefully, > jim at well.com > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nmarais at sun.ac.za Thu Mar 22 10:34:12 2007 From: nmarais at sun.ac.za (Neilen Marais) Date: Thu, 22 Mar 2007 16:34:12 +0200 Subject: [SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU) References: <45EE899D.9040304@ntc.zcu.cz> <45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz> <45F04386.6010105@ntc.zcu.cz> Message-ID: Hi Robert! On Thu, 08 Mar 2007 18:10:30 +0100, Robert Cimrman wrote: > Robert Cimrman wrote: > Well, I did it since I am going to need this, too :-) > > In [3]:scipy.linsolve.factorized? > ... > Definition: scipy.linsolve.factorized(A) > Docstring: > Return a fuction for solving a linear system, with A pre-factorized. > > Example: > solve = factorized( A ) # Makes LU decomposition. > x1 = solve( rhs1 ) # Uses the LU factors. > x2 = solve( rhs2 ) # Uses again the LU factors. > > This uses UMFPACK if available. This is a useful improvement, thanks. But why not just extend linsolve.splu to use umfpack so we can present a consistent interface? The essential difference between factorized and splu is that you get to explicity control the storage of the LU factorisation and get some additional info (i.e. the number of nonzeros), whereas factorised only gives you a solve function. The actual library used to do the sparse LU is just an implementation detail that should abstracted wherever possible, no? If nobody complains about the idea I'm willing to implement it. Thanks Neilen > > cheers, > r. 
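For anyone following the thread, a self-contained version of the factor-once / solve-many pattern from the docstring quoted above looks roughly like this (it needs a recent SVN scipy, since factorized was only just added, and the matrix values here are arbitrary):

    import numpy
    import scipy.sparse
    import scipy.linsolve

    A = scipy.sparse.csc_matrix(numpy.array([[4., 1., 0.],
                                             [1., 4., 1.],
                                             [0., 1., 4.]]))
    solve = scipy.linsolve.factorized(A)     # LU (or UMFPACK) factorization done once
    x1 = solve(numpy.array([1., 0., 0.]))    # each call reuses the stored factors
    x2 = solve(numpy.array([0., 1., 0.]))

This is exactly the use case where also exposing the factor object itself, as splu does, would let the caller query things like the number of nonzeros in the factors.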
-- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From cimrman3 at ntc.zcu.cz Thu Mar 22 10:46:07 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 22 Mar 2007 15:46:07 +0100 Subject: [SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU) In-Reply-To: References: <45EE899D.9040304@ntc.zcu.cz> <45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz> <45F04386.6010105@ntc.zcu.cz> Message-ID: <460296AF.4000405@ntc.zcu.cz> Neilen Marais wrote: >> Robert Cimrman wrote: >> Well, I did it since I am going to need this, too :-) >> >> In [3]:scipy.linsolve.factorized? >> ... >> Definition: scipy.linsolve.factorized(A) >> Docstring: >> Return a fuction for solving a linear system, with A pre-factorized. >> >> Example: >> solve = factorized( A ) # Makes LU decomposition. >> x1 = solve( rhs1 ) # Uses the LU factors. >> x2 = solve( rhs2 ) # Uses again the LU factors. >> >> This uses UMFPACK if available. > > This is a useful improvement, thanks. But why not just extend > linsolve.splu to use umfpack so we can present a consistent interface? The > essential difference between factorized and splu is that you get to > explicity control the storage of the LU factorisation and get some > additional info (i.e. the number of nonzeros), whereas factorised only > gives you a solve function. The actual library used to do the sparse LU is > just an implementation detail that should abstracted wherever possible, no? > > If nobody complains about the idea I'm willing to implement it. Sure, splu is an exception, every effort making it consistent is welcome. But note that umfpack always gives you complete LU factors, there is no ILU (drop-off) support - how would you tackle this? Maybe change its name to get_superlu_obj or something like that, use use_solver( useUmfpack = False ) at its beginning, and restore the use_solver setting at the end? r. From alexander.borghgraef.rma at gmail.com Fri Mar 23 05:38:48 2007 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Fri, 23 Mar 2007 10:38:48 +0100 Subject: [SciPy-user] Gaussian pyramid using ndimage In-Reply-To: References: <9e8c52a20703160942j3d1b4ca5ubbfab6dabe39f7c0@mail.gmail.com> Message-ID: <9e8c52a20703230238m3465261ey84b3335fa4e6fe6f@mail.gmail.com> On 3/16/07, Zachary Pincus wrote: > > I *think* that all of the filters in ndimage proceed by pre-filtering > and then doing whatever task is required, so even if there was a > downsampling filter (maybe 'zoom' would work), it wouldn't be memory- > efficient in the way you were hoping. > > Also, the spline filters used by ndimage for pre-filtering seem > somewhat broken: > http://projects.scipy.org/scipy/scipy/ticket/213 Hmm. So ndimage isn't really ready for use yet? Pity, image processing is my main use for scipy. Documentation should be extended too, it's not always clear what exactly an algorithm is doing, what the valid parameters are and what they're doing. I also noticed generic_filter requires the filter function to work on an unraveled array, and has to return a scalar, so you can't create tensor images with it (motionfields, gradient fields, in my case the 3D normal field for an elevation map...). Some missing functionality there, and definitely in need of better documentation. -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... 
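Coming back to the original pyramid question: even with the pre-filter caveat above, a usable reduce step can be assembled from what ndimage already provides, smoothing with gaussian_filter and then taking every second sample. A rough 2-D sketch (it keeps every level in memory, so it does not address the memory-efficiency point, but it avoids generic_filter entirely):

    import numpy
    from scipy import ndimage

    def gaussian_pyramid(image, levels=4, sigma=1.0):
        # Smooth, then subsample by a factor of two in each direction.
        result = [image]
        for i in range(levels - 1):
            smoothed = ndimage.gaussian_filter(result[-1], sigma)
            result.append(smoothed[::2, ::2])
        return result

    pyr = gaussian_pyramid(numpy.random.rand(256, 256))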
URL: From acorriga at gmu.edu Fri Mar 23 12:02:56 2007 From: acorriga at gmu.edu (Andrew Corrigan) Date: Fri, 23 Mar 2007 16:02:56 +0000 (UTC) Subject: [SciPy-user] spsolve not working References: <45FD40BA.1020107@gmu.edu> Message-ID: Turns out I was just using spsolve incorrectly. There was no problem with it. From anand at soe.ucsc.edu Fri Mar 23 12:23:34 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Fri, 23 Mar 2007 09:23:34 -0700 Subject: [SciPy-user] Numpy distutils failing to report ndarrayobject.h Message-ID: <4603FF06.2090206@cse.ucsc.edu> Hi all, The following in a Pyrex extension module: cdef extern from "numpy/ndarrayobject.h": void* PyArray_DATA(object obj) produces the following in C: #include "numpy/ndarrayobject.h". The numpy distutils apparently fail to compile the extension module under win2k, Python 2.4: PyMC2/PyrexLazyFunction.c:15:33: numpy/ndarrayobject.h: No such file or directory PyMC2/PyrexLazyFunction.c: In function `__pyx_f_17PyrexLazyFunction_12LazyFunction_get_array_data': but it compiles fine on my machine, running OSX 10.4 and Python 2.5. Unfortunately I can't experiment much because I don't have a Windows machine available. Can anyone help me out with this? Thanks, Anand From gnurser at googlemail.com Fri Mar 23 12:38:45 2007 From: gnurser at googlemail.com (George Nurser) Date: Fri, 23 Mar 2007 16:38:45 +0000 Subject: [SciPy-user] sparse & odr failures on latest scipy svn Message-ID: <1d1e6ea70703230938s7b774ea2p3df53123fdecccf1@mail.gmail.com> I get one error in the sparse tests and two in the odr tests for scipy v 2869 python 2.3, numpy svn 3591, acml math, 64 bit Linux on opteron. These errors were not there last time I updated (v 2324) --George Nurser. .... ..............F..F.............. ====================================================================== ERROR: check_normalize (scipy.sparse.tests.test_sparse.test_coo) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", line 812, in check_normalize assert(zip(ncol,nrow,ndata) == sorted(zip(col,row,data))) #should sort by cols, then rows NameError: global name 'sorted' is not defined ====================================================================== FAIL: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/odr/tests/test_odr.py", line 49, in test_explicit np.array([ 1.2646548050648876e+03, -5.4018409956678255e+01, File "/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.26462971e+03, -5.42545890e+01, -8.64250389e-02]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) ====================================================================== FAIL: test_multi (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/odr/tests/test_odr.py", line 190, in test_multi np.array([ 4.3799880305938963, 2.4333057577497703, 8.0028845899503978, File 
"/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.31272063, 2.44289312, 7.76215871, 0.55995622, 0.46423343]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ---------------------------------------------------------------------- Ran 1620 tests in 4.210s FAILED (failures=2, errors=1) From koara at atlas.cz Fri Mar 23 13:17:33 2007 From: koara at atlas.cz (koara at atlas.cz) Date: Fri, 23 Mar 2007 18:17:33 +0100 Subject: [SciPy-user] sparse mmio bug? Message-ID: Hello all, When writing a sparse matrix to disk with io.mmwrite(sparse_matrix), the line with typecode = a.gettypecode() fails. Changing this to typecode = a.dtype.char the same way it is used couple of line above for the dense matrix case solves the problem. I use python2.5 with scipy 0.5.2 from Feisty repositories. ------------------------------------------ www.icqsms.cz/ From oliphant at ee.byu.edu Fri Mar 23 13:25:45 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Mar 2007 10:25:45 -0700 Subject: [SciPy-user] Numpy distutils failing to report ndarrayobject.h In-Reply-To: <4603FF06.2090206@cse.ucsc.edu> References: <4603FF06.2090206@cse.ucsc.edu> Message-ID: <46040D99.6020509@ee.byu.edu> Anand Patil wrote: >Hi all, > >The following in a Pyrex extension module: > >cdef extern from "numpy/ndarrayobject.h": > void* PyArray_DATA(object obj) > >produces the following in C: > >#include "numpy/ndarrayobject.h". > >The numpy distutils apparently fail to compile the extension module >under win2k, Python 2.4: > >PyMC2/PyrexLazyFunction.c:15:33: numpy/ndarrayobject.h: No such file or directory >PyMC2/PyrexLazyFunction.c: In function >`__pyx_f_17PyrexLazyFunction_12LazyFunction_get_array_data': > > >but it compiles fine on my machine, running OSX 10.4 and Python 2.5. >Unfortunately I can't experiment much because I don't have a Windows >machine available. Can anyone help me out with this? > > The include directory is not in the list of directories to search for the header files and so it is not being found. I'm not sure what would cause this. It's possible that the windows system doesn't have NumPy installed (or else the headers are not installed). Or, it's possible that numpy.distutils is not adding the NumPy directory to the list of directories to search for header files. I'm not sure why it's not adding the location of the NumPy headers to the compile line. -Travis >Thanks, >Anand >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From openopt at ukr.net Fri Mar 23 14:39:16 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 23 Mar 2007 20:39:16 +0200 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <20070319165534.GG31245@encolpuis> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> Message-ID: <46041ED4.9080307@ukr.net> Hi all, is currently any way to try using basearray? 
first of all I'm very interested in operators (matmult, dotmult etc) - will they have MATLAB/Octave/omatrix/etc -like style dot is present => dotwise operation dot is absent => matrix operation or other? And what will creation way look like? I mean something like x = [1 2 3; 4 5 6] I spent some hours using web search and scipy.org but found nothing. WBR, D. From robert.kern at gmail.com Fri Mar 23 15:08:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 14:08:18 -0500 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <46041ED4.9080307@ukr.net> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> Message-ID: <460425A2.8040703@gmail.com> dmitrey wrote: > Hi all, > is currently any way to try using basearray? It does not exist. It was a proposal to add a minimal array object like that from numpy to the Python core. That proposal is dead now. If you want an array package for Python that is currently available, you want numpy. http://numpy.scipy.org > first of all I'm very interested in operators (matmult, dotmult etc) - > will they have MATLAB/Octave/omatrix/etc -like style > dot is present => dotwise operation > dot is absent => matrix operation > or other? No, we do not have the ability to change Python's syntax to add operators. For the array object, operators all act element-wise. numpy also provides a matrix object based on the array object which implements the relevant operators as matrix operations (e.g. * is a matrix multiplication rather than element-wise multiplication). > And what will creation way look like? I mean something like > x = [1 2 3; 4 5 6] x = numpy.array([[1, 2, 3], [4, 5, 6]]) Again, we don't have the ability to change Python's syntax to support other such syntaxes. If you want some documentation in terms that are familiar to Matlab and Octave users, you should read this page: http://www.scipy.org/NumPy_for_Matlab_Users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From perry at stsci.edu Fri Mar 23 15:33:56 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 23 Mar 2007 15:33:56 -0400 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <460425A2.8040703@gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> Message-ID: <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> On Mar 23, 2007, at 3:08 PM, Robert Kern wrote: > dmitrey wrote: > >> first of all I'm very interested in operators (matmult, dotmult >> etc) - >> will they have MATLAB/Octave/omatrix/etc -like style >> dot is present => dotwise operation >> dot is absent => matrix operation >> or other? > > No, we do not have the ability to change Python's syntax to add > operators. For > the array object, operators all act element-wise. numpy also > provides a matrix > object based on the array object which implements the relevant > operators as > matrix operations (e.g. * is a matrix multiplication rather than > element-wise > multiplication). 
While strictly true, there was a cool hack a while back that had nearly the same effect. I forget if it was looked into and found wanting for this purpose. Perhaps someone remembers if there was a reason this couldn't be used (or wouldn't be considered symbolic enough). For example, one could define an effective operator |mul|" and use it like: mat1 |mul| mat2 Two drawbacks were that it prevents the use of the text of the operator (e.g., mul in this case) for other purposes, for example for a variable name, and there is no way to change the precedence of the bracketing operator ('|' in this case). Another is that I guess you would have to explicitly import the operator names into your namespace (instead of having to refer to |numpy.mat|) http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122 Perry From openopt at ukr.net Fri Mar 23 15:21:29 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 23 Mar 2007 21:21:29 +0200 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <460425A2.8040703@gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> Message-ID: <460428B9.10309@ukr.net> Thank you Robert, I had read its before. As I understood from some links like http://wiki.python.org/moin/CodingProjectIdeas/PythonCore , basearray was intended to be added in Python 2.6. So, you know for sure that it will not happen? And what about other versions - 2.7 etc? WBR, D. Robert Kern wrote: > dmitrey wrote: > >> Hi all, >> is currently any way to try using basearray? >> > > It does not exist. It was a proposal to add a minimal array object like that > from numpy to the Python core. That proposal is dead now. If you want an array > package for Python that is currently available, you want numpy. > > http://numpy.scipy.org > > >> first of all I'm very interested in operators (matmult, dotmult etc) - >> will they have MATLAB/Octave/omatrix/etc -like style >> dot is present => dotwise operation >> dot is absent => matrix operation >> or other? >> > > No, we do not have the ability to change Python's syntax to add operators. For > the array object, operators all act element-wise. numpy also provides a matrix > object based on the array object which implements the relevant operators as > matrix operations (e.g. * is a matrix multiplication rather than element-wise > multiplication). > > >> And what will creation way look like? I mean something like >> x = [1 2 3; 4 5 6] >> > > x = numpy.array([[1, 2, 3], [4, 5, 6]]) > > Again, we don't have the ability to change Python's syntax to support other such > syntaxes. > > If you want some documentation in terms that are familiar to Matlab and Octave > users, you should read this page: > > http://www.scipy.org/NumPy_for_Matlab_Users > > From robert.kern at gmail.com Fri Mar 23 15:47:21 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 14:47:21 -0500 Subject: [SciPy-user] can basearray using somehow be tried already? 
In-Reply-To: <460428B9.10309@ukr.net> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> <460428B9.10309@ukr.net> Message-ID: <46042EC9.7040405@gmail.com> dmitrey wrote: > Thank you Robert, I had read its before. As I understood from > some links like > http://wiki.python.org/moin/CodingProjectIdeas/PythonCore , > basearray was intended to be added in Python 2.6. Ah. That project idea seems to have been kept over from last year where this was actually a project that someone attempted. You linked to the student's application previously. See my response to that post for why it's not quite the project you think it is, why I think it is dead, and what is replacing it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 23 15:48:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 14:48:23 -0500 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> Message-ID: <46042F07.5020701@gmail.com> Perry Greenfield wrote: > On Mar 23, 2007, at 3:08 PM, Robert Kern wrote: > >> dmitrey wrote: >> >>> first of all I'm very interested in operators (matmult, dotmult >>> etc) - >>> will they have MATLAB/Octave/omatrix/etc -like style >>> dot is present => dotwise operation >>> dot is absent => matrix operation >>> or other? >> No, we do not have the ability to change Python's syntax to add >> operators. For >> the array object, operators all act element-wise. numpy also >> provides a matrix >> object based on the array object which implements the relevant >> operators as >> matrix operations (e.g. * is a matrix multiplication rather than >> element-wise >> multiplication). > > While strictly true, there was a cool hack a while back that had > nearly the same effect. I forget if it was looked into and found > wanting for this purpose. Perhaps someone remembers if there was a > reason this couldn't be used (or wouldn't be considered symbolic > enough). What do you mean by "used"? There's no reason an individual couldn't use it, no. It's entirely decoupled from anything else; i.e. no one else has to modify anything in order to support it. Personally, I think the hack is pretty cool, but its verbosity and precedence problems prevent me from actually using it. Unfortunately, I think it simply doesn't solve the problem and creates more magic in the process. I wouldn't want to see such pseudo-operators added to numpy, for example. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From perry at stsci.edu Fri Mar 23 15:54:36 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 23 Mar 2007 15:54:36 -0400 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <46042F07.5020701@gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> <46042F07.5020701@gmail.com> Message-ID: <7E1D131F-353B-4F16-9DEE-F194192E9F35@stsci.edu> On Mar 23, 2007, at 3:48 PM, Robert Kern wrote: > Perry Greenfield wrote: >> >> While strictly true, there was a cool hack a while back that had >> nearly the same effect. I forget if it was looked into and found >> wanting for this purpose. Perhaps someone remembers if there was a >> reason this couldn't be used (or wouldn't be considered symbolic >> enough). > > What do you mean by "used"? There's no reason an individual > couldn't use it, no. > It's entirely decoupled from anything else; i.e. no one else has to > modify > anything in order to support it. Personally, I think the hack is > pretty cool, > but its verbosity and precedence problems prevent me from actually > using it. > Unfortunately, I think it simply doesn't solve the problem and > creates more > magic in the process. I wouldn't want to see such pseudo-operators > added to > numpy, for example. Yes, in the sense of being a standard part of numpy (or matrix add- ons, etc.). Since we aren't heavy matrix users (yet anyway) it doesn't matter that much to us. This solution is not as good as being able to define new operators in Python itself. On the other hand, perhaps it is a better solution than having '*' have different meanings for different flavors of arrays. That's magic of a different sort, and just as prone to causing problems. So I think it is arguable which is worse. I agree it brings it's own set of problems. Perry From openopt at ukr.net Fri Mar 23 16:05:57 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 23 Mar 2007 22:05:57 +0200 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <46042F07.5020701@gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> <46042F07.5020701@gmail.com> Message-ID: <46043325.3010000@ukr.net> Ok, but isn't it possible to add derive class to numpy, based on ndarray, with MATLAB-like dotwise & matrix operations with any of ndarray & matrix instances? I.e. x*y (x^y, x/y etc) is matrix & x .* y (x.^y, x./y etc) is dotwise, if any of x & y are something like mat_array? Currently I use it in my code things like def _matmult(self, x, y): return asarray(x) ** asarray(y) def _dotmult(self, x, y): return asarray(x) * asarray(y) >See my response to that post for why it's not quite the project you think it is, why I think it is dead, and what is replacing it. Can't you give exact URL, please? I'm not familiar enough with mailing lists interface & browsing so I can't find it by myself. WBR, D. 
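For completeness, the bracketing-operator hack Perry referred to (the ActiveState recipe linked above) boils down to something like the following. It is shown here with nested lists rather than arrays, so that ndarray's own handling of | does not get in the way:

    import numpy

    class Infix:
        # a |op| b is evaluated as (a | op) | b
        def __init__(self, func):
            self.func = func
        def __ror__(self, left):
            return Infix(lambda right, left=left: self.func(left, right))
        def __or__(self, right):
            return self.func(right)

    mul = Infix(numpy.dot)
    print [[1, 2], [3, 4]] |mul| [[5], [6]]    # same as numpy.dot([[1, 2], [3, 4]], [[5], [6]])

As Perry noted, the fixed precedence of | and the need to import each operator name are the price you pay.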
Robert Kern wrote: > Perry Greenfield wrote: > >> On Mar 23, 2007, at 3:08 PM, Robert Kern wrote: >> >> >>> dmitrey wrote: >>> >>> >>>> first of all I'm very interested in operators (matmult, dotmult >>>> etc) - >>>> will they have MATLAB/Octave/omatrix/etc -like style >>>> dot is present => dotwise operation >>>> dot is absent => matrix operation >>>> or other? >>>> >>> No, we do not have the ability to change Python's syntax to add >>> operators. For >>> the array object, operators all act element-wise. numpy also >>> provides a matrix >>> object based on the array object which implements the relevant >>> operators as >>> matrix operations (e.g. * is a matrix multiplication rather than >>> element-wise >>> multiplication). >>> >> While strictly true, there was a cool hack a while back that had >> nearly the same effect. I forget if it was looked into and found >> wanting for this purpose. Perhaps someone remembers if there was a >> reason this couldn't be used (or wouldn't be considered symbolic >> enough). >> > > What do you mean by "used"? There's no reason an individual couldn't use it, no. > It's entirely decoupled from anything else; i.e. no one else has to modify > anything in order to support it. Personally, I think the hack is pretty cool, > but its verbosity and precedence problems prevent me from actually using it. > Unfortunately, I think it simply doesn't solve the problem and creates more > magic in the process. I wouldn't want to see such pseudo-operators added to > numpy, for example. > > From robert.kern at gmail.com Fri Mar 23 16:20:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 15:20:16 -0500 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <46043325.3010000@ukr.net> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <45FE6806.4040003@ias.u-psud.fr> <20070319104238.GK8808@aims.ac.za> <45FE8EBE.1040002@enthought.com> <20070319165534.GG31245@encolpuis> <46041ED4.9080307@ukr.net> <460425A2.8040703@gmail.com> <66EE74B6-330B-4603-970A-104824585FE1@stsci.edu> <46042F07.5020701@gmail.com> <46043325.3010000@ukr.net> Message-ID: <46043680.8070805@gmail.com> dmitrey wrote: > Ok, but isn't it possible to add derive class to numpy, based on > ndarray, with MATLAB-like dotwise & matrix operations with any of > ndarray & matrix instances? I.e. x*y (x^y, x/y etc) is matrix & x .* y > (x.^y, x./y etc) is dotwise, if any of x & y are something like mat_array? You can't add new operators. You can only redefine the behavior of the existing operators, and that's exactly what numpy.matrix does. > Currently I use it in my code things like > def _matmult(self, x, y): > return asarray(x) ** asarray(y) > > def _dotmult(self, x, y): > return asarray(x) * asarray(y) > >> See my response to that post for why it's not quite the > project you think it is, why I think it is dead, and what is replacing it. > > Can't you give exact URL, please? I'm not familiar enough with mailing > lists interface & browsing so I can't find it by myself. http://projects.scipy.org/pipermail/numpy-discussion/2007-March/026618.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
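As background for the question above: what numpy offers today is element-wise operators on the array type plus a matrix subclass that reinterprets the same operators, for example

    import numpy

    a = numpy.array([[1., 2.], [3., 4.]])
    b = numpy.array([[5., 6.], [7., 8.]])

    print a * b                   # element-wise product
    print numpy.dot(a, b)         # matrix product

    A = numpy.matrix(a)
    B = numpy.matrix(b)
    print A * B                   # matrix product
    print numpy.multiply(A, B)    # element-wise product

so a subclass can choose either convention for *, but it cannot introduce a new .* token into the language.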
-- Umberto Eco From lorenzo.isella at gmail.com Fri Mar 23 21:46:57 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sat, 24 Mar 2007 02:46:57 +0100 Subject: [SciPy-user] Post-Processing 3D Array Message-ID: <46048311.30308@gmail.com> Dear All, I am currently running a code to solve Navier-Stokes equations in a cylinder. The equations are solved in cylindrical coordinates (theta,r,z). The output is fundamentally made up of several 3D arrays containing e.g. the axial component of the velocity in space v_z[i,j,k] and so on. I am using Python to post-process the data, but there are a few problems I do not know how to tackle. (1) It would be very nice to be able to cut several cross-sections of the tube along which I'd like to plot v_z. I can easily obtain the 2D array v_z_cross[i,j], function of theta and r only, but then I am not sure about how to plot it in 2D. Also, the geometry of the resulting plot has to look like a circle and absolutely not a square. (2)Would it also be possible to make a full 3D plot of v_z[i,j,k]? As in (1), the geometry of the resulting plot has to look like a cylinder and not a cube or a square duct. Kind Regards Lorenzo From mwojc at p.lodz.pl Sat Mar 24 08:00:42 2007 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Sat, 24 Mar 2007 13:00:42 +0100 Subject: [SciPy-user] ffnet 0.6 released In-Reply-To: References: Message-ID: <200703241300.42900.mwojc@p.lodz.pl> Hallo scipy users! I'd like to announce new release of ffnet (feed-forward neural network for python). Project depends heavily on scipy/numpy, it's written partially in fortran and wrapped with f2py. Below i'm pasting original announce message posted to comp.lang.python.announce: ffnet version 0.6 is now released. Source packages, Gentoo ebuilds and Windows binaries are now available for download at: ? ? http://ffnet.sourceforge.net The last public release was 0.5. If you are unfamiliar with this package, see the end of this message for a description. NEW FEATURES ? ? - trained network can be now exported to fortran source ? ? ? code and compiled ? ? - added new architecture generator (imlgraph) ? ? - added rprop training algorithm ? ? - added draft network plotting facility (based on networkx ? ? ? and matplotlib) CHANGES & BUG FIXES ? ? - fixed bug preventing ffnet form working with networkx-0.33 ? ? - training data can be now numpy arrays ? ? - ffnet became a package now, but API should be compatibile ? ? ? with previous version DOCUMENTATION ? ? - docstrings of all objects have been improved ? ? - docs (automatically generated with epydoc) are avilable ? ? ? online and have been included to source distribution What is ffnet? -------------- ffnet is a fast and easy-to-use feed-forward neural network training solution for python. Unique features --------------- 1. Any network connectivity without cycles is allowed. 2. Training can be performed with use of several optimization ? ?schemes including: standard backpropagation with momentum, rprop, ? ?conjugate gradient, bfgs, tnc, genetic alorithm based optimization. 3. There is access to exact partial derivatives of network outputs ? ?vs. its inputs. 4. Automatic normalization of data. Basic assumptions and limitations: ---------------------------------- 1. Network has feed-forward architecture. 2. Input units have identity activation function, ? ?all other units have sigmoid activation function. 3. Provided data are automatically normalized, both input and output, ? ?with a linear mapping to the range (0.15, 0.85). ? 
?Each input and output is treated separately (i.e. linear map is ? ?unique for each input and output). 4. Function minimized during training is a sum of squared errors ? ?of each output for each training pattern. ? ? Performance ----------- Excellent computational performance is achieved implementing core functions in fortran 77 and wrapping them with f2py. ffnet outstands in performance pure python training packages and is competitive to 'compiled language' software. Moreover, a trained network can be exported to fortran sources, compiled and called from many programming languages. Usage ----- Basic usage of the package is outlined below: ? ? from ffnet import ffnet, mlgraph, savenet, loadnet, exportnet ? ? conec = mlgraph( (2,2,1) ) ? ? net = ffnet(conec) ? ? input = [ [0.,0.], [0.,1.], [1.,0.], [1.,1.] ] ? ? target ?= [ [1.], [0.], [0.], [1.] ] ? ? net.train_tnc(input, target, maxfun = 1000) ? ? net.test(input, target, iprint = 2) ? ? savenet(net, "xor.net") ? ? exportnet(net, "xor.f") ? ? net = loadnet("xor.net") ? ? answer = net( [ 0., 0. ] ) ? ? partial_derivatives = net.derivative( [ 0., 0. ] ) Usage examples with full description can be found in examples directory of the source distribution or browsed at http://ffnet.sourceforge.net. From karol.langner at kn.pl Sat Mar 24 08:39:40 2007 From: karol.langner at kn.pl (Karol Langner) Date: Sat, 24 Mar 2007 13:39:40 +0100 Subject: [SciPy-user] can basearray using somehow be tried already? In-Reply-To: <46042EC9.7040405@gmail.com> References: <5f56302b0703190328s33362261n37871db9cdeeb2a7@mail.gmail.com> <460428B9.10309@ukr.net> <46042EC9.7040405@gmail.com> Message-ID: <200703241339.40222.karol.langner@kn.pl> On Friday 23 of March 2007 20:47, Robert Kern wrote: > dmitrey wrote: > > Thank you Robert, I had read its before. As I understood from > > some links like > > http://wiki.python.org/moin/CodingProjectIdeas/PythonCore , > > basearray was intended to be added in Python 2.6. > > Ah. That project idea seems to have been kept over from last year where > this was actually a project that someone attempted. You linked to the > student's application previously. See my response to that post for why it's > not quite the project you think it is, why I think it is dead, and what is > replacing it. I am the one that attempted that project, without success. I added a short annotation to the SciPy wiki project page from last year. Karol -- written by Karol Langner Sat Mar 24 13:38:08 CET 2007 From nwagner at iam.uni-stuttgart.de Sat Mar 24 15:13:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 24 Mar 2007 20:13:20 +0100 Subject: [SciPy-user] sparse & odr failures on latest scipy svn In-Reply-To: <1d1e6ea70703230938s7b774ea2p3df53123fdecccf1@mail.gmail.com> References: <1d1e6ea70703230938s7b774ea2p3df53123fdecccf1@mail.gmail.com> Message-ID: On Fri, 23 Mar 2007 16:38:45 +0000 "George Nurser" wrote: > I get one error in the sparse tests and two in the odr >tests for scipy v 2869 > > python 2.3, numpy svn 3591, acml math, 64 bit Linux on >opteron. > These errors were not there last time I updated (v 2324) > > --George Nurser. > > > .... > > ..............F..F.............. 
> ====================================================================== > ERROR: check_normalize >(scipy.sparse.tests.test_sparse.test_coo) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File >"/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/sparse/tests/test_sparse.py", > line 812, in check_normalize > assert(zip(ncol,nrow,ndata) == >sorted(zip(col,row,data))) #should > sort by cols, then rows > NameError: global name 'sorted' is not defined > > ====================================================================== >FAIL: test_explicit (scipy.tests.test_odr.test_odr) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File >"/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/odr/tests/test_odr.py", > line 49, in test_explicit > np.array([ 1.2646548050648876e+03, > -5.4018409956678255e+01, > File >"/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", > line 230, in assert_array_almost_equal > header='Arrays are not almost equal') > File >"/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", > line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 100.0%) > x: array([ 1.26462971e+03, -5.42545890e+01, > -8.64250389e-02]) > y: array([ 1.26465481e+03, -5.40184100e+01, > -8.78497122e-02]) > > ====================================================================== >FAIL: test_multi (scipy.tests.test_odr.test_odr) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File >"/noc/users/agn/ext/Linux/lib64/python2.3/site-packages/scipy/odr/tests/test_odr.py", > line 190, in test_multi > np.array([ 4.3799880305938963, 2.4333057577497703, > 8.0028845899503978, > File >"/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", > line 230, in assert_array_almost_equal > header='Arrays are not almost equal') > File >"/noc/users/agn/ext/Linux/lib64/python/numpy/testing/utils.py", > line 215, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 100.0%) > x: array([ 4.31272063, 2.44289312, 7.76215871, > 0.55995622, 0.46423343]) > y: array([ 4.37998803, 2.43330576, 8.00288459, > 0.51011472, 0.51739023]) > > ---------------------------------------------------------------------- > Ran 1620 tests in 4.210s > >FAILED (failures=2, errors=1) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user See http://projects.scipy.org/scipy/scipy/ticket/357 Nils From bryan at cole.uklinux.net Sat Mar 24 16:49:04 2007 From: bryan at cole.uklinux.net (Bryan Cole) Date: Sat, 24 Mar 2007 20:49:04 +0000 Subject: [SciPy-user] Post-Processing 3D Array In-Reply-To: <46048311.30308@gmail.com> References: <46048311.30308@gmail.com> Message-ID: <1174769344.6343.31.camel@pc1.cole.uklinux.net> > I am currently running a code to solve Navier-Stokes equations in a > cylinder. The equations are solved in cylindrical coordinates > (theta,r,z). The output is fundamentally made up of several 3D arrays > containing e.g. the axial component of the velocity in space v_z[i,j,k] > and so on. > I am using Python to post-process the data, but there are a few problems > I do not know how to tackle. 
> (1) It would be very nice to be able to cut several cross-sections of > the tube along which I'd like to plot v_z. I can easily obtain the 2D > array v_z_cross[i,j], function of theta and r only, but then I am not > sure about how to plot it in 2D. Also, the geometry of the resulting > plot has to look like a circle and absolutely not a square. > (2)Would it also be possible to make a full 3D plot of v_z[i,j,k]? As in > (1), the geometry of the resulting plot has to look like a cylinder and > not a cube or a square duct. Sounds like a job for VTK (www.vtk.org). In VTK-speak, your dataset is best represented as a "structured grid". This means your data points can be indexed as i,j,k (i.e. a grid), but it's not a Cartesian grid, so the 3D location of each data point must be specified directly. At each data point, you can have an arbitrary number of scalar, vector or tensor attributes. I would say the easiest way to start is to write out your data as a VTK format file (this can be ASCII or binary) with format description given in http://www.vtk.org/pdf/file-formats.pdf , then load this up in either MayaVi (http://mayavi.sourceforge.net/) or Paraview (http://www.paraview.org/HTML/Index.html). There is a python module for writing VTK files called pyvtk (http://cens.ioc.ee/projects/pyvtk/ ) which may simplify the task further. HTH BC > > Kind Regards > > Lorenzo From gael.varoquaux at normalesup.org Sun Mar 25 08:33:31 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 25 Mar 2007 14:33:31 +0200 Subject: [SciPy-user] Post-Processing 3D Array In-Reply-To: <1174769344.6343.31.camel@pc1.cole.uklinux.net> References: <46048311.30308@gmail.com> <1174769344.6343.31.camel@pc1.cole.uklinux.net> Message-ID: <20070325123331.GA12920@clipper.ens.fr> On Sat, Mar 24, 2007 at 08:49:04PM +0000, Bryan Cole wrote: > > (1) It would be very nice to be able to cut several cross-sections of > > the tube along which I'd like to plot v_z. I can easily obtain the 2D > > array v_z_cross[i,j], function of theta and r only, but then I am not > > sure about how to plot it in 2D. Also, the geometry of the resulting > > plot has to look like a circle and absolutely not a square. > > (2)Would it also be possible to make a full 3D plot of v_z[i,j,k]? As in > > (1), the geometry of the resulting plot has to look like a cylinder and > > not a cube or a square duct. > Sounds like a job for VTK (www.vtk.org). In VTK-speak, your dataset is > best represented as a "structured grid". This means your data points can > be indexed as i,j,k (i.e. a grid), but it's not a Cartesian grid, so the > 3D location of each data point must be specified directly. At each data > point, you can have an arbitrary number of scalar, vector or tensor > attributes. > I would say the easiest way to start is to write out your data as a VTK > format file (this can be ASCII or binary) with format description given > in http://www.vtk.org/pdf/file-formats.pdf , then load this up in either > MayaVi (http://mayavi.sourceforge.net/) or Paraview > (http://www.paraview.org/HTML/Index.html). > There is a python module for writing VTK files called pyvtk > (http://cens.ioc.ee/projects/pyvtk/ ) which may simplify the task > further. Indeed I would say that vtk is very suited for this task. Tvtk and mayavi2 are python wrapper for vtk that are very pythonic and nice for interactiv work. I am currently working on a module to drive mayavi2 from "ipython -wthread" for scripting and interactive work. 
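In the meantime, Bryan's structured-grid route can be tried with pyvtk alone. An untested sketch for the (theta, r, z) layout described above; the resolutions and values are placeholders, and the point ordering should be checked against the VTK file-format notes:

    import numpy
    import pyvtk

    ntheta, nr, nz = 32, 16, 50                    # placeholder resolution
    theta = numpy.linspace(0.0, 2 * numpy.pi, ntheta)
    r = numpy.linspace(0.0, 1.0, nr)
    z = numpy.linspace(0.0, 5.0, nz)
    v_z = numpy.zeros((ntheta, nr, nz))            # would come from the solver

    # A structured grid stores the (x, y, z) position of every (i, j, k) node,
    # so the circular cross-section is kept in the plot instead of a square.
    points = []
    scalars = []
    for k in range(nz):
        for j in range(nr):
            for i in range(ntheta):
                points.append((r[j] * numpy.cos(theta[i]),
                               r[j] * numpy.sin(theta[i]),
                               z[k]))
                scalars.append(v_z[i, j, k])

    grid = pyvtk.StructuredGrid((ntheta, nr, nz), points)
    data = pyvtk.VtkData(grid, pyvtk.PointData(pyvtk.Scalars(scalars, name='v_z')))
    data.tofile('cylinder.vtk')

The resulting cylinder.vtk file can then be opened directly in MayaVi or Paraview for slicing, contouring and so on.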
It still needs some work before I can even submit it for inclusion to mayavi2 but I am sending it along just in case in can be of some use. It can be of much help as it deals with the building of the vtk data objects from numpy arrays for most cases. Tvtk and mayavi2 are part of the enthought tools suite, if you want to use the attached module you will have to use the svn: https://svn.enthought.com/enthought/wiki/GrabbingAndBuilding For more info on mayavi2 and tvtk see the scipy wiki at http://www.scipy.org (I cannot provide a direct link to the proper page, as the wiki is down right now). HTH Ga?l From fperez.net at gmail.com Sun Mar 25 13:53:08 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 25 Mar 2007 11:53:08 -0600 Subject: [SciPy-user] Post-Processing 3D Array In-Reply-To: <20070325123331.GA12920@clipper.ens.fr> References: <46048311.30308@gmail.com> <1174769344.6343.31.camel@pc1.cole.uklinux.net> <20070325123331.GA12920@clipper.ens.fr> Message-ID: On 3/25/07, Gael Varoquaux wrote: [...] > "ipython -wthread" for scripting and interactive work. It still needs > some work before I can even submit it for inclusion to mayavi2 but I am > sending it along just in case in can be of some use. Did you forget to click 'attach' ? Cheers, f ps - I'm interested :) From zunzun at zunzun.com Sun Mar 25 14:33:00 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 25 Mar 2007 14:33:00 -0400 Subject: [SciPy-user] www.scipy.org appears to be down Message-ID: <20070325183259.GA1028@zunzun.com> Quick note: www.scipy.org appears to be down. James From gael.varoquaux at normalesup.org Sun Mar 25 18:23:59 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 26 Mar 2007 00:23:59 +0200 Subject: [SciPy-user] Post-Processing 3D Array In-Reply-To: References: <46048311.30308@gmail.com> <1174769344.6343.31.camel@pc1.cole.uklinux.net> <20070325123331.GA12920@clipper.ens.fr> Message-ID: <20070325222356.GA153@clipper.ens.fr> On Sun, Mar 25, 2007 at 11:53:08AM -0600, Fernando Perez wrote: > On 3/25/07, Gael Varoquaux wrote: > [...] > > "ipython -wthread" for scripting and interactive work. It still needs > > some work before I can even submit it for inclusion to mayavi2 but I am > > sending it along just in case in can be of some use. > Did you forget to click 'attach' ? Why do I always do this !! The good thing about this is that I spent three hours in a train hacking on this module this afternoon, so you get more features and less code ! > ps - I'm interested :) I am interested in comments, eventhought this is really work in progress. Ga?l -------------- next part -------------- """A simple wrapper around tvtk.tools.mlab suitable for MayaVi2! This is meant to be used from the embedded Python interpreter in MayaVi2 or from IPython with the "-wthread" switch. There are several test functions at the end of this file that are illustrative to look at. """ # Author: Prabhu Ramachandran # Copyright (c) 2007, Enthought, Inc. # License: BSD Style. #TODO: * Add optionnal scalars to plot3d # * Make streamline display colors by default # * Ask Prabhu what the difference is between surf and mesh. # Propose mesh fall removal. # * Find out if tri_mesh is still needed. If not kill it. # * Propose surf_regular_c for removal # Standard library imports. import scipy # Enthought library imports. 
from enthought.envisage import get_application from enthought.tvtk.api import tvtk from enthought.tvtk.tools import mlab from enthought.traits.api import HasTraits, Instance from enthought.traits.ui.api import View, Item, Group # MayaVi related imports. from enthought.mayavi.services import IMAYAVI from enthought.mayavi.sources.vtk_data_source import VTKDataSource from enthought.mayavi.filters.filter_base import FilterBase from enthought.mayavi.modules.surface import Surface from enthought.mayavi.modules.vectors import Vectors from enthought.mayavi.modules.iso_surface import IsoSurface from enthought.mayavi.modules.streamline import Streamline from enthought.mayavi.modules.glyph import Glyph from enthought.mayavi.app import Mayavi from enthought.mayavi.core.source import Source from enthought.mayavi.core.module import Module from enthought.mayavi.core.module_manager import ModuleManager from enthought.mayavi.sources.array_source import ArraySource ###################################################################### # Application and mayavi instances. application = get_application() mayavi = None if application is not None: mayavi = application.get_service(IMAYAVI) ###################################################################### # `ImageActor` class # This should be added as a new MayaVi module. It is here for testing # and further improvements. class ImageActor(Module): # An image actor. actor = Instance(tvtk.ImageActor, allow_none=False) view = View(Group(Item(name='actor', style='custom', resizable=True), show_labels=False), width=500, height=500, resizable=True) def setup_pipeline(self): self.actor = tvtk.ImageActor() def update_pipeline(self): """Override this method so that it *updates* the tvtk pipeline when data upstream is known to have changed. """ mm = self.module_manager if mm is None: return src = mm.source self.actor.input = src.outputs[0] self.pipeline_changed = True def update_data(self): """Override this method so that it flushes the vtk pipeline if that is necessary. """ # Just set data_changed, the component should do the rest. self.data_changed = True def _actor_changed(self, old, new): if old is not None: self.actors.remove(old) self.actors.append(new) ###################################################################### # Utility functions. def _make_glyph_data(points, vectors=None, scalars=None): """Makes the data for glyphs using mlab. """ g = mlab.Glyphs(points, vectors, scalars) return g.poly_data def _make_default_figure(): """Checks to see if a valid mayavi instance is running. If not creates a new one. """ global mayavi if mayavi is None or application.stopped is not None: mayavi = figure() return mayavi def _add_data(tvtk_data, name=''): """Add a TVTK data object `tvtk_data` to the mayavi pipleine. Give the object a name of `name`. """ if isinstance(tvtk_data, tvtk.Object): d = VTKDataSource() d.data = tvtk_data elif isinstance(tvtk_data, Source): d = tvtk_data else: raise TypeError, \ "first argument should be either a TVTK object"\ " or a mayavi source" if len(name) > 0: d.name = name mayavi = _make_default_figure() mayavi.add_source(d) return d def _traverse(node): """Traverse a tree accessing the nodes children attribute. """ try: for leaf in node.children: for leaflet in _traverse(leaf): yield leaflet except AttributeError: pass yield node def _find_data(object): """Goes up the vtk pipeline to find the data sources of a given object. 
""" if isinstance(object, ModuleManager): inputs = [object.source] elif hasattr(object, 'module_manager'): inputs = [object.module_manager.source] elif hasattr(object, 'data') or isinstance(object, ArraySource): inputs = [object] else: raise TypeError, 'Cannot find data source for given object' data_sources = [] try: while True: input = inputs.pop() if hasattr(input, 'inputs'): inputs += input.inputs elif hasattr(input, 'image_data'): data_sources.append(input.image_data) else: data_sources.append(input.data) except IndexError: pass return data_sources def _has_scalar_data(object): """Tests if an object has scalar data. """ data_sources = _find_data(object) for source in data_sources: if source.point_data.scalars is not None: return True elif source.cell_data.scalars is not None: return True return False def _has_vector_data(object): """Tests if an object has vector data. """ data_sources = _find_data(object) for source in data_sources: if source.point_data.vectors is not None: return True elif source.cell_data.vectors is not None: return True return False def _has_tensor_data(object): """Tests if an object has tensor data. """ data_sources = _find_data(object) for source in data_sources: if source.point_data.tensors is not None: return True elif source.cell_data.tensors is not None: return True return False def _find_module_manager(object=None, data_type=None): """If an object is specified, returns its module_manager, elsewhere finds the first module_manager in the scene. """ if object is None: for object in _traverse(gcf()): if isinstance(object, ModuleManager): if ((data_type == 'scalar' and not _has_scalar_data(object)) or (data_type == 'vector' and not _has_vector_data(object)) or (data_type == 'tensor' and not _has_tensor_data(object))): continue return object else: print("No object in the scene has a color map") else: if hasattr(object, 'module_manager'): if ((data_type == 'scalar' and _has_scalar_data(object)) or (data_type == 'vector' and _has_vector_data(object)) or (data_type == 'tensor' and _has_tensor_data(object)) or data_type is None): return object.module_manager else: print("This object has no %s data" % data_type) else: print("This object has no color map") return None def _orient_colorbar(colorbar, orientation): """Orients the given colorbar (make it horizontal or vertical). """ if orientation == "vertical": colorbar.orientation = "vertical" colorbar.width = 0.1 colorbar.height = 0.8 colorbar.position = (0.01, 0.15) elif orientation == "horizontal": colorbar.orientation = "horizontal" colorbar.width = 0.8 colorbar.height = 0.17 colorbar.position = (0.1, 0.01) else: print "Unknown orientation" gcf().render() def _typical_distance(data_obj): """ Returns a typical distance in a cloud of points. This is done by taking the size of the bounding box, and dividing it by the cubic root of the number of points. """ x_min, x_max, y_min, y_max, z_min, z_max = data_obj.bounds distance = scipy.sqrt(((x_max-x_min)**2 + (y_max-y_min)**2 + (z_max-z_min)**2)/(4* data_obj.number_of_points**(0.33))) if distance == 0: return 1 else: return 0.4*distance ###################################################################### # Data creation def scalarscatter(*args, **kwargs): """ Creates scattered scalar data. Function signatures ------------------- vectorscatter(s, ...) vectorscatter(x, y, z, s, ...) vectorscatter(x, y, z, f, ...) If only 1 array s is passed the x, y and z arrays are assumed to be made from the indices of vectors. 
If 4 positional arguments are passed the last one must be an array s, or acallable, f, that returns an array. Arguments --------- x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- z coordinates of the points of the mesh (optional). s -- scalar value f -- callable that is used to build the scalar data (only if 4 positional arguments are passed). Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Scattered scalars' extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. """ if len(args)==1: s = args[0] x, y, z = scipy.indices(s.shape) elif len(args)==4: x, y, z, s = args if callable(s): s = f(x, y, z) else: raise ValueError, "wrong number of arguments" assert ( x.shape == y.shape and y.shape == z.shape and s.shape == z.shape ), "argument shape are not equal" if 'extent' in kwargs: xmin, xmax, ymin, ymax, zmin, zmax = kwargs.pop('extent') x = xmin + x*(xmax - xmin)/float(x.max() - x.min()) -x.min() y = ymin + y*(ymax - ymin)/float(y.max() - y.min()) -y.min() z = zmin + z*(zmax - zmin)/float(z.max() - z.min()) -z.min() points = scipy.c_[x.ravel(), y.ravel(), z.ravel()] scalars = s.ravel() name = kwargs.pop('name', 'Scattered scalars') data = _make_glyph_data(points, None, scalars) data_obj = _add_data(data, name) return data_obj def vectorscatter(*args, **kwargs): """ Creates scattered vector data. Function signatures ------------------- vectorscatter(u, v, w, ...) vectorscatter(x, y, z, u, v, w, ...) vectorscatter(x, y, z, f, ...) If only 3 arrays u, v, w are passed the x, y and z arrays are assumed to be made from the indices of vectors. If 4 positional arguments are passed the last one must be a callable, f, that returns vectors. Arguments --------- x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- z coordinates of the points of the mesh (optional). u -- x coordinnate of the vector field v -- y coordinnate of the vector field w -- z coordinnate of the vector field f -- callable that is used to build the vector field (only if 4 positional arguments are passed). Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Scattered vector' extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. scalars -- The scalars associated to the vectors. Defaults to none. 
""" if len(args)==3: u, v, w = args x, y, z = scipy.indices(u.shape) elif len(args)==6: x, y, z, u, v, w = args elif len(args)==4: x, y, z, f = args assert callable(f), "when used with 4 arguments last argument must be callable" u, v, w = f(x, y, z) else: raise ValueError, "wrong number of arguments" assert ( x.shape == y.shape and y.shape == z.shape and z.shape == u.shape and u.shape == v.shape and v.shape == w.shape ), "argument shape are not equal" if 'extent' in kwargs: xmin, xmax, ymin, ymax, zmin, zmax = kwargs.pop('extent') x = xmin + x*(xmax - xmin)/float(x.max() - x.min()) -x.min() y = ymin + y*(ymax - ymin)/float(y.max() - y.min()) -y.min() z = zmin + z*(zmax - zmin)/float(z.max() - z.min()) -z.min() points = scipy.c_[x.ravel(), y.ravel(), z.ravel()] vectors = scipy.c_[u.ravel(), v.ravel(), w.ravel()] if 'scalars' in kwargs: scalars = kwargs['scalars'].ravel() else: scalars = None name = kwargs.pop('name', 'Scattered vectors') data = _make_glyph_data(points, vectors, scalars) data_obj = _add_data(data, name) return data_obj def scalarfield(*args, **kwargs): """ Creates a scalar field data. Function signatures ------------------- scalarfield(s, ...) scalarfield(x, y, z, s, ...) scalarfield(x, y, z, f, ...) If only 1 array s is passed the x, y and z arrays are assumed to be made from the indices of the s array. If the x, y and z arrays are passed they are supposed to have been generated by `numpy.mgrid`. The function builds a scalar field assuming the points are regularily spaced. Arguments --------- x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- z coordinates of the points of the mesh (optional). s -- scalar values. f -- callable that is used to build the scalar field (only if 4 positional arguments are passed). Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Scalar field' extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. """ # Get the keyword args. name = kwargs.get('name', 'Scalar field') if len(args)==1: s = args[0] x, y, z = scipy.indices(s.shape) elif len(args)==4: x, y, z, s = args if callable(s): s = f(x, y, z) else: raise ValueError, "wrong number of arguments" assert ( x.shape == y.shape and y.shape == z.shape and s.shape == z.shape ), "argument shape are not equal" if 'extent' in kwargs: xmin, xmax, ymin, ymax, zmin, zmax = kwargs.pop('extent') x = xmin + x*(xmax - xmin)/float(x.max() - x.min()) -x.min() y = ymin + y*(ymax - ymin)/float(y.max() - y.min()) -y.min() z = zmin + z*(zmax - zmin)/float(z.max() - z.min()) -z.min() points = scipy.c_[x.ravel(), y.ravel(), z.ravel()] dx = x[1, 0, 0] - x[0, 0, 0] dy = y[0, 1, 0] - y[0, 0, 0] dz = z[0, 0, 1] - z[0, 0, 0] data = ArraySource(scalar_data=s, origin=[points[0, 0], points[0, 1], points[0, 2]], spacing=[dx, dy, dz]) data_obj = _add_data(data, name) return data_obj def vectorfield(*args, **kwargs): """ Creates a vector field data. Function signatures ------------------- vectorfield(u, v, w, ...) vectorfield(x, y, z, u, v, w, ...) vectorfield(x, y, z, f, ...) If only 3 arrays u, v, w are passed the x, y and z arrays are assumed to be made from the indices of the u, v, w arrays. If the x, y and z arrays are passed they are supposed to have been generated by `numpy.mgrid`. The function builds a vector field assuming the points are regularily spaced. Arguments --------- x -- x coordinates of the points of the mesh (optional). 
y -- y coordinates of the points of the mesh (optional). z -- z coordinates of the points of the mesh (optional). u -- x coordinnate of the vector field v -- y coordinnate of the vector field w -- z coordinnate of the vector field f -- callable that is used to build the vector field (only if 4 positional arguments are passed). Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Vector field' extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. scalars -- The scalars associated to the vectors. Defaults to none. transpose_vectors -- If the additional argument transpose_vectors is passed, then the input vectors array is suitably transposed. By default transpose_vectors is True so that the array is in the correct format that VTK expects. However, a transposed array is not contiguous and thus a copy is made, this also means that any changes to the users input array will will not be reflected in the renderered object (provided you know how to do this). Thus, sometimes users might want to provide already transposed data suitably formatted. In these cases one should set transpose_vectors to False. Default value: True """ # Get the keyword args. transpose_vectors = kwargs.get('transpose_vectors', True) if len(args)==3: u, v, w = args x, y, z = scipy.indices(u.shape) elif len(args)==6: x, y, z, u, v, w = args elif len(args)==4: x, y, z, f = args assert callable(f), "when used with 4 arguments last argument must be callable" u, v, w = f(x, y, z) else: raise ValueError, "wrong number of arguments" assert ( x.shape == y.shape and y.shape == z.shape and z.shape == u.shape and u.shape == v.shape and v.shape == w.shape ), "argument shape are not equal" if 'extent' in kwargs: xmin, xmax, ymin, ymax, zmin, zmax = kwargs.pop('extent') x = xmin + x*(xmax - xmin)/float(x.max() - x.min()) -x.min() y = ymin + y*(ymax - ymin)/float(y.max() - y.min()) -y.min() z = zmin + z*(zmax - zmin)/float(z.max() - z.min()) -z.min() points = scipy.c_[x.ravel(), y.ravel(), z.ravel()] vectors = scipy.concatenate([u[..., scipy.newaxis], v[..., scipy.newaxis], w[..., scipy.newaxis] ], axis=3) if 'scalars' in kwargs: scalars = kwargs['scalars'] else: scalars = None name = kwargs.pop('name', 'Vector field') dx = x[1, 0, 0] - x[0, 0, 0] dy = y[0, 1, 0] - y[0, 0, 0] dz = z[0, 0, 1] - z[0, 0, 0] if not transpose_vectors: vectors.shape = vectors.shape[::-1] data = ArraySource(transpose_input_array=transpose_vectors, vector_data=vectors, scalar_data=scalars, origin=[points[0, 0], points[0, 1], points[0, 2]], spacing=[dx, dy, dz]) data_obj = _add_data(data, name) return data_obj ###################################################################### # Module creation def isosurface(data_obj, name='IsoSurface', transparent=True, contours=5): """ Applies the IsoSsurface mayavi module to the given VTK data object. """ mayavi.engine.current_object = data_obj iso = IsoSurface() # Check what type the 'contours' are and do whatever is needed. 
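    # A plain integer has no len(), so the TypeError below cleanly separates
    # a requested *number* of contours from an explicit *list* of contour values.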
contour_list = True try: len(contours) except TypeError: contour_list = False if contour_list: iso.contour.contours = contours else: assert type(contours) == int, "The contours argument must be an integer" assert contours > 1, "The contours argument must be positiv" iso.contour.set(auto_contours=True, number_of_contours=contours) mayavi.add_module(iso) if transparent: data_range = iso.module_manager.scalar_lut_manager.data_range iso.module_manager.scalar_lut_manager.lut.alpha_range = \ (0.2, 0.8) data_range = ( scipy.mean(data_range) + 0.4 * ( data_range.max() - data_range.min()) * scipy.array([-1, 1])) iso.scene.render() return iso def vectors(data_obj, color=None, name='Vectors', mode='2d', scale_factor=1.): """ Applies the Vectors mayavi module to the given VTK data object. """ v = Vectors(name=name) mayavi.engine.current_object = data_obj mayavi.add_module(v) mode_map = {'2d': 0, 'arrow': 1, 'cone': 2, 'cylinder': 3, 'sphere': 4, 'cube': 5, 'point': 6} if mode == 'point': v.glyph.glyph_source = tvtk.PointSource(radius=0, number_of_points=1) else: v.glyph.glyph_source = v.glyph.glyph_list[mode_map[mode]] if color: v.glyph.color_mode = 'no_coloring' v.actor.property.color = color elif _has_scalar_data(data_obj) : v.glyph.color_mode = 'color_by_scalar' else: v.glyph.color_mode = 'color_by_vector' v.glyph.glyph.scale_factor = scale_factor return v def glyph(data_obj, color=None, name='Glyph', mode='2d'): """ Applies the Glyph mayavi module to the given VTK data object. """ g = Glyph() mayavi.engine.current_object = data_obj mayavi.add_module(g) mode_map = {'2d': 0, 'arrow': 1, 'cone': 2, 'cylinder': 3, 'sphere': 4, 'cube': 5, 'point': 6} if mode == 'point': g.glyph.glyph_source = tvtk.PointSource(radius=0, number_of_points=1) else: g.glyph.glyph_source = g.glyph.glyph_list[mode_map[mode]] if color: g.actor.property.color = color if _has_scalar_data(data_obj) : g.glyph.color_mode = 'color_by_scalar' return g #FIXME : just started this procedure !! Need to modify the color so that if # none it warps a scalar. Need to add a kwarg for the source. def streamline(data_obj, color=None, name='Streamline', ): """ Applies the Streamline mayavi module to the given VTK data object. """ st = Streamline() mayavi.engine.current_object = data_obj mayavi.add_module(st) if color: st.actor.property.color = color elif _has_scalar_data(data_obj) : st.actor.mapper.scalar_visibility = True else: st.actor.mapper.interpolate_scalars_before_mapping = True st.actor.mapper.scalar_visibility = True return st ###################################################################### # Helper functions def quiver3d(*args, **kwargs): """ Plots glyphs (like arrows) indicating the direction of the vectors for a 3D volume of data supplied as arguments. Function signatures ------------------- quiver3d(vectordata, ...) quiver3d(u, v, w, ...) quiver3d(x, y, z, u, v, w, ...) quiver3d(x, y, z, f, ...) If only one positional argument is passed, it should be VTK data object with vector data. If only 3 arrays u, v, w are passed the x, y and z arrays are assumed to be made from the indices of vectors. If 4 positional arguments are passed the last one must be a callable, f, that returns vectors. Arguments --------- vectordata -- VTK data object with vector data, such as created by vectorscatter of vectorfield. x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- z coordinates of the points of the mesh (optional). 
u -- x coordinnate of the vector field v -- y coordinnate of the vector field w -- z coordinnate of the vector field f -- callable that is used to build the vector field (only if 4 positional arguments are passed). Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Quiver3D' mode -- This should be one of ['2d' (2d arrows), 'arrow', 'cone', 'cylinder', 'sphere', 'cube', 'point'] and depending on what is passed shows an appropriate glyph. It defaults to a 3d arrow. extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. scalars -- The scalars used to display the color of the glyphs. Defaults to the norm of the vectors. color -- The color of the glyphs in the absence of scalars. Default: (1., 0., 0.) autoscale -- Autoscale the size of the glyph. Default: True scale_factor -- Default 1 """ if len(args)==1: data_obj = args[0] else: data_kwargs = kwargs.copy() data_kwargs.pop('name','') if len(args)==6: data_obj = vectorscatter(*args, **data_kwargs) else: data_obj = vectorfield(*args, **data_kwargs) if not 'name' in kwargs: kwargs['name'] = 'Quiver3D' if not 'mode' in kwargs: kwargs['mode'] = 'arrow' if not 'autoscale' in kwargs or kwargs['autoscale']: scale_factor = kwargs.get('scale_facotr', 1.) kwargs['scale_factor'] = (scale_factor * _typical_distance(_find_data(data_obj)[0]) ) return vectors(data_obj, **kwargs) def contour3d(*args, **kwargs): """ Plots iso-surfaces for a 3D volume of data suplied as arguments. Function signatures ------------------- contour3d(scalars, ...) contour3d(scalarfield, ...) Arguments --------- scalars -- A 3D array giving the value of the scalar on a grid. scalarfield -- VTK data object with scalar field data, such as created by scalarfield. Keyword arguments ----------------- name -- The name of the vtk object created. Default: 'Contour3D' transpose_scalars -- If the additional argument transpose_scalars is passed, then the input scalar array is suitably transposed. By default transpose_scalars is True so that the array is in the correct format that VTK expects. However, a transposed array is not contiguous and thus a copy is made, this also means that any changes to the users input array will will not be reflected in the renderered object (provided you know how to do this). Thus, sometimes users might want to provide already transposed data suitably formatted. In these cases one should set transpose_scalars to False. Default value: True contours -- Integer/list specifying number/list of iso-surfaces. Specifying 0 shows no contours. Specifying a list of values will only give the requested contours asked for. Default: 3 extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the shape of the scalars transparent -- Boolean to make the opacity of the isosurfaces depend on the scalar. Default: True """ if len(args)==1: if hasattr(args[0], 'shape'): scalars = args[0] assert len(scalars.shape) == 3, 'Only 3D arrays are supported.' data_kwargs = kwargs.copy() data_kwargs.pop('contours', '') data_kwargs.pop('transparent', '') if not 'name' in kwargs: data_kwargs['name'] = 'Contour3D' data_obj = scalarfield(scalars, **data_kwargs) else: data_obj = args[0] else: raise TypeError, "contour3d takes only one argument" # Remove extra kwargs that are not needed for the isosurface. kwargs.pop('extent', '') kwargs.pop('name', '') return isosurface(data_obj, **kwargs) ###################################################################### # The Mlab functionality. 
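# Usage sketch for this section (full examples live in the test_* functions at
# the end of the file): surf(x, y, z) plots an elevation surface from 2D arrays,
# and plot3d(x, y, z, radius=0.05) draws a tube through the given points.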
def plot3d(x, y, z, radius=0.01, use_tubes=True, color=(1., 0., 0.) , name='Plot3D'): """Draws lines between points Arguments --------- x -- x coordinates of the points of the line y -- y coordinates of the points of the line z -- z coordinates of the points of the line Keyword arguments ----------------- color -- color of the line. Default: (1., 0., 0.) use_tubes -- Enables the drawing of the lines as tubes. Default: True radius -- radius of the tubes created. Default: 0.01 name -- The name of the vtk object created. Default: 'Line3D' """ assert ( x.shape == y.shape and y.shape == z.shape and s.shape == z.shape ), "argument shape are not equal" points = c_[x, y, z] np = len(points) - 1 lines = scipy.zeros((np, 2), 'l') lines[:,0] = scipy.arange(0, np-0.5, 1, 'l') lines[:,1] = scipy.arange(1, np+0.5, 1, 'l') pd = tvtk.PolyData(points=points, lines=lines) _add_data(pd, name) if use_tubes: filter = tvtk.TubeFilter(number_of_sides=6) filter.radius = radius f = FilterBase(filter=filter, name='TubeFilter') mayavi.add_filter(f) s = Surface() s.actor.mapper.scalar_visibility = False mayavi.add_module(s) s.actor.property.color = color return s def surf_regular(x, y, z, warp=True, scale=[1.0, 1.0, 1.0], name='SurfRegular', f_args=(), **f_kwargs): """Creates a surface given regularly spaced values of x, y and the corresponding z as arrays. Also works if z is a function. Currently works only for regular data - can be enhanced later. The x and y arrays give the grid line positions along x and y. Arguments --------- x -- 1D array of x points (regularly spaced) y -- 1D array of y points (regularly spaced) z -- A 2D array for the x and y points with x varying fastest and y next. Also will work if z is a callable which supports x and y arrays as the arguments. Keyword arguments ----------------- warp -- If true, warp the data to show a 3D surface (default = 1). scale -- Scale the x, y and z axis as per passed values. Defaults to [1.0, 1.0, 1.0]. f_args -- additional positional arguments for func() (default is empty) f_kwargs -- a dict of additional keyword arguments for func() (default is empty) """ data = mlab.make_surf_actor(x, y, z, warp, scale, make_actor=False, *f_args, **f_kwargs) _add_data(data, name) if warp: from enthought.mayavi.filters.warp_scalar import WarpScalar from enthought.mayavi.filters.poly_data_normals import PolyDataNormals w = WarpScalar() w.filter.scale_factor=scale[2] mayavi.add_filter(w) n = PolyDataNormals() n.filter.feature_angle = 45 mayavi.add_filter(n) s = Surface() mayavi.add_module(s) return s def surf_regular_c(x, y, z, warp=True, scale=[1.0, 1.0, 1.0], number_of_contours=10, name='SurfRegularC', f_args=(), **f_kwargs): """ MayaVi1's `imv.surf` like functionality that plots surfaces given x (1D), y(1D) and z (or a callable) arrays. Also plots contour lines. """ s1 = surf_regular(x, y, z, warp, scale, name, *f_args, **f_kwargs) data_src = s1.module_manager.source.inputs[0].inputs[0] s2 = Surface(name='Contours') data_src.add_child(s2) s2.enable_contours = True s2.contour.number_of_contours = number_of_contours return s1, s2 def tri_mesh(triangles, points, scalars=None, scalar_visibility=False, surface=False, color=(0.5, 1.0, 0.5), name='TriMesh'): """Given triangle connectivity and points, plots a mesh of them. 
""" data = mlab.make_triangle_polydata(triangles, points, scalars) _add_data(data, name) s = Surface() representation = 'w' if surface: representation = 's' s.actor.mapper.scalar_visibility = scalar_visibility mayavi.add_module(s) if surface: s.actor.property.set(diffuse=0.0, ambient=1.0, color=color, representation=representation) else: s.actor.property.set(diffuse=1.0, ambient=0.0, color=color, representation=representation) return s def mesh(x, y, z, scalars=None, scalar_visibility=False, surface=False, color=(0.5, 1.0, 0.5), name='Mesh'): """Given x, y generated from scipy.mgrid, and a z to go with it. Along with optional scalars. This class builds the triangle connectivity (assuming that x, y are from scipy.mgrid) and builds a mesh and shows it. Arguments --------- x -- An array of x coordinate values formed using scipy.mgrid. y -- An array of y coordinate values formed using scipy.mgrid. z -- An array of z coordinate values formed using scipy.mgrid. Keyword arguments ----------------- scalars -- An optional array of scalars to associate with the points. scalar_visibility -- A boolean switching the visibility of the scalars. color -- The color of the mesh in the absence of scalars. name -- The name of the vtk object created. Default is "Mesh" """ triangles, points = mlab.make_triangles_points(x, y, z, scalars) return tri_mesh(triangles, points, scalars, scalar_visibility=scalar_visibility, surface=surface, color=color, name=name) def surf(*args, **kwargs): """ Plots a surface using grid spaced data supplied as 2D arrays. Function signatures ------------------- surf(z, scalars=z, ...) surf(x, y, z, scalars=z, ...) If only one array z is passed the x and y arrays are assumed to be made of the indices of z. z is the elevation matrix. If no `scalars` argument is passed the color of the surface also represents the z matrix. Setting the `scalars` argument to None prevents this. Arguments --------- x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- A 2D array giving the elevation of the mesh. Also will work if z is a callable which supports x and y arrays as the arguments, but x and y must then be supplied. Keyword arguments ----------------- extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. scalars -- An array of the same shape as z that gives the color of the surface. This can also be a function that takes x and y as arguments. represention -- can be 'surface', 'wireframe' or 'points'. Default is 'surface' color -- The color of the mesh in the absence of scalars. name -- The name of the vtk object created. Default is "Surf" """ if len(args)==1: z = args[0] x, y = scipy.indices(z.shape) else: x, y, z = args if callable(z): z = z(x, y) if not 'scalars' in kwargs: kwargs['scalars'] = z if callable(kwargs['scalars']): kwargs['scalars'] = kwargs['scalars'](x, y) if 'color' in kwargs and kwargs['color']: kwargs['scalar_visibility'] = False if 'extent' in kwargs: xmin, xmax, ymin, ymax, zmin, zmax = kwargs.pop('extent') x = xmin + x*(xmax - xmin)/float(x.max() - x.min()) -x.min() y = ymin + y*(ymax - ymin)/float(y.max() - y.min()) -y.min() z = zmin + z*(zmax - zmin)/float(z.max() - z.min()) -z.min() return _surf(x, y, z, **kwargs) def _surf(x, y, z, scalars=None, scalar_visibility=True, color=(0.5, 1.0, 0.5), name='Surf', representation='surface'): """ Functions that does the work for "surf". 
It is called with the right number of arguments after the "surf" fonction does the magic to translate the user-supplied arguments into something this function understands. """ triangles, points = mlab.make_triangles_points(x, y, z, scalars) data = mlab.make_triangle_polydata(triangles, points, scalars) _add_data(data, name) s = Surface() s.actor.mapper.scalar_visibility = scalar_visibility mayavi.add_module(s) s.actor.property.color = color s.actor.property.represention = representation return s def contour_surf(*args, **kwargs): """ Plots the contours of a surface using grid spaced data supplied as 2D arrays. Function signatures ------------------- contour_surf(z, scalars=z, ...) contour_surf(x, y, z, scalars=z, ...) If only one array z is passed the x and y arrays are assumed to be made of the indices of z. z is the elevation matrix. If no `scalars` argument is passed the contours are contour lines of the elevation, elsewhere they are contour lines of the scalar array. Arguments --------- x -- x coordinates of the points of the mesh (optional). y -- y coordinates of the points of the mesh (optional). z -- A 2D array giving the elevation of the mesh. Also will work if z is a callable which supports x and y arrays as the arguments, but x and y must then be supplied. Keyword arguments ----------------- extent -- [xmin, xmax, ymin, ymax, zmin, zmax] Default is the x, y, z arrays extents. contours -- Integer/list specifying number/list of iso-surfaces. Specifying 0 shows no contours. Specifying a list of values will only give the requested contours asked for. Default: 10 scalars -- An array of the same shape as z that gives the scalar data to plot the contours of. This can also be a function that takes x and y as arguments. color -- The color of the contour lines. If None, this is given by the scalars. name -- The name of the vtk object created. Default is "Contour Surf" """ contours = kwargs.pop('contours', 10) if not 'name' in kwargs: kwargs['name'] = "Contour Surf" s = surf(*args, **kwargs) s.enable_contours = True # Check what type the 'contours' are and do whatever is needed. contour_list = True try: len(contours) except TypeError: contour_list = False if contour_list: s.contour.contours = contours s.contour.set(auto_contours=False) else: assert type(contours) == int, "The contours argument must be an integer" assert contours > 1, "The contours argument must be positiv" s.contour.set(auto_contours=True, number_of_contours=contours) return s def fancy_tri_mesh(triangles, points, scalars=None, scalar_visibility=False, tube_radius=0.05, sphere_radius=0.05, color=(0.5, 1.0, 0.5), name='FancyTriMesh'): """Plots the mesh using tubes and spheres so its fancier. """ data = mlab.make_triangle_polydata(triangles, points, scalars) _add_data(data, name) # Extract the edges. ef = tvtk.ExtractEdges() extract_edges = FilterBase(filter=ef, name='ExtractEdges') mayavi.add_filter(extract_edges) # Now show the lines with tubes. tf = tvtk.TubeFilter(radius=tube_radius) tube = FilterBase(filter=tf, name='TubeFilter') mayavi.add_filter(tube) s = Surface(name='Tubes') s.actor.mapper.scalar_visibility = scalar_visibility mayavi.add_module(s) s.actor.property.color = color # Show the points with glyphs. 
g = Glyph(name='Spheres') g.glyph.glyph_source = g.glyph.glyph_list[4] g.glyph.glyph_source.radius = sphere_radius extract_edges.add_child(g) g.glyph.scale_mode = 'data_scaling_off' g.actor.mapper.scalar_visibility=scalar_visibility g.actor.property.color = color def fancy_mesh(x, y, z, scalars=None, scalar_visibility=False, tube_radius=0.05, sphere_radius=0.05, color=(0.5, 1.0, 0.5), name='FancyMesh'): """Like mesh but shows the mesh using tubes and spheres. Arguments --------- x -- An array of x coordinate values formed using scipy.mgrid. y -- An array of y coordinate values formed using scipy.mgrid. z -- An array of z coordinate values formed using scipy.mgrid. Keyword arguments ----------------- scalars -- An optional array of scalars to associate with the points. """ triangles, points = mlab.make_triangles_points(x, y, z, scalars) return fancy_tri_mesh(triangles, points, scalars, scalar_visibility, tube_radius, sphere_radius, color, name=name) def imshow(arr, scale=[1.0, 1.0, 1.0], interpolate=False, lut_mode='blue-red', file_name='', name='ImShow'): """Allows one to view a 2D Numeric array as an image. This works best for very large arrays (like 1024x1024 arrays). Arguments --------- arr -- Array to be viewed. Keyword arguments ----------------- scale -- Scale the x, y and z axis as per passed values. Defaults to [1.0, 1.0, 1.0]. interpolate -- Boolean to interpolate the data in the image. """ # FIXME assert len(arr.shape) == 2, "Only 2D arrays can be viewed!" ny, nx = arr.shape dx, dy, junk = scipy.array(scale)*1.0 xa = scipy.arange(0, nx*scale[0] - 0.1*dx, dx, 'f') ya = scipy.arange(0, ny*scale[1] - 0.1*dy, dy, 'f') arr_flat = scipy.ravel(arr) min_val = min(arr_flat) max_val = max(arr_flat) sp = mlab._create_structured_points_direct(xa, ya) from enthought.mayavi.core.lut_manager import LUTManager lut = LUTManager(lut_mode=lut_mode, file_name=file_name) lut.data_range = min_val, max_val a = lut.lut.map_scalars(arr_flat, 0, 0) a.name = 'scalars' sp.point_data.scalars = a sp.scalar_type = 'unsigned_char' sp.number_of_scalar_components = 4 d = _add_data(sp, name) ia = ImageActor() ia.actor.interpolate = interpolate mayavi.add_module(ia) ###################################################################### # Non data-related drawing elements def outline(object=None, color=None, name='Outline'): """Creates an outline for the current data. Keyword arguments ----------------- object -- the object for which we create the outline (default None). color -- The color triplet, eg: ( 1., 0., 0.) """ from enthought.mayavi.modules.outline import Outline mayavi = _make_default_figure() scene = gcf() for obj in _traverse(scene): if isinstance(obj, Outline) and obj.name == name: o = obj break else: o = Outline(name=name) if object is not None: object.add_child(o) else: mayavi.add_module(o) if color is None: color = scene.scene.foreground if not color is None: o.actor.property.color = color return o def axes(color=None, xlabel=None, ylabel=None, zlabel=None, object=None, name='Axes'): """Creates an axes for the current data. Keyword arguments ----------------- color -- The color triplet, eg: (1., 0., 0.) xlabel -- the label of the x axis, default: '' ylabel -- the label of the y axis, default: '' zlabel -- the label of the z axis, default: '' object -- the object for which we create the axes. 
""" from enthought.mayavi.modules.axes import Axes mayavi = _make_default_figure() scene = gcf() for obj in _traverse(scene): if isinstance(obj, Axes) and obj.name == name: a = obj break else: a = Axes(name=name) if object is not None: object.add_child(a) else: mayavi.add_module(a) if color is None: color = scene.scene.foreground if xlabel is None: xlabel = '' if ylabel is None: ylabel = '' if zlabel is None: zlabel = '' if color is not None: a.property.color = color if xlabel is not None: a.axes.x_label = xlabel if ylabel is not None: a.axes.y_label = ylabel if zlabel is not None: a.axes.z_label = zlabel return a def figure(): """If you are running from IPython this will start up mayavi for you! This returns the current running MayaVi script instance. """ global mayavi, application if mayavi is not None and application.stopped is None: mayavi.new_scene() return mayavi m = Mayavi() m.main() m.script.new_scene() engine = m.script.engine mayavi = m.script application = m.application return mayavi def gcf(): """Return a handle to the current figure. """ return mayavi.engine.current_scene def clf(): """Clear the current figure. """ try: scene = gcf() scene.children[:] = [] except AttributeError: pass def xlabel(text): """Creates a set of axes if there isn't already one, and sets the x label """ return axes(xlabel=text) def ylabel(text): """Creates a set of axes if there isn't already one, and sets the y label """ return axes(ylabel=text) def zlabel(text): """ Creates a set of axes if there isn't already one, and sets the z label """ return axes(zlabel=text) def title(text=None, color=None, size=None, name='Title'): """Creates a title for the figure. Keyword arguments ----------------- text -- The text of the title, default: '' color -- The color triplet, eg: ( 1., 0., 0.) size -- The size, default: 1 """ from enthought.mayavi.modules.text import Text scene = gcf() for object in _traverse(scene): if isinstance(object, Text) and object.name==name: t = object break else: t = Text(name=name) mayavi.add_module(t) if color is None: color = scene.scene.foreground if text is None: text = 'title' if color is not None: t.property.color = color if text is not None: t.text = text if text is not None or size is not None: t.width = min(0.05*size*len(t.text), 1) t.x_position = 0.5*(1 - t.width) t.y_position = 0.8 return t def scalarbar(object=None, title=None, orientation=None): """Adds a colorbar for the scalar color mapping of the given object. If no object is specified, the first object with scalar data in the scene is used. Keyword arguments ----------------- title -- The title string orientation -- Can be 'horizontal' or 'vertical' """ module_manager = _find_module_manager(object=object, data_type="scalar") if module_manager is None: return if not module_manager.scalar_lut_manager.show_scalar_bar: if title is None: title = '' if orientation is None: orientation = 'horizontal' colorbar = module_manager.scalar_lut_manager.scalar_bar if title is not None: colorbar.title = title if orientation is not None: _orient_colorbar(colorbar, orientation) module_manager.scalar_lut_manager.show_scalar_bar = True return colorbar def vectorbar(object=None, title=None, orientation=None): """Adds a colorbar for the vector color mapping of the given object. If no object is specified, the first object with vector data in the scene is used. 
Keyword arguments ----------------- object -- Optional object to get the vector lut from title -- The title string orientation -- Can be 'horizontal' or 'vertical' """ module_manager = _find_module_manager(object=object, data_type="vector") if module_manager is None: return if not module_manager.vector_lut_manager.show_scalar_bar: title = '' orientation = 'horizontal' colorbar = module_manager.vector_lut_manager.scalar_bar if title is not None: colorbar.title = title if orientation is not None: _orient_colorbar(colorbar, orientation) module_manager.vector_lut_manager.show_scalar_bar = True return colorbar def colorbar(object=None, title=None, orientation=None): """Adds a colorbar for the color mapping of the given object. If the object has scalar data, the scalar color mapping is represented. Elsewhere the vector color mapping is represented, if available. If no object is specified, the first object with a color map in the scene is used. Keyword arguments ----------------- object -- Optional object to get the vector lut from title -- The title string orientation -- Can be 'horizontal' or 'vertical' """ colorbar = scalarbar(object=object, title=title, orientation=orientation) if colorbar is None: colorbar = vectorbar(object=object, title=title, orientation=orientation) return colorbar ###################################################################### # Test functions. ###################################################################### def test_arrows(): a = arrows([[-1,-1,-1],[1,0,0]], [[1,1,1],[0,1,0]], color=(1,0,0)) return a def test_plot3d(): """Generates a pretty set of lines.""" n_mer, n_long = 6, 11 pi = scipy.pi dphi = pi/1000.0 phi = scipy.arange(0.0, 2*pi + 0.5*dphi, dphi, 'd') mu = phi*n_mer x = scipy.cos(mu)*(1+scipy.cos(n_long*mu/n_mer)*0.5) y = scipy.sin(mu)*(1+scipy.cos(n_long*mu/n_mer)*0.5) z = scipy.sin(n_long*mu/n_mer)*0.5 l = plot3d(x, y, z, radius=0.05, color=(0.0, 0.0, 0.8)) return l def test_molecule(): """Generates and shows a Caffeine molecule.""" o = [[30, 62, 19],[8, 21, 10]] n = [[31, 21, 11], [18, 42, 14], [55, 46, 17], [56, 25, 13]] c = [[5, 49, 15], [30, 50, 16], [42, 42, 15], [43, 29, 13], [18, 28, 12], [32, 6, 8], [63, 36, 15], [59, 60, 20]] h = [[23, 5, 7], [32, 0, 16], [37, 5, 0], [73, 36, 16], [69, 60, 20], [54, 62, 28], [57, 66, 12], [6, 59, 16], [1, 44, 22], [0, 49, 6]] oxygen = spheres(o, radius=8, color=(1,0,0), name='Oxygen') nitrogen = spheres(n, radius=10, color=(0,0,1), name='Nitrogen') carbon = spheres(c, radius=10, color=(0,1,0), name='Carbon') hydrogen = spheres(h, radius=5, color=(1,1,1), name='Hydrogen') def test_trimesh(): """Test for simple triangle mesh.""" pts = scipy.array([[0.0,0,0], [1.0,0.0,0.0], [1,1,0]], 'd') triangles = [[0, 1, 2]] t1 = tri_mesh(triangles, pts) pts1 = pts.copy() pts1[:,2] = 1.0 t2 = fancy_tri_mesh(triangles, pts1) def test_surf_regular(contour=1): """Test Surf on regularly spaced co-ordinates like MayaVi.""" def f(x, y): sin, cos = scipy.sin, scipy.cos return sin(x+y) + sin(2*x - y) + cos(3*x+4*y) #return scipy.sin(x*y)/(x*y) x = scipy.arange(-7., 7.05, 0.1) y = scipy.arange(-5., 5.05, 0.05) if contour: s = surf_regular_c(x, y, f) else: s = surf_regular(x, y, f) return s def test_simple_surf(): """Test Surf with a simple collection of points.""" x, y = scipy.mgrid[0:3:1,0:3:1] z = x s = surf(x, y, z, scipy.asarray(z, 'd')) def test_surf(): """A very pretty picture of spherical harmonics translated from the octaviz example.""" pi = scipy.pi cos = scipy.cos sin = scipy.sin dphi, dtheta = pi/250.0, pi/250.0 
[phi,theta] = scipy.mgrid[0:pi+dphi*1.5:dphi,0:2*pi+dtheta*1.5:dtheta] m0 = 4; m1 = 3; m2 = 2; m3 = 3; m4 = 6; m5 = 2; m6 = 6; m7 = 4; r = sin(m0*phi)**m1 + cos(m2*phi)**m3 + sin(m4*theta)**m5 + cos(m6*theta)**m7 x = r*sin(phi)*cos(theta) y = r*cos(phi) z = r*sin(phi)*sin(theta); s = surf(x, y, z) def test_mesh_sphere(): """Create a simple sphere and test the mesh.""" pi = scipy.pi cos = scipy.cos sin = scipy.sin du, dv = pi/20.0, pi/20.0 phi, theta = scipy.mgrid[0.01:pi+du*1.5:du, 0:2*pi+dv*1.5:dv] r = 1.0 x = r*sin(phi)*cos(theta) y = r*sin(phi)*sin(theta) z = r*cos(phi) s = fancy_mesh(x, y, z, z, scalar_visibility=True, tube_radius=0.01, sphere_radius=0.025) def test_mesh(): """Create a fancy looking mesh (example taken from octaviz).""" pi = scipy.pi cos = scipy.cos sin = scipy.sin du, dv = pi/20.0, pi/20.0 u, v = scipy.mgrid[0.01:pi+du*1.5:du, 0:2*pi+dv*1.5:dv] x = (1- cos(u))*cos(u+2*pi/3) * cos(v + 2*pi/3.0)*0.5 y = (1- cos(u))*cos(u+2*pi/3) * cos(v - 2*pi/3.0)*0.5 z = cos(u-2*pi/3.) m = fancy_mesh(x, y, z, z, scalar_visibility=True, tube_radius=0.0075, sphere_radius=0.02) def test_imshow(): """Show a large random array.""" import enthought.util.randomx as RandomArray z_large = RandomArray.random((1024, 512)) i = imshow(z_large) def test_contour3d(): dims = [64, 64, 64] xmin, xmax, ymin, ymax, zmin, zmax = [-5,5,-5,5,-5,5] x, y, z = scipy.ogrid[xmin:xmax:dims[0]*1j, ymin:ymax:dims[1]*1j, zmin:zmax:dims[2]*1j] x = x.astype('f') y = y.astype('f') z = z.astype('f') sin = scipy.sin scalars = x*x*0.5 + y*y + z*z*2.0 contour3d(scalars, contours=4, show_slices=True) # Show an outline and zoom appropriately. outline() mayavi.engine.current_scene.scene.isometric_view() def test_quiver3d(): dims = [16, 16, 16] xmin, xmax, ymin, ymax, zmin, zmax = [-5,5,-5,5,-5,5] x, y, z = scipy.mgrid[xmin:xmax:dims[0]*1j, ymin:ymax:dims[1]*1j, zmin:zmax:dims[2]*1j] x = x.astype('f') y = y.astype('f') z = z.astype('f') sin = scipy.sin cos = scipy.cos u = cos(x) v = sin(y) w = sin(x*z) # All these work! #quiver3d(u, v, w) quiver3d(x, y, z, u, v, w) # Show an outline and zoom appropriately. outline() mayavi.engine.current_scene.scene.isometric_view() def test_quiver3d_2d_data(): dims = [32, 32] xmin, xmax, ymin, ymax = [-5,5,-5,5] x, y = scipy.mgrid[xmin:xmax:dims[0]*1j, ymin:ymax:dims[1]*1j] x = x.astype('f') y = y.astype('f') sin = scipy.sin cos = scipy.cos u = cos(x) v = sin(y) w = scipy.zeros_like(x) quiver3d(x, y, w, u, v, w) # Show an outline and zoom appropriately. outline() mayavi.engine.current_scene.scene.isometric_view() From jstrunk at enthought.com Sun Mar 25 20:52:49 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Sun, 25 Mar 2007 19:52:49 -0500 Subject: [SciPy-user] www.scipy.org appears to be down In-Reply-To: <20070325183259.GA1028@zunzun.com> References: <20070325183259.GA1028@zunzun.com> Message-ID: <200703251952.49248.jstrunk@enthought.com> Thank you. I restarted it at 5:30pm central. -Jeff On Sunday 25 March 2007 1:33 pm, zunzun at zunzun.com wrote: > Quick note: www.scipy.org appears to be down. > > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From ckkart at hoc.net Sun Mar 25 21:57:43 2007 From: ckkart at hoc.net (Christian) Date: Mon, 26 Mar 2007 10:57:43 +0900 Subject: [SciPy-user] odr thread safe? 
In-Reply-To: <4601998C.7010404@ee.byu.edu> References: <46016E74.9060309@gmail.com> <4601998C.7010404@ee.byu.edu> Message-ID: Travis Oliphant wrote: > > In NumPy for the ufuncs, we use PyThreadState_GetDict and then if that > is NULL use PyEval_GetBuiltins so that we always get a per-thread > dictionary to store information in. So I should store the members of the odr_global structure in the thread or builtin dict and read it from there when needed? I am not really experienced with the Python C API but I always looked for an argument to start digging into it. Currently however my tests segfault and I cannot compile the extension with debugging symbols. Isn't build --debug supposed to do that? Thanks for your help, Christian From emilia12 at mail.bg Mon Mar 26 06:29:35 2007 From: emilia12 at mail.bg (emilia12 at mail.bg) Date: Mon, 26 Mar 2007 13:29:35 +0300 Subject: [SciPy-user] examples in the scipy Example List Message-ID: <1174904975.ae8068fc3cd66@mail.bg> hi, is it possible to post only working examples in the "scipy Example List" (http://www.scipy.org/scipy_Example_List) for eg in "convolve()", variable "h1" is missing ! e. ----------------------------- SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? ??????????. http://www.bgscena.com/ From raphael.langella at steria.com Mon Mar 26 09:27:33 2007 From: raphael.langella at steria.com (raphael langella) Date: Mon, 26 Mar 2007 15:27:33 +0200 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: <10737139e1.139e110737@steria.com> ---- Messages d?origine ---- De: raphael langella Date: jeudi, mars 22, 2007 10:39 am Objet: Re: [SciPy-user] building numpy/scipy on Solaris > ---- Messages d?origine ---- > De: "David M. Cooke" > Date: jeudi, mars 22, 2007 9:41 am > Objet: Re: [SciPy-user] building numpy/scipy on Solaris > > > Ahh, just realized. We can do this completely with the C > compiler; the > > -xlic_lib=sunperf -xarch=v9b is only being used for Fortran. Try > > > > CFLAGS='-xlic_lib=sunperf -xarch=v9b' CPPFLAGS='- > > DNO_APPEND_FORTRAN' python setup.py install > > > > Since sunperf has C bindings, we don't need the Fortran compiler > at > > allfor Numpy (and I don't think it was being used in the first > place). > well, options for C don't have the same syntax as Fortran, so I used > '-mcpu=v9 -lsunperf' and got : > > ImportError: ld.so.1: python: fatal: relocation error: file > /Produits/sun/forte/7/SUNWspro/lib/libsunperf.so.4: symbol __f95_sign: > referenced symbol not found Well, any idea about this error? It would be neat to be able to build numpy with libsunperf. From cookedm at physics.mcmaster.ca Mon Mar 26 09:33:57 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 26 Mar 2007 09:33:57 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <10737139e1.139e110737@steria.com> References: <10737139e1.139e110737@steria.com> Message-ID: On Mar 26, 2007, at 09:27 , raphael langella wrote: > ---- Messages d?origine ---- > De: raphael langella > Date: jeudi, mars 22, 2007 10:39 am > Objet: Re: [SciPy-user] building numpy/scipy on Solaris > >> ---- Messages d?origine ---- >> De: "David M. Cooke" >> Date: jeudi, mars 22, 2007 9:41 am >> Objet: Re: [SciPy-user] building numpy/scipy on Solaris >> >>> Ahh, just realized. We can do this completely with the C >> compiler; the >>> -xlic_lib=sunperf -xarch=v9b is only being used for Fortran. 
Try >>> >>> CFLAGS='-xlic_lib=sunperf -xarch=v9b' CPPFLAGS='- >>> DNO_APPEND_FORTRAN' python setup.py install >>> >>> Since sunperf has C bindings, we don't need the Fortran compiler >> at >>> allfor Numpy (and I don't think it was being used in the first >> place). > >> well, options for C don't have the same syntax as Fortran, so I used >> '-mcpu=v9 -lsunperf' and got : >> >> ImportError: ld.so.1: python: fatal: relocation error: file >> /Produits/sun/forte/7/SUNWspro/lib/libsunperf.so.4: symbol >> __f95_sign: >> referenced symbol not found > > Well, any idea about this error? It would be neat to be able to build > numpy with libsunperf. Looks like you may have to link with the Fortran compiler, or link with the appropriate Fortran support libraries. You could try setting the LD environment variable to your Fortran compiler, and building again. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part URL: From nowar00 at hotmail.com Mon Mar 26 09:59:49 2007 From: nowar00 at hotmail.com (javi markez bigara) Date: 26 Mar 2007 06:59:49 -0700 Subject: [SciPy-user] problems with scipy.gplt.pyPlot Message-ID: Date: Mon, 26 Mar 2007 13:59:49 +0000 Mime-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1; format=flowed hi, i am new in this type of forums, i am looking for the solution to a problem coming from my low knoledge of python. well, so i would like to plot a list of coordenates using scipy this is wat i write: draw=scipy.gplt.pyPlot.Plot() draw.plot(newlist1) newlist is a list of longitude latitude like [-4.6925871,45.36987745] each line i have like 140000 lines like this one. so whenever i run the program i get a plot but the x axe is the line number althought i get in the y axe two lines one with the longitude ane other one with the latitude . i would like to get one point 4 each line can anybody help me please....... thank in advance _________________________________________________________________ Grandes ?xitos, superh?roes, imitaciones, cine y TV... http://es.msn.kiwee.com/ Lo mejor para tu m?vil. From wachao at cse.ohio-state.edu Mon Mar 26 10:04:38 2007 From: wachao at cse.ohio-state.edu (chao wang) Date: Mon, 26 Mar 2007 10:04:38 -0400 (EDT) Subject: [SciPy-user] Two questions about maxentropy module Message-ID: Good day! It's nice to see the scipy.maxentropy module. I have two questions about this module. Q1) The current model fitting algorithms include CG, BFGS, LBFGSB, Powell, and Nelder-Mead. How about L-BFGS (unbounded optimization)? Is it implemented or is there any specific concern of not using it here? Also, it seems that LBFGSB does not work ... Q2) bergerexamplesimulated.py example shows how to use bigmodel. However, it requires the event space to be explicitly enumerated (the enumerated event space is used when defining a uniform instrumental distribution for sampling), which is intractable for large event space. E.g., if I wanna model a maxent distribution over 100 binary variables. What change should I make to the example code? Any advice appreciated. Thanks! 
From nowar00 at hotmail.com Mon Mar 26 11:09:43 2007 From: nowar00 at hotmail.com (javi markez bigara) Date: Mon, 26 Mar 2007 15:09:43 +0000 Subject: [SciPy-user] FW: problems with scipy.gplt.pyPlot Message-ID: >From: "javi markez bigara" >Reply-To: SciPy Users List >To: SciPy-user at scipy.org >Subject: [SciPy-user] problems with scipy.gplt.pyPlot >Date: 26 Mar 2007 06:59:49 -0700 > >Date: Mon, 26 Mar 2007 13:59:49 +0000 >Mime-Version: 1.0 >Content-Type: text/plain; charset=iso-8859-1; format=flowed > >hi, i am new in this type of forums, i am looking for the solution to a >problem coming from my low knoledge of python. well, so i would like to >plot >a list of coordenates using scipy this is wat i write: > >draw=scipy.gplt.pyPlot.Plot() >draw.plot(newlist1) > > >newlist is a list of longitude latitude > >like [-4.6925871,45.36987745] each line >i have like 140000 lines like this one. so whenever i run the program >i get a plot but the x axe is the line number >althought i get in the y axe two lines one with the longitude ane other one >with the latitude . >i would like to get one point 4 each line >can anybody help me please....... > >thank in advance > >_________________________________________________________________ >Grandes ?xitos, superh?roes, imitaciones, cine y TV... >http://es.msn.kiwee.com/ Lo mejor para tu m?vil. > _________________________________________________________________ Descarga gratis la Barra de Herramientas de MSN http://www.msn.es/usuario/busqueda/barra?XAPID=2031&DI=1055&SU=http%3A//www.hotmail.com&HL=LINKTAG1OPENINGTEXT_MSNBH -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: text3.txt URL: From schaouette at free.fr Mon Mar 26 11:35:28 2007 From: schaouette at free.fr (Gilles =?utf-8?q?Gr=C3=A9goire?=) Date: Mon, 26 Mar 2007 17:35:28 +0200 Subject: [SciPy-user] examples in the scipy Example List In-Reply-To: <1174904975.ae8068fc3cd66@mail.bg> References: <1174904975.ae8068fc3cd66@mail.bg> Message-ID: <200703261735.29178.schaouette@free.fr> Le Lundi 26 Mars 2007 12:29, emilia12 at mail.bg a ?crit?: > hi, Hello! > > is it possible to post only working examples in the "scipy > Example List" (http://www.scipy.org/scipy_Example_List) I changed the examples so that they should all work using copy/paste now. --Gilles > > for eg in "convolve()", variable "h1" is missing ! > > e. > > > > ----------------------------- > > SCENA - ???????????? ????????? ???????? ?? ??????? ??????????? ? > ??????????. http://www.bgscena.com/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From scipy-user at monte-stello.com Mon Mar 26 12:22:38 2007 From: scipy-user at monte-stello.com (Antoine Sirinelli) Date: Mon, 26 Mar 2007 18:22:38 +0200 Subject: [SciPy-user] [Cookbook] FiltFilt does not work Message-ID: <20070326162238.GA10284@monte-stello.com> Hi, If you try to run the FiltFilt script presented in the Cookbook (http://scipy.org/Cookbook/FiltFilt) you will see that the result is not the one presented. I have the problem with these versions: Python: 2.4.4 numpy: 1.0.1 scipy: 0.5.1 Do you experiment the same problem ? Antoine From Axel.Kowald at rub.de Mon Mar 26 12:36:51 2007 From: Axel.Kowald at rub.de (Axel Kowald) Date: Mon, 26 Mar 2007 18:36:51 +0200 Subject: [SciPy-user] Scipy and Multiple Regression ?? 
Message-ID: <4607F6A3.5030105@rub.de>

Hi everybody,
I'm new to scipy and wondered if I can also do multiple regression with it? I only found routines for normal linear regression.
Thanx,
Axel

From robert.kern at gmail.com Mon Mar 26 12:58:32 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 26 Mar 2007 11:58:32 -0500
Subject: Re: [SciPy-user] Scipy and Multiple Regression ??
In-Reply-To: <4607F6A3.5030105@rub.de>
References: <4607F6A3.5030105@rub.de>
Message-ID: <4607FBB8.9060809@gmail.com>

Axel Kowald wrote:
> Hi everybody,
>
> I'm new to scipy and wondered if I can also do multiple regression with it?

What exactly do you mean by "multiple regression"? If you mean linear least-squares with models of the form

y = a0 + a1*x + a2*x**2 + ...

or

y = a0 + a1*x1 + a2*x2 + ...

then scipy.linalg.lstsq() should work for you.

> I only found routines for normal linear regression.

There's scipy.optimize.leastsq() for nonlinear least-squares regression. There's scipy.odr for nonlinear ordinary least-squares, orthogonal distance regression, and implicit regression (models of the form f(x)=0), all with as many input dimensions as you like.

--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From steveire at gmail.com Mon Mar 26 12:59:16 2007
From: steveire at gmail.com (Stephen Kelly)
Date: Mon, 26 Mar 2007 17:59:16 +0100
Subject: Re: [SciPy-user] Arrayfns in numpy?
In-Reply-To: <18fbbe5a0703140833o4cc196a4pa653439f60a6bd4e@mail.gmail.com>
Message-ID: <18fbbe5a0703260959q60c34428r9dc5b62e49a499ae@mail.gmail.com>

Here is the function I'm using after the advice of Alok Singhal above. Just thought I'd post it in case someone else is looking for something similar.

def interp(y_values, x_values, new_x_values):
    """
    Perform interpolation similar to deprecated arrayfns.interp in Numeric.
    interp(y, x, z) = y(z) interpolated by treating y(x) as a piecewise fcn.
    """
    x = array(x_values, dtype='float')
    y = array(y_values, dtype='float')
    xx = array(new_x_values, dtype='float')
    # High indices
    hi = searchsorted(x, xx)
    # Low indices
    lo = hi - 1
    slopes = (y[hi] - y[lo])/(x[hi] - x[lo])
    # Interpolated data
    yy = y[lo] + slopes*(xx - x[lo])
    return yy

It gives almost identical results for my data, except for points out of range, for which it creates zeros unlike arrayfns.
Thanks again for the help.
Stephen.

From jonathan.taylor at stanford.edu Mon Mar 26 13:01:21 2007
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Mon, 26 Mar 2007 13:01:21 -0400
Subject: Re: [SciPy-user] Scipy and Multiple Regression ??
In-Reply-To: <4607F6A3.5030105@rub.de>
References: <4607F6A3.5030105@rub.de>
Message-ID: <4607FC61.6030900@stanford.edu>

There's a package in the sandbox called models that allows you to do this -- hopefully it will move out of the sandbox soon.

There is also code for:
- generalized linear models
- robust regression (M-estimators)
- Cox proportional hazard
- mixed models
- smoothing splines / generalized additive models

The last two items need work.
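A side note on the interp() helper posted above (Arrayfns thread): points outside the range of x_values are not handled there, as the poster mentions. A hedged sketch of a variant that clamps such points to the end values (which is also how numpy.interp behaves, if that function is available in your version), assuming as before that x_values is sorted in ascending order:

import numpy as np

def interp_clamped(y_values, x_values, new_x_values):
    x = np.asarray(x_values, dtype=float)
    y = np.asarray(y_values, dtype=float)
    xx = np.asarray(new_x_values, dtype=float)
    # Clip the segment indices so out-of-range points reuse the end segments.
    hi = np.clip(np.searchsorted(x, xx), 1, len(x) - 1)
    lo = hi - 1
    slopes = (y[hi] - y[lo]) / (x[hi] - x[lo])
    yy = y[lo] + slopes * (xx - x[lo])
    # Clamp anything outside the data range to the end values.
    yy[xx < x[0]] = y[0]
    yy[xx > x[-1]] = y[-1]
    return yy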
Axel Kowald wrote: > Hi everybody, > > I'm new to scipy and wondered it I can also do multiple regression with it ? > I only found routines for normal linear regression. > > Thanx, > > Axel > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lorenzo.isella at gmail.com Mon Mar 26 13:07:04 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 26 Mar 2007 19:07:04 +0200 Subject: [SciPy-user] Automatic Indentation Message-ID: Dear All, Again a newbie question: I am postprocessing some data files with Python. Let us say they are named {filea001...filea100}, {fileb001...fileb100}, {filec001...filec100} and so on. I need to perform different operations on the three different data sets, and I wrote a code for that. The problem is that in the future the three data sets will not be present simultaneously and I thought about simply defining a parameter which assumes values {1,2,3} depending on which datasets are present and using some if-conditions to tell the code which data it should read and process, without changing dramatically the code structure. Now, the problem is that I cannot simply encapsulate large parts of the code into an if condition without also changing the indentation and this is getting tedious and error prone. I am sure I cannot be the first Python user to have come across this, so I would like to know how more experienced users deal with this. Kind Regards Lorenzo Isella From zunzun at zunzun.com Mon Mar 26 13:37:21 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Mon, 26 Mar 2007 13:37:21 -0400 Subject: [SciPy-user] Scipy and Multiple Regression ?? In-Reply-To: <4607FC61.6030900@stanford.edu> References: <4607F6A3.5030105@rub.de> <4607FC61.6030900@stanford.edu> Message-ID: <20070326173720.GA28763@zunzun.com> That will be very, very useful - thanks to all who are working on it. I'd sure like to add the M-estimators to my site. I'm working on adding covariance matrix estimators to Robert Kern's existing odr fortran wrappers if you need them. This will allow for good standard fit statistics for nonlinear models when combined with http://www.scipy.org/Cookbook/OLS James Phillips http://zunzun.com On Mon, Mar 26, 2007 at 01:01:21PM -0400, Jonathan Taylor wrote: > > There is also code for: > > - generalized linear models > - robust regression (M-estimators) > - Cox proportional hazard From zunzun at zunzun.com Mon Mar 26 13:44:41 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Mon, 26 Mar 2007 13:44:41 -0400 Subject: [SciPy-user] Automatic Indentation In-Reply-To: References: Message-ID: <20070326174441.GC28763@zunzun.com> I wound up wrapping my data in an abstracted Data Object class and changing my code to use that. Kind of like the Data class in the ODR code, sorta. Doing so greatly simplified my programming as the data is all encapsulated in the object, and it allows me to change the data handling separately from the other code. James Phillips http://zunzun.com On Mon, Mar 26, 2007 at 07:07:04PM +0200, Lorenzo Isella wrote: > I need to perform different operations on the three different data > sets, and I wrote a code for that. > The problem is that in the future the three data sets will not be > present simultaneously... 
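On the indentation question quoted just above, one common pattern is to move the per-dataset processing into functions and select them by which files are actually present, so nothing has to be re-indented when a data set disappears. A minimal sketch, in which the file patterns and the processing bodies are placeholders only:

import glob

def process_a(filenames):
    # ... existing post-processing code for the "a" data set goes here ...
    print('processing %d "a" files' % len(filenames))

def process_b(filenames):
    print('processing %d "b" files' % len(filenames))

def process_c(filenames):
    print('processing %d "c" files' % len(filenames))

# Map each file pattern to its handler, then run only the handlers whose
# files exist.  Adding or removing a data set touches this table, not the
# indentation of the processing code.
handlers = [('filea*', process_a), ('fileb*', process_b), ('filec*', process_c)]
for pattern, handler in handlers:
    files = sorted(glob.glob(pattern))
    if files:
        handler(files)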
From emilia12 at mail.bg Mon Mar 26 14:12:35 2007
From: emilia12 at mail.bg (emilia12 at mail.bg)
Date: Mon, 26 Mar 2007 21:12:35 +0300
Subject: [SciPy-user] [Cookbook] FiltFilt does not work
Message-ID: <1174932755.baa896d35359e@mail.bg>

Hi Antoine,
I am on Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32, numpy 1.0.1, scipy 0.5.1, and my results are very close to the curves in the picture from the example. The difference is only in the noise (the signal has a small random part, which is unique for each calculation).
e.

From joseph.a.crider at boeing.com Mon Mar 26 14:24:21 2007
From: joseph.a.crider at boeing.com (Crider, Joseph A)
Date: Mon, 26 Mar 2007 13:24:21 -0500
Subject: Re: [SciPy-user] Automatic Indentation
Message-ID:

I frequently use jEdit (www.jedit.org) for editing code, and it has a command on the Edit menu to move the indentation of a selected block either left or right, which has been very useful for me in similar situations. I've encountered similar features on some other editors, but I can't recall which ones right off.
J. Allen Crider

-----Original Message-----
From: Lorenzo Isella [mailto:lorenzo.isella at gmail.com]
Sent: Monday, March 26, 2007 12:07 PM
To: scipy-user at scipy.org
Subject: [SciPy-user] Automatic Indentation

From elcorto at gmx.net Mon Mar 26 15:41:12 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Mon, 26 Mar 2007 21:41:12 +0200
Subject: Re: [SciPy-user] Automatic Indentation
Message-ID: <460821D8.40309@gmx.net>

Crider, Joseph A wrote:
> I frequently use jEdit (www.jedit.org) for editing code, and it has a
> command on the Edit menu to move indentation of a selected block either
> left or right, which has been very useful for me in similar situations.

Almost every text editor offers some functionality of this kind. I favour vim, and there it is CTRL-v for "visual block mode", then mark your block and > for moving the block 'shiftwidth' characters to the right, which should be equal to your tab setting (set shiftwidth=4, set tabstop=4 in my case).
Anyway, if you have to indent very large code chunks, you're probably writing "spaghetti code" and should consider packing parts of it into functions and classes to keep the clarity :)

--
cheers,
steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

From scipy-user at monte-stello.com Mon Mar 26 16:48:55 2007
From: scipy-user at monte-stello.com (Antoine Sirinelli)
Date: Mon, 26 Mar 2007 21:48:55 +0100
Subject: Re: [SciPy-user] [Cookbook] FiltFilt does not work
In-Reply-To: <1174932755.baa896d35359e@mail.bg>
References: <1174932755.baa896d35359e@mail.bg>
Message-ID: <20070326204851.GB15905@localhost.localdomain>

On Mon, Mar 26, 2007 at 09:12:35PM +0300, emilia12 at mail.bg wrote:
> and my results are very close to curves on the picture from the example
> the difference is only in the noise (the signal have a small random part,
> which is unique for each calculation)

I agree that the random part has to be different, but as you can see in the attached figure, the last points show a slightly big difference. The filtered signal goes away from the noisy signal.
Antoine

(attachment: filtfilt.png)

From dbecker at alum.dartmouth.org Mon Mar 26 21:13:35 2007
From: dbecker at alum.dartmouth.org (Dan Becker)
Date: Tue, 27 Mar 2007 01:13:35 +0000 (UTC)
Subject: [SciPy-user] Finding lots of roots (probably with brentq)
Message-ID:

Hi,
I have a single equation whose roots I would like to find with many sets of arguments. I can do this by iterating over the sets of arguments, but that doesn't feel very elegant. I'd rather send the arrays or a matrix of arguments to brentq once. To be concrete, consider this code:
---
import time
from scipy.optimize import brentq
from numpy import ones

def tominimize(x,y):
    return x**2-y

t2=time.time()
for thisArg in xrange(1000):
    c=brentq(tominimize,0,100,thisArg)
print(time.time()-t2)

I wonder if this might be faster (and prettier) if I could call brentq once with a whole set of arguments, as in
brentq(tominimize,0,100,[thisArg for thisArg in xrange(1000)])
I haven't figured out how to do this without getting type errors.
Thanks for any help!
Dan

From Axel.Kowald at rub.de Tue Mar 27 03:32:56 2007
From: Axel.Kowald at rub.de (Axel Kowald)
Date: Tue, 27 Mar 2007 09:32:56 +0200
Subject: Re: [SciPy-user] Scipy and Multiple Regression ??
Message-ID: <4608C8A8.5060703@rub.de>

Hi,
thanks for the quick reply. Yes, I want to do linear least-squares and it seems scipy.linalg.lstsq() is what I'm looking for.
Many thanks,
axel

> What exactly do you mean by "multiple regression"? If you mean linear
> least-squares with models of the form
>
> y = a0 + a1*x + a2*x**2 + ...
>
> or
>
> y = a0 + a1*x1 + a2*x2 + ...
>
> then scipy.linalg.lstsq() should work for you.

From nowar00 at hotmail.com Tue Mar 27 08:22:15 2007
From: nowar00 at hotmail.com (javi markez bigara)
Date: Tue, 27 Mar 2007 12:22:15 +0000
Subject: [SciPy-user] plotting with scipy
Message-ID:

hi, i am looking for the easy way to plot some coordinates with scipy.gplt.pyPlot this is how i try it but is not working as i would like to....
the new list is a list of pair coordinates something like this: [-6.654644,65.55555] [-6.645654,65.98798] [-6.987813,65.28951] [-6.645654,65.65987] [-6.634895,65.36985] >>>import scipy.plt >>>from scipy import gplt >>>draw=scipy.gplt.pyPlot.Plot() >>>draw.plot(newlist) >>>draw.title('little map') >>>draw.xtitle('longitude') >>>draw.ytitle('latitude') the thing is that i get in the x axes the number of list and then two strait lines 65_________________________________ 50 35 20 5 -10------------------------------------------------- 0 1 2 3 4 5 ............ this is wat it plots more or less can anybody help me. thanks in advance Nowar00 _________________________________________________________________ Descarga gratis la Barra de Herramientas de MSN http://www.msn.es/usuario/busqueda/barra?XAPID=2031&DI=1055&SU=http%3A//www.hotmail.com&HL=LINKTAG1OPENINGTEXT_MSNBH From ryanlists at gmail.com Tue Mar 27 08:30:48 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 27 Mar 2007 07:30:48 -0500 Subject: [SciPy-user] plotting with scipy In-Reply-To: References: Message-ID: I think almost all users of scipy use matplotlib instead of scipy's plotting capabilities. It is really a great package especially if you only need 2D plotting. Check out matplotlib.sourceforge.net. Ryan On 3/27/07, javi markez bigara wrote: > hi, > > i am looking for the easy way to plot some coordinates with > scipy.gplt.pyPlot this is how i try it but is not working as i would like > to.... > the new list is a list of pair coordinates something like this: > > [-6.654644,65.55555] > [-6.645654,65.98798] > [-6.987813,65.28951] > [-6.645654,65.65987] > [-6.634895,65.36985] > > >>>import scipy.plt > >>>from scipy import gplt > >>>draw=scipy.gplt.pyPlot.Plot() > >>>draw.plot(newlist) > >>>draw.title('little map') > >>>draw.xtitle('longitude') > >>>draw.ytitle('latitude') > > the thing is that i get in the x axes the number of list and then two strait > lines > > > 65_________________________________ > 50 > 35 > 20 > 5 > -10------------------------------------------------- > 0 1 2 3 4 5 ............ > > this is wat it plots more or less > can anybody help me. > thanks in advance > Nowar00 > > _________________________________________________________________ > Descarga gratis la Barra de Herramientas de MSN > http://www.msn.es/usuario/busqueda/barra?XAPID=2031&DI=1055&SU=http%3A//www.hotmail.com&HL=LINKTAG1OPENINGTEXT_MSNBH > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nowar00 at hotmail.com Tue Mar 27 08:51:11 2007 From: nowar00 at hotmail.com (javi markez bigara) Date: Tue, 27 Mar 2007 12:51:11 +0000 Subject: [SciPy-user] plotting with scipy Message-ID: >From: "javi markez bigara" >Reply-To: SciPy Users List >To: SciPy-user at scipy.org >Subject: [SciPy-user] plotting with scipy >Date: Tue, 27 Mar 2007 12:22:15 +0000 > >hi, > >i am looking for the easy way to plot some coordinates with >scipy.gplt.pyPlot this is how i try it but is not working as i would like >to.... 
>the new list is a list of pair coordinates something like this: > >[-6.654644,65.55555] >[-6.645654,65.98798] >[-6.987813,65.28951] >[-6.645654,65.65987] >[-6.634895,65.36985] > > >>>import scipy.plt > >>>from scipy import gplt > >>>draw=scipy.gplt.pyPlot.Plot() > >>>draw.plot(newlist) > >>>draw.title('little map') > >>>draw.xtitle('longitude') > >>>draw.ytitle('latitude') > >the thing is that i get in the x axes the number of list and then two >strait >lines > > >65_________________________________ >50 >35 >20 >5 >-10------------------------------------------------- >0 1 2 3 4 5 ............ >(i would like to get a point for each line.......)any sugestions >this is wat it plots more or less >can anybody help me. >thanks in advance >Nowar00 > >_________________________________________________________________ >Descarga gratis la Barra de Herramientas de MSN >http://www.msn.es/usuario/busqueda/barra?XAPID=2031&DI=1055&SU=http%3A//www.hotmail.com&HL=LINKTAG1OPENINGTEXT_MSNBH > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Dale rienda suelta a tu tiempo libre. Mil ideas para exprimir tu ocio con MSN Entretenimiento. http://entretenimiento.msn.es/ From raphael.langella at steria.com Tue Mar 27 09:21:54 2007 From: raphael.langella at steria.com (raphael langella) Date: Tue, 27 Mar 2007 15:21:54 +0200 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: OK, I tried so many different ways of compiling numpy with libsunperf, I think I'm heading nowhere. Let's try with Atlas. It's much more documented and tested. I get this when compiling numpy : gcc -shared build/temp.solaris-2.8-sun4u-2.5/numpy/core/blasdot/_dotblas.o -L/tmp/ATLAS/lib/SunOS_SunUSIII_8 -llapack -lf77blas -lcblas -latlas -o build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so Text relocation remains referenced against symbol offset in file ATL_zupKBmm32_8_1_b0 0x514 /tmp/ATLAS/lib/SunOS_SunUSIII_8/libatlas.a(ATL_zupKBmm_b0.o) ... (hundreds of similar lines) ... ATL_zaxpby_aX_bXi0 0x1a8 /tmp/ATLAS/lib/SunOS_SunUSIII_8/libatlas.a(ATL_zaxpby.o) ld: fatal: relocations remain against allocatable but non-writable sections collect2: ld returned 1 exit status error: Command "gcc -shared build/temp.solaris-2.8-sun4u-2.5/numpy/core/blasdot/_dotblas.o -L/tmp/ATLAS/lib/SunOS_SunUSIII_8 -llapack -lf77blas -lcblas -latlas -o build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so" failed with exit status 1 by the way, my site.cfg looks like this : [DEFAULT] library_dirs = /tmp/ATLAS/lib/SunOS_SunUSIII_8 include_dirs = /tmp/ATLAS/include/SunOS_SunUSIII_8 [atlas] atlas_libs = lapack, f77blas, cblas, atlas I also tried to add search_static_first = 0 but it didn't change anything. 
So, I converted all the atlas library to dynamic versions and tried again : building 'numpy.core._dotblas' extension compiling C sources C compiler: gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC creating build/temp.solaris-2.8-sun4u-2.5/numpy/core/blasdot compile options: '-DNO_ATLAS_INFO=2 -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/Produits/publics/sparc.SunOS.5.8/python/2.5.0/include/python2.5 -c' gcc: numpy/core/blasdot/_dotblas.c gcc -shared build/temp.solaris-2.8-sun4u-2.5/numpy/core/blasdot/_dotblas.o -L/tmp/ATLAS/lib/SunOS_SunUSIII_8 -llapack -lf77blas -lcblas -latlas -o build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so ld: fatal: symbol `lsame_' is multiply-defined: (file /tmp/ATLAS/lib/SunOS_SunUSIII_8/liblapack.so type=FUNC; file /tmp/ATLAS/lib/SunOS_SunUSIII_8/libf77blas.so type=FUNC); ld: fatal: symbol `xerbla_' is multiply-defined: (file /tmp/ATLAS/lib/SunOS_SunUSIII_8/liblapack.so type=FUNC; file /tmp/ATLAS/lib/SunOS_SunUSIII_8/libf77blas.so type=FUNC); ld: fatal: File processing errors. No output written to build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so collect2: ld returned 1 exit status ld: fatal: symbol `lsame_' is multiply-defined: (file /tmp/ATLAS/lib/SunOS_SunUSIII_8/liblapack.so type=FUNC; file /tmp/ATLAS/lib/SunOS_SunUSIII_8/libf77blas.so type=FUNC); ld: fatal: symbol `xerbla_' is multiply-defined: (file /tmp/ATLAS/lib/SunOS_SunUSIII_8/liblapack.so type=FUNC; file /tmp/ATLAS/lib/SunOS_SunUSIII_8/libf77blas.so type=FUNC); ld: fatal: File processing errors. No output written to build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so collect2: ld returned 1 exit status error: Command "gcc -shared build/temp.solaris-2.8-sun4u-2.5/numpy/core/blasdot/_dotblas.o -L/tmp/ATLAS/lib/SunOS_SunUSIII_8 -llapack -lf77blas -lcblas -latlas -o build/lib.solaris-2.8-sun4u-2.5/numpy/core/_dotblas.so" failed with exit status 1 I tried with both GNU and Sun ld with same result. Any idea ? -------------- next part -------------- _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From nowar00 at hotmail.com Tue Mar 27 09:22:27 2007 From: nowar00 at hotmail.com (javi markez bigara) Date: Tue, 27 Mar 2007 13:22:27 +0000 Subject: [SciPy-user] plotting with scipy In-Reply-To: Message-ID: thanks a lot for ur reply i will download it but the thig is that i dont know were to put it them to import it from python..... thanks a lot Ryan nowar >From: "Ryan Krauss" >Reply-To: SciPy Users List >To: "SciPy Users List" >Subject: Re: [SciPy-user] plotting with scipy >Date: Tue, 27 Mar 2007 07:30:48 -0500 > >I think almost all users of scipy use matplotlib instead of scipy's >plotting capabilities. It is really a great package especially if you >only need 2D plotting. Check out matplotlib.sourceforge.net. > >Ryan > >On 3/27/07, javi markez bigara wrote: > > hi, > > > > i am looking for the easy way to plot some coordinates with > > scipy.gplt.pyPlot this is how i try it but is not working as i would >like > > to.... 
From ryanlists at gmail.com Tue Mar 27 09:37:52 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 27 Mar 2007 08:37:52 -0500
Subject: Re: [SciPy-user] plotting with scipy
Message-ID:

What platform are you running? There should be an executable for windows that puts them in the proper directory. They can go anywhere on your PYTHONPATH, but the right answer is in the directory C:\Python24\Lib\site-packages (or I presume Python25 if you are running the current version of python).

So, I have the folders
C:\Python24\Lib\site-packages\matplotlib
C:\Python24\Lib\site-packages\scipy
etc.

On linux, you usually want to unzip the tarball and there should be a setup.py file in the top level. Then just run
sudo python setup.py install
(or similar if you aren't on a debian based "sudo" distribution).

Let me know if you still have trouble.

Ryan
From nowar00 at hotmail.com Tue Mar 27 09:48:01 2007
From: nowar00 at hotmail.com (javi markez bigara)
Date: Tue, 27 Mar 2007 13:48:01 +0000
Subject: Re: [SciPy-user] plotting with scipy
Message-ID:

this is the python i am using, i download the mathplotlib but i think is not working i just save it in the Desktop.. and this is the forder i get: matplotlib-0.90.0

Python 2.3.5 (#2, Sep 4 2005, 22:01:42)
[GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2
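For what it's worth, a quick way to check which interpreter is running and whether a freshly installed package is actually visible to it (purely illustrative, not from the original exchange; any package name can stand in for matplotlib here):

import sys

print(sys.version)        # which Python is running
print(sys.prefix)         # where it lives; site-packages sits under here
for p in sys.path:
    print(p)              # the directories searched for packages

import matplotlib         # raises ImportError if the install is not on sys.path
print(matplotlib.__file__)      # where it was picked up from
print(matplotlib.__version__)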
From nowar00 at hotmail.com Tue Mar 27 09:54:57 2007
From: nowar00 at hotmail.com (javi markez bigara)
Date: Tue, 27 Mar 2007 13:54:57 +0000
Subject: Re: [SciPy-user] plotting with scipy
Message-ID:

this is where the folder is
/home2/diazdeim/matplotlib-0.90.0
i want to run the setup from the terminal but i dont know how to do it
thaks for helping
From ryanlists at gmail.com Tue Mar 27 09:57:18 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 27 Mar 2007 08:57:18 -0500
Subject: Re: [SciPy-user] plotting with scipy
Message-ID:

I am not running Debian and am not on a linux box right now. I can be in a few minutes. I think we should move this to the matplotlib users list. Someone there can help you install on Debian.

Ryan
From v-nijs at kellogg.northwestern.edu Tue Mar 27 09:58:15 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Tue, 27 Mar 2007 08:58:15 -0500
Subject: Re: [SciPy-user] Scipy and Multiple Regression ??
In-Reply-To: <4608C8A8.5060703@rub.de>
Message-ID:

Alex,
If you just want OLS and would like a bunch of fit-statistics to come with it, you might also want to check out http://www.scipy.org/Cookbook/OLS

Best,
Vincent
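To make the scipy.linalg.lstsq() suggestion from this thread concrete, here is a minimal multiple-regression sketch with made-up data (two predictors plus an intercept); it is only an illustration, not code from the original posts. The same pattern extends to polynomial models by adding columns of powers of x.

import numpy as np
from scipy import linalg

n = 100
x1 = np.random.rand(n)
x2 = np.random.rand(n)
# Synthetic response: y = 1 + 2*x1 - 3*x2 plus a little noise.
y = 1.0 + 2.0 * x1 - 3.0 * x2 + 0.1 * np.random.randn(n)

# Design matrix: a column of ones for the intercept, then the predictors.
A = np.column_stack((np.ones(n), x1, x2))
coefs, resid, rank, sing = linalg.lstsq(A, y)
print(coefs)   # should come out close to [1.0, 2.0, -3.0]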
From fredmfp at gmail.com Tue Mar 27 10:06:30 2007
From: fredmfp at gmail.com (fred)
Date: Tue, 27 Mar 2007 16:06:30 +0200
Subject: Re: [SciPy-user] plotting with scipy
Message-ID: <460924E6.2070901@gmail.com>

Ryan Krauss a écrit :
> I am not running Debian and am not on a linux box right now. I can be
> in a few minutes. I think we should move this to the matplotlib users
> list. Someone there can help you install on Debian.
Sure :-)
Simply type apt-get install python-matplotlib.
You should be on your own.

--
http://scipy.org/FredericPetit

From pgmdevlist at gmail.com Tue Mar 27 10:06:38 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 27 Mar 2007 10:06:38 -0400
Subject: Re: [SciPy-user] plotting with scipy
Message-ID: <200703271006.38193.pgmdevlist@gmail.com>

> On 3/27/07, javi markez bigara wrote:
> > this is where the folder is
> > /home2/diazdeim/matplotlib-0.90.0
> > i want to run the setup from the terminal but i dont know how to do it
> > thaks for helping

Er,
cd /home2/diazdeim/matplotlib-0.90.0
python setup.py install

doesn't work for you ?

From nowar00 at hotmail.com Tue Mar 27 10:18:22 2007
From: nowar00 at hotmail.com (javi markez bigara)
Date: Tue, 27 Mar 2007 14:18:22 +0000
Subject: Re: [SciPy-user] plotting with scipy
In-Reply-To: <200703271006.38193.pgmdevlist@gmail.com>
Message-ID:

hey PIER thanks a lot now i think i have installed it
thanks a lot man

From zufus at zufus.org Tue Mar 27 16:40:17 2007
From: zufus at zufus.org (Marco Presi)
Date: Tue, 27 Mar 2007 21:40:17 +0100
Subject: Re: [SciPy-user] plotting with scipy
Message-ID: <1175028017.4093.0.camel@mafia.sssup.it>

On Tue, 27/03/2007 at 08.57 -0500, Ryan Krauss wrote:
> I am not running Debian and am not on a linux box right now. I can be
> in a few minutes. I think we should move this to the matplotlib users
> list. Someone there can help you install on Debian.

#aptitude install python-matplotlib

that's it.
After installing, run

$ipython -pylab

Regards

Marco
From emilia12 at mail.bg Tue Mar 27 17:28:08 2007
From: emilia12 at mail.bg (emilia12 at mail.bg)
Date: Wed, 28 Mar 2007 00:28:08 +0300
Subject: Re: [SciPy-user] [Cookbook] FiltFilt does not work
In-Reply-To: <20070326204851.GB15905@localhost.localdomain>
References: <1174932755.baa896d35359e@mail.bg> <20070326204851.GB15905@localhost.localdomain>
Message-ID: <1175030888.7f1e1d87f62d8@mail.bg>

hello Antoine,
you are right, the last 20-30 points are too far off. I agree that the picture does not match the result from the code. What a shame ... IMO this is not the scientific way to speak in the filter's praise ;-)
best regards,
e.

Quoting Antoine Sirinelli:
> I agree that the random part has to be different, but as you can see in
> the attached figure, the last points show a slightly big difference. The
> filtered signal goes away from the noisy signal.
> Antoine

From emin.shopper at gmail.com Tue Mar 27 17:30:08 2007
From: emin.shopper at gmail.com (Emin.shopper Martinian.shopper)
Date: Tue, 27 Mar 2007 17:30:08 -0400
Subject: [SciPy-user] fmin stopping on something not a local optimum
Message-ID: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com>

Dear Experts,

I am getting some strange behavior with scipy.optimize.fmin. It seems to "converge" to what it thinks is an optimal solution that is not even a local optimum. Consequently, I end up having to do things like

answers = [initialGuess]
for i in range(5):
    answers.append( scipy.optimize.fmin(func,x0=answers[-1]) )

In each iteration of the loop, fmin prints information saying it terminated successfully (i.e., it is not hitting the maxiter or maxfun constraints since I have set these very high), yet the "Current function value" keeps improving.

Should I be setting the parameters for fmin in a special way to tell it not to stop too early? I've tried fiddling with xtol and ftol without much success.

Is one supposed to call fmin repeatedly like this?

Thanks,
-Emin

From pgmdevlist at gmail.com Tue Mar 27 18:27:39 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 27 Mar 2007 18:27:39 -0400
Subject: [SciPy-user] Scipy examples
Message-ID: <200703271827.39154.pgmdevlist@gmail.com>

All,
Is there an equivalent of the Numpy_Example wiki page for scipy, listing functions per package ? If not, how difficult is it to get one ?
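One way to make the restarting described in the fmin question above explicit is to keep restarting from the last answer until the reported minimum stops improving, using full_output to get the function value back. A sketch with a made-up objective; everything below is illustrative rather than code from the original post:

import numpy as np
from scipy.optimize import fmin

def func(x):
    # Toy stand-in for the real objective (Rosenbrock's function).
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

x = np.array([-1.2, 1.0])
fbest = np.inf
for restart in range(10):
    x, fval, niter, ncalls, flag = fmin(func, x, xtol=1e-8, ftol=1e-8,
                                        maxiter=10000, maxfun=10000,
                                        full_output=True, disp=False)
    if fbest - fval < 1e-12:   # no meaningful improvement: stop restarting
        break
    fbest = fval
print(x)
print(fval)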
From robert.kern at gmail.com Tue Mar 27 18:30:10 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Mar 2007 17:30:10 -0500 Subject: [SciPy-user] Scipy examples In-Reply-To: <200703271827.39154.pgmdevlist@gmail.com> References: <200703271827.39154.pgmdevlist@gmail.com> Message-ID: <46099AF2.5060605@gmail.com> Pierre GM wrote: > All, > Is there an equivalent of the Numpy_Example wiki page for scipy, listing > functions per package ? If not, how difficult is it to get one ? Here is the start of such a page: http://www.scipy.org/scipy_Example_List -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Tue Mar 27 18:35:00 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 27 Mar 2007 18:35:00 -0400 Subject: [SciPy-user] Scipy examples In-Reply-To: <46099AF2.5060605@gmail.com> References: <200703271827.39154.pgmdevlist@gmail.com> <46099AF2.5060605@gmail.com> Message-ID: <200703271835.00984.pgmdevlist@gmail.com> On Tuesday 27 March 2007 18:30:10 Robert Kern wrote: > Pierre GM wrote: > > All, > > Is there an equivalent of the Numpy_Example wiki page for scipy, listing > > functions per package ? If not, how difficult is it to get one ? > > Here is the start of such a page: > > http://www.scipy.org/scipy_Example_List OK, that's a start. Thanks a lot! From akumar at iitm.ac.in Tue Mar 27 21:30:36 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Wed, 28 Mar 2007 07:00:36 +0530 Subject: [SciPy-user] A communications module Message-ID: <20070328013036.GA7095@localhost> Dear SciPy users, I have been using SciPy for quite some time now, although at a basic level. It seems to be very nice for doing the stuff, and it has been a long time since I have done anything much in GNU Octave, my other favourite. There is one thing that I have noticed in SciPy which makes it attractive for everyone, that is it's compatibility with Matlab. However, in this respect, the communications module seems to be missing. I was wondering whether we could put some snippets to do the basic stuff just the way Matlab does (PSK, QAM, AWGN channel etc.) and get started with a ``scipy.comm'' module. I wanted to ping the list because I wanted to know of anyone else shares this view, and I would love to contribute code to such a module, but I would like to wait for comments, to get to know the "right way" to start this off. I mean, identifying the first things to write, how to modularize the stuff etc. Thanks. Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From david.huard at gmail.com Wed Mar 28 09:26:50 2007 From: david.huard at gmail.com (David Huard) Date: Wed, 28 Mar 2007 09:26:50 -0400 Subject: [SciPy-user] fmin stopping on something not a local optimum In-Reply-To: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com> References: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com> Message-ID: <91cf711d0703280626n11519ce5re26fa83d783ba928@mail.gmail.com> Hi Emin, just a thought: fmin searches for a minimum, not an optimum in the general sense. Could that be the problem ? If not, the problem could indeed be with ftol being set too high. What kind of function are you optimizing ? Maybe try using brute force optimization just to get an idea of what's going on. 
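A minimal sketch of such a brute-force scan, in case it helps
(scipy.optimize.brute is assumed to be available in the installed version;
the objective and the search box below are made up purely for illustration):

#----------
from numpy import cos
from scipy import optimize

# Made-up objective with several shallow local minima.
def func(x):
    return cos(14.5 * x[0] - 0.3) + (x[1] + 0.2) * x[1] + (x[0] + 0.2) * x[0]

# brute evaluates func on a regular grid over the given box and, by
# default, polishes the best grid point with fmin (its 'finish' argument).
ranges = ((-1.0, 1.0), (-1.0, 1.0))
xmin = optimize.brute(func, ranges, Ns=25)
print xmin
#----------

Comparing the grid minimum with what fmin returns from the original
starting point is often enough to tell whether fmin is stuck in a shallow
local minimum or simply stopping too early.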
I'm using fmin extensively and the only times it gives me trouble, it ends up the source of the problem is sitting in front of the screen. David 2007/3/27, Emin.shopper Martinian.shopper : > > Dear Experts, > > I am getting some strange behavior with scipy.optimize.fmin. It seems to > "converge" to what it thinks is an optimal solution that is not even a local > optimum. Consequently, I end up having to do things like > > answers = [initialGuess] > for i in range(5): > answers.append( scipy.optimize.fmin(func,x0=answers[-1]) ) > > In each iteration of the loop, fmin prints information saying it > terminated successfully (i.e., it is not hitting the maxiter or maxfun > constraints since I have set these very high) yet the "Current function > value" keeps improving. > > Should I be setting the parameters for fmin in a special way to tell it > not to stop too early? I've tried fiddling with xtol and ftol without much > success. > > Is one supposed to call fmin repeatedly like this? > > Thanks, > -Emin > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfl26 at cam.ac.uk Wed Mar 28 10:00:48 2007 From: rfl26 at cam.ac.uk (Fred Ludlow) Date: Wed, 28 Mar 2007 15:00:48 +0100 Subject: [SciPy-user] fmin stopping on something not a local optimum In-Reply-To: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com> References: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com> Message-ID: <460A7510.4010601@cam.ac.uk> Emin.shopper Martinian.shopper wrote: > Dear Experts, > > I am getting some strange behavior with scipy.optimize.fmin. It seems to > "converge" to what it thinks is an optimal solution that is not even a > local optimum. Consequently, I end up having to do things like > > answers = [initialGuess] > for i in range(5): > answers.append( scipy.optimize.fmin(func,x0=answers[-1]) ) > > In each iteration of the loop, fmin prints information saying it > terminated successfully (i.e., it is not hitting the maxiter or maxfun > constraints since I have set these very high) yet the "Current function > value" keeps improving. > > Should I be setting the parameters for fmin in a special way to tell it > not to stop too early? I've tried fiddling with xtol and ftol without > much success. > > Is one supposed to call fmin repeatedly like this? Hi Emin, I don't think that's how it's supposed to work. This is a bit of a guess, and I'm not an expert, but I'm sure there'll be someone helpful reading this to correct my mistakes... :) What kind of function are you using as "func"? Looking at the fmin source, the first points it evaluates are the x0 you supply, then it varies each parameter in turn to +5% of it's current value (or just 0.05 if it starts at zero), to generate the intial simplex. So, if you're objective function is not very smooth (for example if it's generated by another iterative procedure with fairly coarse convergence requirements) it might immediately find itself in a very local minimum that's an artifact of an imprecise objective function. On the other hand, if your function is arithmetic and you don't lose much precision in it I don't suppose this is relevant. 
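In the same spirit, a small sketch of how fmin's tolerances and diagnostics
can be inspected directly (the test function below is made up; with
full_output=1, fmin is assumed to return the minimizer, the final function
value, the iteration and evaluation counts, and a warning flag):

#----------
from scipy import optimize

# Made-up smooth test objective; substitute the real one.
def func(x):
    return (x[0] - 1.0)**2 + 100.0 * (x[1] - x[0]**2)**2

x0 = [0.0, 0.0]
xopt, fopt, iters, funcalls, warnflag = optimize.fmin(
    func, x0, xtol=1e-8, ftol=1e-8, maxiter=10000, maxfun=10000,
    full_output=1, disp=1)

# warnflag is nonzero if fmin stopped on maxiter/maxfun rather than
# because the simplex satisfied xtol and ftol.
print xopt, fopt, iters, funcalls, warnflag
#----------

If warnflag is zero but restarting from xopt still improves fopt by more
than ftol, the objective is probably noisier than the requested tolerances.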
Fred

From giorgio.luciano at chimica.unige.it Wed Mar 28 10:58:42 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Wed, 28 Mar 2007 16:58:42 +0200
Subject: [SciPy-user] matlab, idle, interactivity and teaching
Message-ID: <460A82A2.90602@chimica.unige.it>

Hello to all,
There was a thread on this list some time ago about matlab and teaching.
I discovered Python recently and translated part of the routines I use
into Python (www.chemometrics.it). Some of my colleagues asked me if I
could show them how to use Python. For matlab users I guess the first
problem is to set everything up, but I fixed that by preparing a directory
with all the packages I need and a matplotlibrc file for interactive mode,
plus a shortcut for "idle -n".
The second problem is that people now want some of matlab's bells and
whistles, which I have to admit can sometimes save a lot of time. The
bells and whistles are about the workspace. It's difficult to cut and
paste from gnumeric/excel (I generally use txt files, but that's not as
immediate) and there is also no "visual" workspace. I also did not manage
to save a workspace (I know there is a function like io.savemat, but I
couldn't easily work out how to use it).
To get around these problems I've tried QME-DEV, which is in an early
stage of development (alpha) but looks promising.
What people like about python/matplotlib/scipy:
- it's free ;)
- they like the plotting style and capabilities a lot (they find the png
and svg files very clear and accurate)
- they like IDLE as an editor (hey, it has the same colors as matlab ;) !)
So my questions are: Do you have a similar experience? How do you help
people take the first steps? Do you use (and does there exist) a more
friendly environment than IDLE, apart from QME-DEV?
These questions may look silly, but in my opinion how user-friendly a
piece of software is matters a great deal for winning new users.
Cheers to all
Giorgio

From peridot.faceted at gmail.com Wed Mar 28 12:00:41 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 28 Mar 2007 12:00:41 -0400
Subject: [SciPy-user] fmin stopping on something not a local optimum
In-Reply-To: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com>
References: <32e43bb70703271430u5d98d411y86eda0776395607d@mail.gmail.com>
Message-ID: 

On 27/03/07, Emin.shopper Martinian.shopper wrote:
> Dear Experts,

Well, this isn't addressed to me, but I do use fmin; my objective
function is obtained by numerical integration, but the only real problem
I've had was when it started returning NaNs (numerical overflows in
scipy.stats.chi2.pdf; is there no log-pdf function?).

> I am getting some strange behavior with scipy.optimize.fmin. It seems to
> "converge" to what it thinks is an optimal solution that is not even a
> local optimum. Consequently, I end up having to do things like
>
> answers = [initialGuess]
> for i in range(5):
>     answers.append( scipy.optimize.fmin(func,x0=answers[-1]) )
>
> In each iteration of the loop, fmin prints information saying it
> terminated successfully (i.e., it is not hitting the maxiter or maxfun
> constraints since I have set these very high) yet the "Current function
> value" keeps improving.
>
> Should I be setting the parameters for fmin in a special way to tell it
> not to stop too early? I've tried fiddling with xtol and ftol without
> much success.
>
> Is one supposed to call fmin repeatedly like this?

No. It does have a procedure for deciding when to stop, which can be
fooled.

When I was getting NaNs, it signalled error by running to completion and returning my input guesses, unimproved; the error handling could be better. ftol is an absolute tolerance - is your function actually changing on the scale of ftol? are your x values varying on the scale of xtol? does your goal function produce values more accurate than ftol? (I know you said you'd fiddled with them.) You might look at the code - IIRC it's not some unintelligible FORTRAN code, it's just python. Anne From fccoelho at gmail.com Wed Mar 28 13:57:06 2007 From: fccoelho at gmail.com (Flavio Coelho) Date: Wed, 28 Mar 2007 14:57:06 -0300 Subject: [SciPy-user] returning an array from weave inline Message-ID: I get a compilation error when I try to return an array from weave.inline here is my test code ( a simple matrix multplication) from scipy import weave from scipy.weave import converters def Dot(a1,a2): """ multiplica??o de matrizes em C """ #print a1.shape,a2.shape d1,d2,d3 = a1.shape[0],a2.shape[1], a1.shape[1] a3 = zeros((d1,d2)) code = """ { int i, j, k; for( i = 0; i < d1; i++) for( j = 0; j < d2; j++) for( k = 0; k < d3; k++) a3(i,j) += a1(i,k)*a2(k,j); } return_val = a3; """ return weave.inline (code,['a1','a2','a3','d1','d2','d3'],type_converters=converters.blitz ,compiler='gcc') this is the compilation error I get:: /home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)': /home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.cpp:738: error: no match for 'operator=' in 'return_val = a3' /usr/lib/python2.4/site-packages/scipy/weave/scxx/object.h:179: note: candidates are: py::object& py::object::operator=(const py::object&) /home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)': /home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.cpp:738: error: no match for 'operator=' in 'return_val = a3' /usr/lib/python2.4/site-packages/scipy/weave/scxx/object.h:179: note: candidates are: py::object& py::object::operator=(const py::object&) Traceback (most recent call last): File "DengueRJdin_01.py", line 314, in ? 
mainMeld(K,L) File "DengueRJdin_01.py", line 221, in mainMeld phi,q1theta = Rep(K, tmax,**para) File "DengueRJdin_01.py", line 22, in Rep res,inc = simEpid(tmax,q1theta[0][i],q1theta[1][i],q1theta[2][i],**args) File "DengueRJdin_01.py", line 143, in simEpid yant[:,j] = Dot(D,l) File "DengueRJdin_01.py", line 120, in Dot return weave.inline (code,['a1','a2','a3','d1','d2','d3'],type_converters=converters.blitz ,compiler='gcc') File "/usr/lib/python2.4/site-packages/scipy/weave/inline_tools.py", line 338, in inline auto_downcast = auto_downcast, File "/usr/lib/python2.4/site-packages/scipy/weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/usr/lib/python2.4/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/usr/lib/python2.4/site-packages/scipy/weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/usr/lib/python2.4/site-packages/numpy-1.0.1-py2.4-linux-i686.egg/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 166, in setup raise SystemExit, "error: " + str(msg) scipy.weave.build_tools.CompileError: error: Command "i686-pc-linux-gnu-g++ -pthread -fno-strict-aliasing -DNDEBUG -fPIC -I/usr/lib/python2.4/site-packages/scipy/weave -I/usr/lib/python2.4/site-packages/scipy/weave/scxx -I/usr/lib/python2.4/site-packages/scipy/weave/blitz -I/usr/lib/python2.4/site-packages/numpy-1.0.1-py2.4-linux-i686.egg/numpy/core/include -I/usr/include/python2.4 -c /home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.cpp -o /tmp/flavio/python24_intermediate/compiler_060e323e3e036dd7c46bf6c968ae89ac/home/flavio/.python24_compiled/sc_80afed445639e2098f517fccd0b436f93.o" failed with exit status 1 any help will be greatly appreciated. -- Fl?vio Code?o Coelho registered Linux user # 386432 get counted at http://counter.li.org --------------------------- "software gets slower faster than hardware gets faster" Niklaus Wirth's law -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Mar 28 14:05:45 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Mar 2007 12:05:45 -0600 Subject: [SciPy-user] returning an array from weave inline In-Reply-To: References: Message-ID: On 3/28/07, Flavio Coelho wrote: > I get a compilation error when I try to return an array from weave.inline > > here is my test code ( a simple matrix multplication) > > from scipy import weave > from scipy.weave import converters > > def Dot(a1,a2): > """ > multiplica??o de matrizes em C > """ > #print a1.shape,a2.shape > d1,d2,d3 = a1.shape[0],a2.shape[1], a1.shape[1] > a3 = zeros((d1,d2)) > code = """ > { > int i, j, k; > for( i = 0; i < d1; i++) > for( j = 0; j < d2; j++) > for( k = 0; k < d3; k++) > a3(i,j) += a1(i,k)*a2(k,j); > } > return_val = a3; > """ > return > weave.inline(code,['a1','a2','a3','d1','d2','d3'],type_converters=converters.blitz,compiler='gcc') Don't return anything. Your code has already filled in a3, so there's no need for you to return. 
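For completeness, a sketch of the corrected version along those lines,
reusing the code from the original post (the only changes assumed here are
dropping the return_val assignment and returning a3 from the Python side):

#----------
from numpy import zeros
from scipy import weave
from scipy.weave import converters

def Dot(a1, a2):
    """Matrix multiplication in C; a1 and a2 are assumed to be float arrays."""
    d1, d2, d3 = a1.shape[0], a2.shape[1], a1.shape[1]
    a3 = zeros((d1, d2))
    code = """
    int i, j, k;
    for (i = 0; i < d1; i++)
        for (j = 0; j < d2; j++)
            for (k = 0; k < d3; k++)
                a3(i,j) += a1(i,k) * a2(k,j);
    """
    weave.inline(code, ['a1', 'a2', 'a3', 'd1', 'd2', 'd3'],
                 type_converters=converters.blitz, compiler='gcc')
    return a3    # filled in place by the C code
#----------

For production code numpy.dot does the same job without the compilation
step, but the pattern above generalizes to loops that have no ready-made
vectorized equivalent.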
Cheers, f From s.mientki at ru.nl Wed Mar 28 15:17:18 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 28 Mar 2007 21:17:18 +0200 Subject: [SciPy-user] matlab, idle, interactivity and teaching In-Reply-To: <460A82A2.90602@chimica.unige.it> References: <460A82A2.90602@chimica.unige.it> Message-ID: <460ABF3E.7050809@ru.nl> hello Giorgio, Giorgio Luciano wrote: > Hello to all, > I've thread that apperead some time ago on this list about matlab and > teaching. > I'm not directly involved in teaching, but my advise was asked if Labview would be a good replacement better alternative for MatLab, in education of medical students and for medical research. When I discovered that Labview was certainly not a good replacement, I first tried SciLab, but soon discovered Python, which with some tools, looks very promising. > I've discovered python recently and translated part of the routine I > use in python (www.chemometrics.it). > Some of my collegue asked me if I could show them how to use python. For > matlab user I guess the first problem is to setup everything, but I just > fixed it preparing a directory with all the package I need and a > matplotlibrc file for interactive mode + a shortcut for idle -n use. > Which operating system are you using ? Did you use Enthought edition ? On windows, with just a few clicks, I had installed Enthought edition and PyScripter, and could start working, almost exactly as I was used in MatLab. > The second problem is that people now wants some bells and whistles of > matlab that I have to admit sometime can be very helpful for saving > time. The bells and whistles are about the workspace. > It's difficult to cut and paste from gnumeric/excel (I generally use txt > file but it's no so immediate) and also there is no "visual" workspace. > I cannot succeed also in saving workspace (I know there is a function so > iosave.mat but I didn't manage easily hot to use it) > >From my experience, MatLab workspace is somewhat better, but still very limited, if you've large data sets. In the past I've several dedicated workspaces for MatLab to overcome these limitations. Pyscripter gives you almost the same workspace as MatLab, and I wouldn't be surprised if you ask the creater of PyScripter, to make something like the MatLab workspace, he'll do it. > For overpass this problems I've tried to use QME-DEV which is in early > stage of development (alpha) but promise well. > What people like of python/matplot/scipy > -its free ;) > -they like a lot the plotting style and capabilities (they find the png > and svg file very clear and accurate) > -they like IDLE as editor (ehy it's has the same color of matlab ;) ! ) > and probably can print in color, which Matlab can't ;-) > So my question is . Do you have a similar experience ? > How do you help people in moving the first step ? > do you use (and also does it exist) a more friendly environment than > IDLE except from QME-DEV. > Yes, I think we've similar experiences here, and I even go one step further, the MatLab workspace is also too limited. I've looked at QME-DEV, and although it looks very promising, > I know that this question may look silly, but in my opinion also how > much is user friendly a software is very important for getting new users. 
> Cheers to all > Giorgio > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- cheers, Stef Mientki http://pic.flappie.nl From fccoelho at gmail.com Wed Mar 28 18:18:49 2007 From: fccoelho at gmail.com (Flavio Coelho) Date: Wed, 28 Mar 2007 19:18:49 -0300 Subject: [SciPy-user] returning an array from weave inline In-Reply-To: References: Message-ID: Thanks Fernando, I thought variables converted to C were copies, so that there could not be an in place operation on a Python variable. Fl?vio On 3/28/07, Fernando Perez wrote: > > On 3/28/07, Flavio Coelho wrote: > > I get a compilation error when I try to return an array from > weave.inline > > > > here is my test code ( a simple matrix multplication) > > > > from scipy import weave > > from scipy.weave import converters > > > > def Dot(a1,a2): > > """ > > multiplica??o de matrizes em C > > """ > > #print a1.shape,a2.shape > > d1,d2,d3 = a1.shape[0],a2.shape[1], a1.shape[1] > > a3 = zeros((d1,d2)) > > code = """ > > { > > int i, j, k; > > for( i = 0; i < d1; i++) > > for( j = 0; j < d2; j++) > > for( k = 0; k < d3; k++) > > a3(i,j) += a1(i,k)*a2(k,j); > > } > > return_val = a3; > > """ > > return > > weave.inline(code,['a1','a2','a3','d1','d2','d3'],type_converters= > converters.blitz,compiler='gcc') > > Don't return anything. Your code has already filled in a3, so there's > no need for you to return. > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Fl?vio Code?o Coelho registered Linux user # 386432 get counted at http://counter.li.org --------------------------- "software gets slower faster than hardware gets faster" Niklaus Wirth's law -------------- next part -------------- An HTML attachment was scrubbed... URL: From giorgio.luciano at chimica.unige.it Thu Mar 29 03:22:25 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Thu, 29 Mar 2007 09:22:25 +0200 Subject: [SciPy-user] matlab, idle, interactivity and teaching In-Reply-To: <460ABF3E.7050809@ru.nl> References: <460A82A2.90602@chimica.unige.it> <460ABF3E.7050809@ru.nl> Message-ID: <460B6931.8030109@chimica.unige.it> Thanks for your reply, I currently does not use Enthought (I'm waiting for a more uptodate package) but I surely give a try to Pyscripter and eventually conctact the author. Thanks for sharing Giorgio From olivetti at itc.it Thu Mar 29 03:45:24 2007 From: olivetti at itc.it (Emanuele Olivetti) Date: Thu, 29 Mar 2007 09:45:24 +0200 Subject: [SciPy-user] returning an array from weave inline In-Reply-To: References: Message-ID: <460B6E94.8010708@itc.it> Flavio Coelho wrote: > Thanks Fernando, > > I thought variables converted to C were copies, so that there could not be > an in place operation on a Python variable. They are copies like it is in the C way of thinking: if you pass a basic type (e.g. an int) to the inlined C code and modify it then when you are back to python there is no change in the initial variable you passed since just the copy was modified. If you pass something more complex (e.g. a numpy array) then you are just giving a copy of the it's reference (or pointer) to that function. With that reference (or copy of) you can access the the _original_ array and modify it. So when you are back to python your array _is_ changed. 
This mechanism allows better efficiency: what happens if you have a huge 2Gb array and pass it to the inline code? It's not desirable to make a whole copy and allocate other 2Gb or RAM... Anyway python itself works like that. E.g.: ---- def f(a,b): a=a+1 b.append(1) return a=1 b=[3,2] print a,b f(a,b) print a,b ---- Hope this helps, Emanuele ------------------ ITC -> dall'1 marzo 2007 Fondazione Bruno Kessler ITC -> since 1 March 2007 Fondazione Bruno Kessler ------------------ From raphael.langella at steria.com Thu Mar 29 08:03:45 2007 From: raphael.langella at steria.com (raphael langella) Date: Thu, 29 Mar 2007 14:03:45 +0200 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: <13a1ee25e.e25e13a1e@steria.com> I just realized I've got much more recent versions of libsunperf and C and fortran compilers. I've got Sun Studio 10 & 11 installed. So I tried building numpy with both of them and each time I get this error when importing numpy : >>> import numpy Traceback (most recent call last): File "", line 1, in File "/tmp/lib/python2.5/site-packages/numpy/__init__.py", line 40, in import linalg File "/tmp/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in from linalg import * File "/tmp/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in from numpy.linalg import lapack_lite ImportError: ld.so.1: python: fatal: relocation error: file /tmp/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: symbol __mt_MasterFunction_rtc_: referenced symbol not found I tried the CPPFLAGS='-DNO_APPEND_FORTRAN' option, but it didn't change anything. Any idea? -------------- next part -------------- _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From raphael.langella at steria.com Thu Mar 29 08:09:20 2007 From: raphael.langella at steria.com (raphael langella) Date: Thu, 29 Mar 2007 14:09:20 +0200 Subject: [SciPy-user] building numpy/scipy on Solaris Message-ID: I just realized I've got much more recent versions of libsunperf and C and fortran compilers. I've got Sun Studio 10 & 11 installed. So I tried building numpy with both of them and each time I get this error when importing numpy : >>> import numpy Traceback (most recent call last): File "", line 1, in File "/tmp/lib/python2.5/site-packages/numpy/__init__.py", line 40, in import linalg File "/tmp/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in from linalg import * File "/tmp/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in from numpy.linalg import lapack_lite ImportError: ld.so.1: python: fatal: relocation error: file /tmp/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: symbol __mt_MasterFunction_rtc_: referenced symbol not found I tried the CPPFLAGS='-DNO_APPEND_FORTRAN' option, but it didn't change anything. Any idea? ---- Messages d?origine ---- De: raphael langella Date: mardi, mars 20, 2007 12:08 pm Objet: [SciPy-user] building numpy/scipy on Solaris > I'm trying to build numpy and scipy on Solaris 8. > The BLAS FAQ on netlib.org suggests using optimized BLAS librairies > provided by computer vendor, like the SUN Performance Library. This > library is supposed to provide enhanced and optimized version of BLAS > and LAPACK. I happen to have Forte 7 installed, so I first tried to > build against this library (libsunperf.a). > I tried several versions and different compilation options, but I > alwaysget undefined symbols. Is this supported? 
As anyone ever > succeeded in > using this library to compile scipy and numpy? > I also did some unsuccessful tests with Atlas. But, I think I'm now > veryclose to have it working with the standard BLAS/LAPACK (from > netlib.org). I have to use GNU ld (unresolved symbols when using Sun's > ld). Numpy builds and tests without any error. I've got a few > errors and > failures with scipy (see attachment for the complete test report). Are > these errors critical? I don't understand where they come from and how > to correct them. > Thanks for your help. > > Rapha?l Langella > -------------- next part -------------- _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From cookedm at physics.mcmaster.ca Thu Mar 29 11:13:16 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 29 Mar 2007 11:13:16 -0400 Subject: [SciPy-user] building numpy/scipy on Solaris In-Reply-To: <13a1ee25e.e25e13a1e@steria.com> References: <13a1ee25e.e25e13a1e@steria.com> Message-ID: <460BD78C.8090800@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 raphael langella wrote: > I just realized I've got much more recent versions of libsunperf and C > and fortran compilers. I've got Sun Studio 10 & 11 installed. So I tried > building numpy with both of them and each time I get this error when > importing numpy : > >>>> import numpy > Traceback (most recent call last): > File "", line 1, in > File "/tmp/lib/python2.5/site-packages/numpy/__init__.py", line 40, in > > import linalg > File "/tmp/lib/python2.5/site-packages/numpy/linalg/__init__.py", line > 4, in > from linalg import * > File "/tmp/lib/python2.5/site-packages/numpy/linalg/linalg.py", line > 25, in > from numpy.linalg import lapack_lite > ImportError: ld.so.1: python: fatal: relocation error: file > /tmp/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: symbol > __mt_MasterFunction_rtc_: referenced symbol not found > > I tried the CPPFLAGS='-DNO_APPEND_FORTRAN' option, but it didn't change > anything. Any idea? Some library is not getting linked in. Best I can come up with is its something to do with threads (mt == multithread? rtc == real-time clock?). Maybe add '-mt' to the CFLAGS? That's equivalent to - -D_REENTRANT -lthread. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGC9eM+kNzddXW8YwRAtonAKC1vK7o0APKkXlpPgQSH57AlZhAFgCg5eW1 2GTO3w6H6gxP6y80okigldc= =Rab3 -----END PGP SIGNATURE----- From david.huard at gmail.com Thu Mar 29 21:07:54 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 29 Mar 2007 20:07:54 -0500 Subject: [SciPy-user] Numpy distutils failing to report ndarrayobject.h In-Reply-To: <46040D99.6020509@ee.byu.edu> References: <4603FF06.2090206@cse.ucsc.edu> <46040D99.6020509@ee.byu.edu> Message-ID: <91cf711d0703291807w35e8730do343842413a38edc0@mail.gmail.com> I checked and it's rather that ndarrayobject.h simply isn't in any folder. If I remember correctly, on this installation numpy came installed with enthought binaries (python24). Installing the numpy binaries from scipy.orgsolves the problem. 
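For anyone who hits the same missing-header error when building an
extension with a hand-written setup.py rather than numpy.distutils, a
common workaround is to pass NumPy's include directory explicitly. A
minimal sketch, with a made-up module name and source file, assuming
numpy.get_include() is available in the installed NumPy:

#----------
# setup.py -- hypothetical extension that includes numpy/ndarrayobject.h
from distutils.core import setup, Extension
import numpy

setup(name='example',
      ext_modules=[Extension('example', ['example.c'],
                             include_dirs=[numpy.get_include()])])
#----------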
David 2007/3/23, Travis Oliphant : > > Anand Patil wrote: > > >Hi all, > > > >The following in a Pyrex extension module: > > > >cdef extern from "numpy/ndarrayobject.h": > > void* PyArray_DATA(object obj) > > > >produces the following in C: > > > >#include "numpy/ndarrayobject.h". > > > >The numpy distutils apparently fail to compile the extension module > >under win2k, Python 2.4: > > > >PyMC2/PyrexLazyFunction.c:15:33: numpy/ndarrayobject.h: No such file or > directory > >PyMC2/PyrexLazyFunction.c: In function > >`__pyx_f_17PyrexLazyFunction_12LazyFunction_get_array_data': > > > > > >but it compiles fine on my machine, running OSX 10.4 and Python 2.5. > >Unfortunately I can't experiment much because I don't have a Windows > >machine available. Can anyone help me out with this? > > > > > > The include directory is not in the list of directories to search for > the header files and so it is not being found. I'm not sure what would > cause this. It's possible that the windows system doesn't have NumPy > installed (or else the headers are not installed). Or, it's possible > that numpy.distutils is not adding the NumPy directory to the list of > directories to search for header files. > > I'm not sure why it's not adding the location of the NumPy headers to > the compile line. > > -Travis > > > > >Thanks, > >Anand > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hai at lpt.rwth-aachen.de Fri Mar 30 10:05:38 2007 From: hai at lpt.rwth-aachen.de (Ri Hai) Date: Fri, 30 Mar 2007 16:05:38 +0200 Subject: [SciPy-user] how can I get fsolve() work?! Message-ID: <1DA101C8C553404E8367810F73E3926A73E919@squirrel.lpt.rwth-aachen.de> Hell folks, When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: #----------- from scipy.optimize import fsolve def func2(x): out = [x[0]+2.*x[1]+2.*x[2]-1.] out.append(x[0]+x[2] - 2.*x[1]) out.append(x[3]+x[4]-1.) out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) return out x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) #---------------- The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don't know how to solve this problem. I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. Has someone the same problem? Can somebody help me? Please... Hai -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Fri Mar 30 10:25:09 2007 From: hasslerjc at comcast.net (John Hassler) Date: Fri, 30 Mar 2007 10:25:09 -0400 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E919@squirrel.lpt.rwth-aachen.de> References: <1DA101C8C553404E8367810F73E3926A73E919@squirrel.lpt.rwth-aachen.de> Message-ID: <460D1DC5.80305@comcast.net> An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Mar 30 10:25:29 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 30 Mar 2007 16:25:29 +0200 Subject: [SciPy-user] how can I get fsolve() work?! 
In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E919@squirrel.lpt.rwth-aachen.de> References: <1DA101C8C553404E8367810F73E3926A73E919@squirrel.lpt.rwth-aachen.de> Message-ID: <460D1DD9.101@iam.uni-stuttgart.de> Ri Hai wrote: > > Hell folks, > > When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: > > #----------- > from scipy.optimize import fsolve > > > def func2(x): > out = [x[0]+2.*x[1]+2.*x[2]-1.] > out.append(x[0]+x[2] - 2.*x[1]) > out.append(x[3]+x[4]-1.) > out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) > out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) > #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) > return out > > x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) > > #---------------- > / / > The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don?t know how to solve this problem. > > I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. > > Has someone the same problem? Can somebody help me? Please? > > Hai > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > No problem here. >>> numpy.__version__ '1.0.2.dev3617' >>> scipy.__version__ '0.5.3.dev2892' >>> x02 array([ 0.125 , 0.1875, 0.25 , 0.5 , 0.5 ]) >>> func2(x02) [0.0, 0.0, 0.0, 3.4924590996965321e-11, -1.7462309376270468e-11] What do you mean by crash ? Nils From hai at lpt.rwth-aachen.de Fri Mar 30 10:48:05 2007 From: hai at lpt.rwth-aachen.de (Ri Hai) Date: Fri, 30 Mar 2007 16:48:05 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <460D1DD9.101@iam.uni-stuttgart.de> Message-ID: <1DA101C8C553404E8367810F73E3926A73E91A@squirrel.lpt.rwth-aachen.de> -----Urspr?ngliche Nachricht----- Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von Nils Wagner Gesendet: Freitag, 30. M?rz 2007 16:25 An: SciPy Users List Betreff: Re: [SciPy-user] how can I get fsolve() work?! Ri Hai wrote: > > Hell folks, > > When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: > > #----------- > from scipy.optimize import fsolve > > > def func2(x): > out = [x[0]+2.*x[1]+2.*x[2]-1.] > out.append(x[0]+x[2] - 2.*x[1]) > out.append(x[3]+x[4]-1.) > out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) > out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) > #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) > return out > > x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) > > #---------------- > / / > The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don't know how to solve this problem. > > I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. > > Has someone the same problem? Can somebody help me? Please... > > Hai > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > No problem here. >>> numpy.__version__ '1.0.2.dev3617' >>> scipy.__version__ '0.5.3.dev2892' >>> x02 array([ 0.125 , 0.1875, 0.25 , 0.5 , 0.5 ]) >>> func2(x02) [0.0, 0.0, 0.0, 3.4924590996965321e-11, -1.7462309376270468e-11] What do you mean by crash ? 
Nils --------------------------------- Hello Nils, Using fsolve, I get the following error message (in German and I have translated it): "pythonw.exe ascertained a problem and python will be closed now." So I could not even know what is wrong. I have tried with your commands. You can see the information below. >>> numpy.__version__ '1.0.1' >>> scipy.__version__ '0.5.2' Is there anything wrong with the version I have taken? Hai From nwagner at iam.uni-stuttgart.de Fri Mar 30 10:55:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 30 Mar 2007 16:55:20 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E91A@squirrel.lpt.rwth-aachen.de> References: <1DA101C8C553404E8367810F73E3926A73E91A@squirrel.lpt.rwth-aachen.de> Message-ID: <460D24D8.8020706@iam.uni-stuttgart.de> Ri Hai wrote: > > -----Urspr?ngliche Nachricht----- > Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von Nils Wagner > Gesendet: Freitag, 30. M?rz 2007 16:25 > An: SciPy Users List > Betreff: Re: [SciPy-user] how can I get fsolve() work?! > > Ri Hai wrote: > >> Hell folks, >> >> When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: >> >> #----------- >> from scipy.optimize import fsolve >> >> >> def func2(x): >> out = [x[0]+2.*x[1]+2.*x[2]-1.] >> out.append(x[0]+x[2] - 2.*x[1]) >> out.append(x[3]+x[4]-1.) >> out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) >> out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) >> #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) >> return out >> >> x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) >> >> #---------------- >> / / >> The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don't know how to solve this problem. >> >> I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. >> >> Has someone the same problem? Can somebody help me? Please... >> >> Hai >> >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > No problem here. > >>>> numpy.__version__ >>>> > '1.0.2.dev3617' > >>>> scipy.__version__ >>>> > '0.5.3.dev2892' > > >>>> x02 >>>> > array([ 0.125 , 0.1875, 0.25 , 0.5 , 0.5 ]) > >>>> func2(x02) >>>> > [0.0, 0.0, 0.0, 3.4924590996965321e-11, -1.7462309376270468e-11] > > What do you mean by crash ? > > > Nils > > --------------------------------- > Hello Nils, > > > Using fsolve, I get the following error message (in German and I have translated it): > > "pythonw.exe ascertained a problem and python will be closed now." > > So I could not even know what is wrong. > > I have tried with your commands. You can see the information below. > > >>>> numpy.__version__ >>>> > '1.0.1' > >>>> scipy.__version__ >>>> > '0.5.2' > > Is there anything wrong with the version I have taken? > > I don't think so. It seems to be a Windows specific problem. Nils > Hai > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hai at lpt.rwth-aachen.de Fri Mar 30 11:14:07 2007 From: hai at lpt.rwth-aachen.de (Ri Hai) Date: Fri, 30 Mar 2007 17:14:07 +0200 Subject: [SciPy-user] how can I get fsolve() work?! 
In-Reply-To: <460D1DC5.80305@comcast.net> Message-ID: <1DA101C8C553404E8367810F73E3926A73E91B@squirrel.lpt.rwth-aachen.de> Hallo John, I think there must be something wrong with my installation, but I don't know where. Please read the following message. >>> scipy.test() Warning: FAILURE importing tests for C:\Python25\lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) Found 4 tests for scipy.io.array_import Found 1 tests for scipy.cluster.vq Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 98 tests for scipy.stats.stats Found 53 tests for scipy.linalg.decomp Found 3 tests for scipy.integrate.quadrature Found 96 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 6 tests for scipy.interpolate Found 12 tests for scipy.io.mmio Found 10 tests for scipy.stats.morestats Found 4 tests for scipy.linalg.lapack Found 4 tests for scipy.io.recaster Warning: FAILURE importing tests for C:\Python25\lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) Found 4 tests for scipy.optimize.zeros Found 6 tests for scipy.interpolate.fitpack Found 28 tests for scipy.io.mio Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack Found 18 tests for scipy.fftpack.basic Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 1 tests for scipy.integrate Found 14 tests for scipy.linalg.blas Found 70 tests for scipy.stats.distributions Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. ........................................................................................................................................................................................................................................................................................................................................................................................, at this point, python was closed. Do you know why? Hai -----Urspr?ngliche Nachricht----- Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von John Hassler Gesendet: Freitag, 30. M?rz 2007 16:25 An: SciPy Users List Betreff: Re: [SciPy-user] how can I get fsolve() work?! I copy-pasted from your post, and it still works for me. Does your scipy.test() work without errors? john Ri Hai wrote: Hell folks, When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: #----------- from scipy.optimize import fsolve def func2(x): out = [x[0]+2.*x[1]+2.*x[2]-1.] out.append(x[0]+x[2] - 2.*x[1]) out.append(x[3]+x[4]-1.) out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) return out x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) #---------------- The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don't know how to solve this problem. I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. 
Has someone the same problem? Can somebody help me? Please... Hai ________________________________ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user ________________________________ No virus found in this incoming message. Checked by AVG Free Edition. Version: 7.5.446 / Virus Database: 268.18.22/739 - Release Date: 3/29/2007 1:36 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Fri Mar 30 11:25:14 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 30 Mar 2007 17:25:14 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <460D24D8.8020706@iam.uni-stuttgart.de> References: <1DA101C8C553404E8367810F73E3926A73E91A@squirrel.lpt.rwth-aachen.de> <460D24D8.8020706@iam.uni-stuttgart.de> Message-ID: <460D2BDA.4050201@gmx.net> Nils Wagner wrote: > Ri Hai wrote: >> -----Urspr?ngliche Nachricht----- >> Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von Nils Wagner >> Gesendet: Freitag, 30. M?rz 2007 16:25 >> An: SciPy Users List >> Betreff: Re: [SciPy-user] how can I get fsolve() work?! >> >> Ri Hai wrote: >> >>> Hell folks, >>> >>> When I use fsolve, python crashes! Now I am testing fsolve with the code from the mailing list: >>> >>> #----------- >>> from scipy.optimize import fsolve >>> >>> >>> def func2(x): >>> out = [x[0]+2.*x[1]+2.*x[2]-1.] >>> out.append(x[0]+x[2] - 2.*x[1]) >>> out.append(x[3]+x[4]-1.) >>> out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) >>> out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) >>> #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) >>> return out >>> >>> x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) >>> >>> #---------------- >>> / / >>> The code is taken form the mail from john in July 2006 and he said it did work. However my python crashed again! I really don't know how to solve this problem. >>> >>> I use python 2.5, scipy 0.5.2, numpy 1.01 and windows XP. >>> >>> Has someone the same problem? Can somebody help me? Please... >>> >>> Hai >>> >>> >>> >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> No problem here. >> >>>>> numpy.__version__ >>>>> >> '1.0.2.dev3617' >> >>>>> scipy.__version__ >>>>> >> '0.5.3.dev2892' >> >> >>>>> x02 >>>>> >> array([ 0.125 , 0.1875, 0.25 , 0.5 , 0.5 ]) >> >>>>> func2(x02) >>>>> >> [0.0, 0.0, 0.0, 3.4924590996965321e-11, -1.7462309376270468e-11] >> >> What do you mean by crash ? >> >> >> Nils >> >> --------------------------------- >> Hello Nils, >> >> >> Using fsolve, I get the following error message (in German and I have translated it): >> >> "pythonw.exe ascertained a problem and python will be closed now." >> >> So I could not even know what is wrong. >> >> I have tried with your commands. You can see the information below. >> >> >>>>> numpy.__version__ >>>>> >> '1.0.1' >> >>>>> scipy.__version__ >>>>> >> '0.5.2' >> >> Is there anything wrong with the version I have taken? >> >> > I don't think so. It seems to be a Windows specific problem. > > Nils > This may be the case. Some days ago I had to convert some code to work also under Windows (Python 2.5, scipy 0.5.2, numpy 1.0.1). 
The code "crashed" with a message box saying the same thing as on Ri's machine (although it was "python.exe", not "pythonw.exe" I think). Part of the code uses scipy's odeint. Clicking on "Details" I found that the crash occured when Python tried to load _odepack.pyd (same as _odepack.so on *nix). Just for the record, with Python 2.4.4 under Win it gave me some DLL load error. Testing it on other Win boxes (also XP running) the code executed without error. So I blamed a crappy Win installation and didn't investigate it further (I didn't try using functions that call other *.pyd libs like _zeros.pyd called by the brentq, brenth, bisect, for example). hth -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From elcorto at gmx.net Fri Mar 30 11:33:44 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 30 Mar 2007 17:33:44 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E91B@squirrel.lpt.rwth-aachen.de> References: <1DA101C8C553404E8367810F73E3926A73E91B@squirrel.lpt.rwth-aachen.de> Message-ID: <460D2DD8.8080903@gmx.net> Ri Hai wrote: [snip] > > Warning: FAILURE importing tests for from '...\\linsolve\\umfpack\\__init__.pyc'> > > C:\Python25\lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ) > [snip] The UMFPACK maessage just tells you that your installation has no module scipy.linsolve.umfpack (the build of this was disabled when scipy was compliled) which shouldn't be something to worry about as long as you don't want to use it. > > Don't worry about a warning regarding the number of bytes read. > > > > ........................................................................................................................................................................................................................................................................................................................................................................................, > at this point, python was closed. > Does this mean the test didn't finish? Did you see something like #----------------------------------------------------------------------# >>> scipy.test() [snip] ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 6.99076835435e-09 ...Result may be inaccurate, approximate err = 7.27595761418e-12 .............................................................Residual: 1.05006950608e-07 . **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...................... ---------------------------------------------------------------------- Ran 1602 tests in 3.848s OK >>> #----------------------------------------------------------------------# -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From hai at lpt.rwth-aachen.de Fri Mar 30 11:45:45 2007 From: hai at lpt.rwth-aachen.de (Ri Hai) Date: Fri, 30 Mar 2007 17:45:45 +0200 Subject: [SciPy-user] how can I get fsolve() work?! 
In-Reply-To: <460D2DD8.8080903@gmx.net> Message-ID: <1DA101C8C553404E8367810F73E3926A73E91C@squirrel.lpt.rwth-aachen.de> Hello Steve, you are right, the test didn't finish and I didn't see any text that you added. Now I am reading the information about installation in Scipy homepage. Maybe I have done something wrong. If you have any idea, please let me know. Thank you! Hai -----Urspr?ngliche Nachricht----- Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von Steve Schmerler Gesendet: Freitag, 30. M?rz 2007 17:34 An: SciPy Users List Betreff: Re: [SciPy-user] how can I get fsolve() work?! Ri Hai wrote: [snip] > > Warning: FAILURE importing tests for from '...\\linsolve\\umfpack\\__init__.pyc'> > > C:\Python25\lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ) > [snip] The UMFPACK maessage just tells you that your installation has no module scipy.linsolve.umfpack (the build of this was disabled when scipy was compliled) which shouldn't be something to worry about as long as you don't want to use it. > > Don't worry about a warning regarding the number of bytes read. > > > > ........................................................................................................................................................................................................................................................................................................................................................................................, > at this point, python was closed. > Does this mean the test didn't finish? Did you see something like #----------------------------------------------------------------------# >>> scipy.test() [snip] ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 6.99076835435e-09 ...Result may be inaccurate, approximate err = 7.27595761418e-12 .............................................................Residual: 1.05006950608e-07 . **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...................... ---------------------------------------------------------------------- Ran 1602 tests in 3.848s OK >>> #----------------------------------------------------------------------# -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From elcorto at gmx.net Fri Mar 30 12:11:34 2007 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 30 Mar 2007 18:11:34 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E91C@squirrel.lpt.rwth-aachen.de> References: <1DA101C8C553404E8367810F73E3926A73E91C@squirrel.lpt.rwth-aachen.de> Message-ID: <460D36B6.7040205@gmx.net> Ri Hai wrote: > Hello Steve, > > you are right, the test didn't finish and I didn't see any text that you added. With the test, does it show the same error message box as it does if you run your fsolve() example? 
If so, can you click on "details" (I think upper right corner of the box) and see if it's related to some kind of *.pyd file? > > Now I am reading the information about installation in Scipy homepage. Maybe I have done something wrong. If you have any idea, please let me know. Thank you! > On Win, it's usually just installing Python (python-2.5.msi or something), numpy (numpy-1.0.1.win32-py2.5.exe) and scipy (scipy-0.5.2.win32-py2.5.exe). If you plan to build things from source, that's another story (haven't done that on Win). Anyway, if it is the same problem that I had (loading a *.pyd lib) I suspect that it's not a scipy-specific issue but rather a Windows problem (but I'm not an expert here...). If you have access to another Win box, I suggest that you try running you example there before doing anything else. -- cheers, steve I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams From fccoelho at gmail.com Fri Mar 30 12:18:07 2007 From: fccoelho at gmail.com (Flavio Coelho) Date: Fri, 30 Mar 2007 13:18:07 -0300 Subject: [SciPy-user] returning an array from weave inline In-Reply-To: <460B6E94.8010708@itc.it> References: <460B6E94.8010708@itc.it> Message-ID: Yes Emanuele, that makes complete sense. thanks, Fl?vio On 3/29/07, Emanuele Olivetti wrote: > > Flavio Coelho wrote: > > Thanks Fernando, > > > > I thought variables converted to C were copies, so that there could not > be > > an in place operation on a Python variable. > > They are copies like it is in the C way of thinking: if you pass a > basic type (e.g. an int) to the inlined C code and modify it then > when you are back to python there is no change in the initial variable > you passed since just the copy was modified. If you pass something more > complex > (e.g. a numpy array) then you are just giving a copy of the it's reference > (or pointer) to that function. With that reference (or copy of) you > can access the the _original_ array and modify it. So when you are back to > python your array _is_ changed. This mechanism allows better efficiency: > what happens if you have a huge 2Gb array and pass it to the inline code? > It's not desirable to make a whole copy and allocate other 2Gb or RAM... > > Anyway python itself works like that. E.g.: > ---- > def f(a,b): > a=a+1 > b.append(1) > return > > a=1 > b=[3,2] > print a,b > f(a,b) > print a,b > ---- > > Hope this helps, > > Emanuele > > ------------------ > ITC -> dall'1 marzo 2007 Fondazione Bruno Kessler > ITC -> since 1 March 2007 Fondazione Bruno Kessler > ------------------ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Fl?vio Code?o Coelho registered Linux user # 386432 get counted at http://counter.li.org --------------------------- "software gets slower faster than hardware gets faster" Niklaus Wirth's law -------------- next part -------------- An HTML attachment was scrubbed... URL: From hai at lpt.rwth-aachen.de Fri Mar 30 12:26:37 2007 From: hai at lpt.rwth-aachen.de (Ri Hai) Date: Fri, 30 Mar 2007 18:26:37 +0200 Subject: [SciPy-user] how can I get fsolve() work?! In-Reply-To: <460D36B6.7040205@gmx.net> Message-ID: <1DA101C8C553404E8367810F73E3926A73E91D@squirrel.lpt.rwth-aachen.de> Hello Steve, > With the test, does it show the same error message box as it does if you > run your fsolve() example? Yes. 
From hai at lpt.rwth-aachen.de  Fri Mar 30 12:26:37 2007
From: hai at lpt.rwth-aachen.de (Ri Hai)
Date: Fri, 30 Mar 2007 18:26:37 +0200
Subject: [SciPy-user] how can I get fsolve() work?!
In-Reply-To: <460D36B6.7040205@gmx.net>
Message-ID: <1DA101C8C553404E8367810F73E3926A73E91D@squirrel.lpt.rwth-aachen.de>

Hello Steve,

> With the test, does it show the same error message box as it does if you
> run your fsolve() example?

Yes.

> If so, can you click on "details" (I think upper right corner of the box)
> and see if it's related to some kind of *.pyd file?

Yes!

> Anyway, if it is the same problem that I had (loading a *.pyd lib), I
> suspect that it's not a scipy-specific issue but rather a Windows
> problem (but I'm not an expert here...).

I think you are right!!

> If you have access to another Win box, I suggest that you try running your
> example there before doing anything else.

What do you mean by Win box?

Hai

From elcorto at gmx.net  Fri Mar 30 13:05:32 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Fri, 30 Mar 2007 19:05:32 +0200
Subject: [SciPy-user] how can I get fsolve() work?!
In-Reply-To: <1DA101C8C553404E8367810F73E3926A73E91D@squirrel.lpt.rwth-aachen.de>
References: <1DA101C8C553404E8367810F73E3926A73E91D@squirrel.lpt.rwth-aachen.de>
Message-ID: <460D435C.30703@gmx.net>

Ri Hai wrote:
>> If you have access to another Win box, I suggest that you try running your
>> example there before doing anything else.
>
> What do you mean by Win box?
>

A computer running Windows :)

--
cheers,
steve

I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams

From eric at deeplycloudy.com  Fri Mar 30 13:27:55 2007
From: eric at deeplycloudy.com (Eric Bruning)
Date: Fri, 30 Mar 2007 12:27:55 -0500
Subject: [SciPy-user] voronoi diagrams and the delaunay module
Message-ID:

I'd like to retrieve the Voronoi diagram for a set of points. Internally, the
delaunay module (in the sandbox) appears to calculate the Voronoi diagram, but
I don't think it's exposed in python.

What would need to be done to enable Voronoi output?

Eventually, I'm hoping to pass the polygons of the Voronoi diagram to a
PolyCollection in matplotlib...

Eric
From jtravs at gmail.com  Fri Mar 30 13:44:49 2007
From: jtravs at gmail.com (John Travers)
Date: Fri, 30 Mar 2007 18:44:49 +0100
Subject: [SciPy-user] collocation code for boundary value ODEs
Message-ID: <3a1077e70703301044j1f5252d6xce20126139daddcb@mail.gmail.com>

Hi all,

I'm wondering if there is any python code in scipy or elsewhere which
implements a collocation method for solving boundary value ODEs?
Something similar to or based on COLNEW or COLSYS from netlib.

I'd be very happy if there is. If not, then I'll write my own and post
it back here.

Thanks for any help,
John Travers

From robert.kern at gmail.com  Fri Mar 30 14:15:13 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 30 Mar 2007 13:15:13 -0500
Subject: [SciPy-user] voronoi diagrams and the delaunay module
In-Reply-To:
References:
Message-ID: <460D53B1.8010200@gmail.com>

Eric Bruning wrote:
> I'd like to retrieve the Voronoi diagram for a set of points. Internally,
> the delaunay module (in the sandbox) appears to calculate the Voronoi
> diagram, but I don't think it's exposed in python.
>
> What would need to be done to enable Voronoi output?
>
> Eventually, I'm hoping to pass the polygons of the Voronoi diagram to a
> PolyCollection in matplotlib...

All of the information that you need and that the algorithm can provide is
already given. Just connect the circumcenter of each triangle to the
circumcenters of the triangle's neighbors. I left out doing much more because
I have no idea what concrete formats consumers might need. Voronoi diagrams
are somewhat abstract beasts; the circumcenters and the connections between
them (and a point at infinity) are all that is really defined for them. Doing
something concrete with them requires more information than a general tool
really has available. For example, if you are going to draw the lines out to
infinity, how should I represent them?

Forming them into polygons is a bit tricky, but not too bad. For each point in
the interior of the triangulation (i.e., points not on the convex hull), walk
around the triangles that impinge on it counterclockwise and connect their
circumcenters. To handle the edges of the Voronoi diagram that go out to
infinity, you need to know the bounding box of your display. Walk around the
edges of the convex hull, find the circumcenter of the triangle with the edge
and connect it to a point outside the display bounding box along the line from
the circumcenter perpendicular to the convex hull edge. Now walk around the
points of the convex hull, and make a Voronoi polygon for each of them using
the appropriate "points at infinity" that you made.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco
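A rough numpy sketch of the interior part of that recipe may help. It assumes
the caller already has the points array, an (ntri, 3) array of triangle vertex
indices and an (ntri, 3) array of triangle neighbours with -1 marking "no
neighbour" (a convex-hull edge); those names and conventions are assumptions
about the data at hand, not the sandbox module's actual API, and the hull and
point-at-infinity handling is left out:

----
import numpy as np

def circumcenters(points, triangles):
    # circumcenter of each triangle; points is (npts, 2), triangles is (ntri, 3)
    a = points[triangles[:, 0]]
    b = points[triangles[:, 1]]
    c = points[triangles[:, 2]]
    ax, ay = a[:, 0], a[:, 1]
    bx, by = b[:, 0], b[:, 1]
    cx, cy = c[:, 0], c[:, 1]
    d = 2.0 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay)
          + (cx**2 + cy**2)*(ay - by)) / d
    uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx)
          + (cx**2 + cy**2)*(bx - ax)) / d
    return np.column_stack((ux, uy))

def finite_voronoi_edges(points, triangles, neighbors):
    # one finite Voronoi segment per pair of adjacent triangles
    cc = circumcenters(points, triangles)
    edges = []
    for i in range(len(triangles)):
        for j in neighbors[i]:
            if j > i:                 # each shared edge once; also skips -1
                edges.append((cc[i], cc[j]))
    return edges
----

The finite segments can be fed straight to a matplotlib LineCollection;
getting one closed polygon per input point (for a PolyCollection) additionally
needs the convex-hull walk and the "points at infinity" described above.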
From emmanuelj at aims.ac.za  Fri Mar 30 17:35:50 2007
From: emmanuelj at aims.ac.za (Emmanuel Ohieku Jonah)
Date: Fri, 30 Mar 2007 23:35:50 +0200
Subject: [SciPy-user] weave.blitz
Message-ID: <460D82B6.7020003@aims.ac.za>

Hi All,

I need a simple code to illustrate how weave.blitz works. A working
example might help.

Thanks.

--
Emmanuel O. Jonah
African Institute for Mathematical Sciences
No. 6 Melrose road
Muizenberg, 7945
Cape Town
South Africa.
Mobile Phone: +27 78 324485
email: emmanuelj at aims.ac.za
       emijones2001 at yahoo.com

From rhc28 at cornell.edu  Fri Mar 30 18:24:10 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Fri, 30 Mar 2007 18:24:10 -0400
Subject: [SciPy-user] collocation code for boundary value ODEs
In-Reply-To: <3a1077e70703301044j1f5252d6xce20126139daddcb@mail.gmail.com>
References: <3a1077e70703301044j1f5252d6xce20126139daddcb@mail.gmail.com>
Message-ID:

John,

Sorry it won't be very soon but members of my group and I are actively
working on an interface to AUTO's collocation routines that will allow us to
solve BVPs as part of PyDSTool (and in a much more user-friendly way than the
native AUTO setup). If you would like to contribute to that work it would
move it along a lot quicker, but of course that would require you to learn
and use the PyDSTool environment for your computations!

Otherwise, to my knowledge there hasn't been any noise made about new BVP
solvers involving Scipy since the last time your question was asked here a
few months ago.

-Rob

On 30/03/07, John Travers wrote:
> Hi all,
> I'm wondering if there is any python code in scipy or elsewhere which
> implements a collocation method for solving boundary value ODEs?
> Something similar to or based on COLNEW or COLSYS from netlib.
> I'd be very happy if there is. If not, then I'll write my own and post
> it back here.
> Thanks for any help,
> John Travers
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From ckkart at hoc.net  Fri Mar 30 20:48:41 2007
From: ckkart at hoc.net (Christian K)
Date: Sat, 31 Mar 2007 09:48:41 +0900
Subject: [SciPy-user] weave.blitz
In-Reply-To: <460D82B6.7020003@aims.ac.za>
References: <460D82B6.7020003@aims.ac.za>
Message-ID:

Emmanuel Ohieku Jonah wrote:
> Hi All,
>
> I need a simple code to illustrate how weave.blitz works. A working
> example might help.

I first read this: http://www.scipy.org/PerformancePython

the weave docs are at:
http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/weave/doc/tutorial.html?format=raw

Christian
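Since the request was for a minimal working example, here is a small sketch of
typical weave.blitz usage along the lines of the PerformancePython page. The
array names and sizes are arbitrary; the point is that an ordinary numpy
expression is written as a string, compiled with blitz++ and evaluated in
place:

----
import numpy
from scipy import weave

n = 500
a = numpy.empty((n, n))
b = numpy.random.rand(n, n)
c = numpy.random.rand(n, n)

expr = "a = b + 2.0*c"      # a plain numpy expression, written as a string
weave.blitz(expr)           # compiled on the first call, cached afterwards

print numpy.allclose(a, b + 2.0*c)   # check against pure numpy: True
----

The first call pays the compilation cost; subsequent calls reuse the cached
extension module, which is where the speed-up over temporaries-heavy numpy
expressions comes from.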
From hai at lpt.rwth-aachen.de  Sat Mar 31 06:45:11 2007
From: hai at lpt.rwth-aachen.de (Ri Hai)
Date: Sat, 31 Mar 2007 12:45:11 +0200
Subject: [SciPy-user] how can I get fsolve() work?!
In-Reply-To: <460D435C.30703@gmx.net>
Message-ID: <1DA101C8C553404E8367810F73E3926A73E91E@squirrel.lpt.rwth-aachen.de>

Oh! :-)

Thanks, I will try it.

Ri

From jtravs at gmail.com  Sat Mar 31 08:00:06 2007
From: jtravs at gmail.com (John Travers)
Date: Sat, 31 Mar 2007 13:00:06 +0100
Subject: [SciPy-user] collocation code for boundary value ODEs
In-Reply-To:
References: <3a1077e70703301044j1f5252d6xce20126139daddcb@mail.gmail.com>
Message-ID: <3a1077e70703310500w71da3b61of994a2379fda55a6@mail.gmail.com>

Hi Rob, all,

Thanks for the info. PyDSTool looks good; I didn't know about this package.
However, I have found a quicker solution to my problem, a direct wrapper of
COLNEW: http://www.elisanet.fi/ptvirtan/software/bvp/index.html by Pauli
Virtanen.

Thanks again,
John

From rhc28 at cornell.edu  Sat Mar 31 11:26:10 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Sat, 31 Mar 2007 11:26:10 -0400
Subject: [SciPy-user] collocation code for boundary value ODEs
In-Reply-To: <3a1077e70703310500w71da3b61of994a2379fda55a6@mail.gmail.com>
References: <3a1077e70703301044j1f5252d6xce20126139daddcb@mail.gmail.com>
	<3a1077e70703310500w71da3b61of994a2379fda55a6@mail.gmail.com>
Message-ID:

This package might be very useful to us, too. Thanks for sharing!

Rob

From cclarke at chrisdev.com  Sat Mar 31 12:53:51 2007
From: cclarke at chrisdev.com (Christopher Clarke)
Date: Sat, 31 Mar 2007 12:53:51 -0400
Subject: [SciPy-user] Spyce and Wingware
Message-ID:

Hi All,

Has anybody managed to configure Wing and Spyce?

Regards
Chris

From silva at crans.org  Sat Mar 31 13:57:34 2007
From: silva at crans.org (Fabrice Silva)
Date: Sat, 31 Mar 2007 17:57:34 +0000 (UTC)
Subject: [SciPy-user] invfreq
Message-ID:

Hello to all,

I wonder why there is no 'invfreq' function in scipy.signal like the one in
octave
(http://www.koders.com/matlab/fidF8CAA512B66C33B66B3C078DE89255A3BBA49C55.aspx).

Is there another implementation or an equivalent function?
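No direct scipy equivalent appears to exist at the time of this thread. For
readers who land here, the sketch below illustrates the linearized
least-squares idea behind this kind of fit (Levi's method): with a[0] fixed to
1, the condition B(e^{jw}) - H(w)*A(e^{jw}) = 0 becomes linear in the
remaining coefficients and can be solved with lstsq. The function name is made
up, and this is only an illustration of the approach (no weighting, no
iterative refinement), not a drop-in replacement for Octave's invfreq:

----
import numpy as np

def invfreqz_sketch(h, w, nb, na):
    # fit b (nb+1 coeffs) and a (na+1 coeffs, a[0] = 1) so that
    # B(e^{jw}) / A(e^{jw}) ~ h at the frequencies w (rad/sample)
    h = np.asarray(h, dtype=complex)
    w = np.asarray(w, dtype=float)
    ejw = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))   # e^{-j w k}
    Bcols = ejw[:, :nb + 1]                    # columns for b_0 .. b_nb
    Acols = -h[:, None] * ejw[:, 1:na + 1]     # columns for a_1 .. a_na
    M = np.hstack((Bcols, Acols))
    # stack real and imaginary parts so the problem is real-valued
    Mr = np.vstack((M.real, M.imag))
    rhs = np.concatenate((h.real, h.imag))
    x = np.linalg.lstsq(Mr, rhs)[0]
    b = x[:nb + 1]
    a = np.concatenate(([1.0], x[nb + 1:]))
    return b, a
----

For instance, b, a = invfreqz_sketch(h, w, 2, 2) fits a biquad to a measured
response h at frequencies w; the result can then be compared against h with
scipy.signal.freqz.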