From f.braennstroem at gmx.de Sun Oct 1 05:15:29 2006 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Sun, 1 Oct 2006 11:15:29 +0200 Subject: [SciPy-user] Pros and Cons of Python verses other array environments References: <200609291458.34486.thyreau@shfj.cea.fr> Message-ID: Hi, * Benjamin THYREAU wrote: > (...) >> >> When it comes to the distribution, I like Enthought's Python package, but >> it is only for Windows. What I have done for Linux is set up a gar make >> system (also used by konstruct and gargnome) to build everything anyone >> needs to get a python system up and running. The download is quite small >> (1 makefile per package to be built). The makefile tells the system the >> name and location of the download. When make is run it automatically goes >> to the web, gets the source, builds and installs it. All the user has to >> do is type 'make install' and they have a full scientific python >> environment waiting for them. Right now I have it set up to build blas, >> lapack, numpy, scipy, pygtk, matplotlib, ipython and python2.4 (if it is >> not already there). I also add a little script in their path that can be >> run to set the correct environment variables, launch ipython and import >> scipy and mpl. >> >> Perhaps this is a way to easily get Linux users up and running. >> >> s. >> >> > Sounds interesting. I remember having mostly good experience with Konstrut. > Is there some publicy available location where i could find your script ? I would like to see your script too! Sounds really nice :-) Greetings! Fabian From peridot.faceted at gmail.com Sun Oct 1 10:35:58 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sun, 1 Oct 2006 10:35:58 -0400 Subject: [SciPy-user] Running mean: numpy or pyrex? In-Reply-To: References: Message-ID: On 30/09/06, David Finlayson wrote: > I was fairly impressed with the filter performance using psyco until I met > some truly huge files. 
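[A running mean of the kind asked about in this thread can be expressed as a vectorized numpy operation. This is a hypothetical sketch, not the poster's code; the cumulative-sum trick keeps all the looping inside numpy's C internals:]

```python
import numpy as np

def running_mean(x, window):
    """Mean over a sliding window, computed with a cumulative sum.

    Equivalent to averaging each length-`window` slice of x in a
    Python loop, but the arithmetic happens in numpy's C loops.
    """
    c = np.cumsum(np.concatenate(([0.0], x)))
    return (c[window:] - c[:-window]) / window

data = np.arange(10, dtype=float)
print(running_mean(data, 3))  # -> [1. 2. 3. 4. 5. 6. 7. 8.]
```

[For files too large to hold in RAM, the same function can be applied to an array created with numpy.memmap, so only the pages actually sliced get read from disk.]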
Now I am wondering if I would get any speed benefit > from using numpy (for the 2D array access) or would I be better off trying > pyrex for the running mean loops? One of the issues I am facing is the > possibility that very large files may need to be read from the disk as > needed instead of holding the whole thing in ram. I also need to distribute > this code to my team, so minimizing the pain of installing numeric and other > packages would be an issue. Should I expect significant speed benefits from > a numpy version of this type of code? Numpy is designed for exactly this kind of computation. It will almost certainly be simpler to express your calculation in numpy, and it may be faster (as the internal loops are written in C). Numpy is also capable of operating on on-disk arrays by mmap()ing them (so that pages are loaded and unloaded as needed). Once in numpy, there are also tools such as weave which should allow you to accelerate calculations further (by running more code in C). On the other hand, depending on what you're doing, psyco may accelerate your program, and it is certainly easy to add (two lines: import psyco; psyco.full() ) though it has never actually accelerated any computation that I tried it on. If you don't care about linear algebra speed, I think numpy is relatively easy to install. (For me it was just a question of clicking on a checkbox, as it is packaged for my distribution of Linux.) If you're doing major calculations, you may need to ensure that your favourite fast linear algebra package gets detected. If you do end up trying it, please report on your experiences. Performance is not always what we expect. (For example, I tried converting a number of examples from the Great Computer Language Shootout to use Numeric only to discover that they ran more slowly.) A. M.
Archibald From jstano at interchange.ubc.ca Sun Oct 1 15:00:58 2006 From: jstano at interchange.ubc.ca (joe stano) Date: Sun, 01 Oct 2006 12:00:58 -0700 Subject: [SciPy-user] Cobyla Help part 2 Message-ID: <0J6H00HLR0TIEE@smtp.interchange.ubc.ca> Thanks for the advice. Although reshape would not work because the size was larger then 32, I am still able to reshape the array with loops. The script now runs, however, the constraints do not seem to be working (i.e., I get answers that I should not). I have made sure that each constraint returns a single value (+ if constraint is met and - if not) and each constraint takes in the decision variable. I have attached a portion of the script (I know it is not pretty but I am just starting out), so that maybe someone can see if I have incorrectly called something? Thanks for any help, joe -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: try_optimize_help2.py Type: application/octet-stream Size: 4953 bytes Desc: not available URL: From skokot at po.opole.pl Sun Oct 1 16:08:31 2006 From: skokot at po.opole.pl (Seweryn Kokot) Date: Sun, 01 Oct 2006 22:08:31 +0200 Subject: [SciPy-user] matplotlib, numpy and scipy Message-ID: <8764f3u7cw.fsf@poczta.po.opole.pl> As I remember some time ago matplotlib required scipy and now requires numpy. Any explaination? I read somewhere that scipy may conflict with numpy... is it true? Regards, Seweryn Kokot From robert.kern at gmail.com Sun Oct 1 16:30:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 01 Oct 2006 15:30:38 -0500 Subject: [SciPy-user] matplotlib, numpy and scipy In-Reply-To: <8764f3u7cw.fsf@poczta.po.opole.pl> References: <8764f3u7cw.fsf@poczta.po.opole.pl> Message-ID: <4520256E.7000001@gmail.com> Seweryn Kokot wrote: > As I remember some time ago matplotlib required scipy and now requires > numpy. Any explaination? 
numpy used to be called "scipy core" and was renamed because people got confused and thought that implied they had to install all of scipy. > I read somewhere that scipy may conflict with numpy... is it true? No, scipy builds on top of numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Sun Oct 1 17:07:13 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 1 Oct 2006 23:07:13 +0200 Subject: [SciPy-user] Q: Building SciPy for Windows without Cygwindependency? In-Reply-To: <451EF0C5.1010308@cox.net> Message-ID: Hello all > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > On Behalf Of Kyle Ferrio > Sent: 01 October 2006 00:34 > To: SciPy Users > Subject: [SciPy-user] Q: Building SciPy for Windows without > Cygwindependency? > > I want to build a full-up SciPy for MS Windows in line with the Python > and SciPy licenses. Are you planning to do this with MinGW or with the Microsoft compiler? > I noticed the following text in the excellent installation instructions, > regarding the choice of compiler: > > "Cygwin is required if you want to build ATLAS yourself." > > Well, I often build ATLAS myself. And now (implicitly) I want to build > ATLAS in a way that is available to SciPy. > > So I have two questions: > > 1. Must ATLAS -- to be used with SciPy -- be built with a dependence on > the GPL'ed Cygwin runtime library? > (I can't imagine why this would be the case, but I want to test my > understanding.) I've built ATLAS 3.7.11 under Cygwin, AFAIR, this produced a static library that is not linked to the Cygwin runtime. > 2. Is the "need" for Cygwin actually just a way to get (extremely > convenient) tools like 'ar' which do not infect the ATLAS lib with the > GPL? 
The need for Cygwin is a requirement of ATLAS's build system, parts of which are written in C and have a very Unix-centric view of the world. Regards, Albert From ckkart at hoc.net Sun Oct 1 19:13:47 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 02 Oct 2006 08:13:47 +0900 Subject: [SciPy-user] Cobyla Help part 2 In-Reply-To: <0J6H00HLR0TIEE@smtp.interchange.ubc.ca> References: <0J6H00HLR0TIEE@smtp.interchange.ubc.ca> Message-ID: <45204BAB.9000704@hoc.net> joe stano wrote: > Thanks for the advice. > > Although reshape would not work because the size was larger then 32, I am > still able to reshape the array with loops. ? Why that? Why can't you reshape an array larger than 32 (elements, I guess)? If you have an NxN array with N=32 you can call numpy.ravel(data) to get a 1D array and numpy.reshape(data, (N,N)) to get its original shape back. > The script now runs, however, the constraints do not seem to be working > (i.e., I get answers that I should not). I have made sure that each > constraint returns a single value (+ if constraint is met and - if not) and > each constraint takes in the decision variable. I think the constraints should return a continuous function, not just -1 and +1, so the minimizer knows where to go to improve the result. I might be wrong here. > I have attached a portion of the script (I know it is not pretty but I am > just starting out), so that maybe someone can see if I have incorrectly > called something? Please send a complete example, including your input data, otherwise it is difficult to find out what's happening. Christian From ckkart at hoc.net Sun Oct 1 22:22:42 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 02 Oct 2006 11:22:42 +0900 Subject: [SciPy-user] does scipy have a function...
partII Message-ID: <452077F2.5060001@hoc.net> Hi, concerning the discussions about finding the upper/lower limit of some data some days ago (http://article.gmane.org/gmane.comp.python.scientific.user/8926), I've enhanced my script and added constraints to the 2nd derivative of the function. This gives much smoother results, at least for that special type of data Bill has to deal with. That additional constraint seems to help the minimizer find a solution and the number of function evaluations goes down significantly. I'm really impressed how well fmin_cobyla works. Regards, Christian -------------- next part -------------- A non-text attachment was scrubbed... Name: baselineDer.py Type: text/x-python Size: 1436 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: example.png Type: image/png Size: 55666 bytes Desc: not available URL: From johnlichtenstein at mac.com Mon Oct 2 01:03:34 2006 From: johnlichtenstein at mac.com (John Lichtenstein) Date: Sun, 1 Oct 2006 22:03:34 -0700 Subject: [SciPy-user] matplotlib on osx In-Reply-To: <20061001023457.GA14026@clipper.ens.fr> References: <20061001023457.GA14026@clipper.ens.fr> Message-ID: Has anyone gotten matplotlib running on OSX recently? I tried upgrading to 10.4.8, and building Python 2.5, numpy, scipy, libpng, and freetype from source. Everything went fine, even matplotlib installed without an error. But when I import matplotlib I get 21:49:24: Debug: ../src/common/object.cpp(224): assert "sm_classTable- >Get(m_className) == NULL" failed: class already in RTTI table - have you used IMPLEMENT_DYNAMIC_CLASS() twice (may be by linking some object module(s) twice)? ../src/common/object.cpp(224): assert "sm_classTable->Get (m_className) == NULL" failed: class already in RTTI table - have you used IMPLEMENT_DYNAMIC_CLASS() twice (may be by linking some object module(s) twice)? Trace/BPT trap What's the clean way to uninstall Python? 
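[Returning to the fmin_cobyla discussion above: the suggestion is that each constraint function return a continuous value that is >= 0 when the constraint is satisfied and goes negative as it is violated, so the minimizer has a gradient of feasibility to follow. A minimal sketch on a made-up problem (minimize x^2 + y^2 subject to x + y >= 1), not Joe's actual script:]

```python
import numpy as np
from scipy.optimize import fmin_cobyla

def objective(p):
    x, y = p
    return x**2 + y**2

# Continuous constraint: positive and growing as x + y exceeds 1,
# negative when the constraint is violated -- not a bare +/-1 flag.
def constraint(p):
    x, y = p
    return p[0] + p[1] - 1.0

p0 = [2.0, 2.0]
popt = fmin_cobyla(objective, p0, [constraint], rhoend=1e-7)
print(popt)  # close to [0.5, 0.5], the constrained minimum
```

[Each entry in the constraint list is just a callable taking the decision variable; COBYLA treats a return value >= 0 as "satisfied".]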
I used to have a working matplotlib using the pythonmac distribution. Maybe I can get back to that. From willemjagter at gmail.com Mon Oct 2 04:30:59 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 2 Oct 2006 10:30:59 +0200 Subject: [SciPy-user] Sparse: Efficient metod to convert to CSC format Message-ID: <8b3894bc0610020130n48bf6029sb8122ed92ecf2c5d@mail.gmail.com> I've been bugging Robert C. and Ed S. (thanks guys) for a while about this now, but I think it might be good if I share my experiences with other users. When converting a sparse matrix to CSC format, one of the quickest ways to get there is to give three vectors containing; (1) [values]: the values, (2) [rowind]: the row indices and (3) [colptr]: the column pointer. If one times how long it takes to get your CSC matrix by doing sparse.csc_matrix((values, rowind, colptr)) compared to the other ways to get it, e.g., csc_matrix(dense_mtx), I've found that the first method is faster (enough to be significant for me). One can solve a sparse system with , if your matrix is in CSC format. And here's my question: Let's say I have a way (and I might :-)) to construct those 3 vectors very quickly, how can I use them directly as arguments in ? For example: solution = linsolve.spsolve([values],[rowind],[colptr],[RHS]) This way I can skip the conversion to CSC format since it's already in that form, albeit 'decomposed'. -- William From willemjagter at gmail.com Mon Oct 2 07:14:39 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 2 Oct 2006 13:14:39 +0200 Subject: [SciPy-user] error building pyopengl... In-Reply-To: <451DB1D8.8000707@gmail.com> References: <451DB1D8.8000707@gmail.com> Message-ID: <8b3894bc0610020414q3b396436wdaa9e906c7939536@mail.gmail.com> Can't you make use of a deb or rpm instead of building it yourself, perhaps? 
On 30/09/06, fred wrote: > Hi the list, > > Yes, I know it's a little bit OT, but I saw on Wilna DuToit Scipy > webpage his mat3d.py and thus > I need to build PyOpenGL (2.0.2.01). > > I can't build it because gcc failed with exit status 1... > > What do I need to build it ? > > PyOpenGL seems to be quite old on sourceforge and not to be still > maintained. > > > Cheers, > > > PS : python2.4, swig 1.3.23 > > -- > ? Python, c'est un peu comme la ba?onnette en 14, faut toujours sortir avec, > et pas h?siter d'en mettre un coup en cas de besoin. ? > J.-Ph. D. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Mon Oct 2 07:24:55 2006 From: fredmfp at gmail.com (fred) Date: Mon, 02 Oct 2006 13:24:55 +0200 Subject: [SciPy-user] error building pyopengl... In-Reply-To: <8b3894bc0610020414q3b396436wdaa9e906c7939536@mail.gmail.com> References: <451DB1D8.8000707@gmail.com> <8b3894bc0610020414q3b396436wdaa9e906c7939536@mail.gmail.com> Message-ID: <4520F707.9030002@gmail.com> William Hunter a ?crit : >Can't you make use of a deb or rpm instead of building it yourself, perhaps? > > 1) I build all my own python (2.4) stuff in /usr/local. 2) There is no python2.4-opengl deb package (or source) for my sarge. -- ? Python, c'est un peu comme la ba?onnette en 14, faut toujours sortir avec, et pas h?siter d'en mettre un coup en cas de besoin. ? J.-Ph. D. From cimrman3 at ntc.zcu.cz Mon Oct 2 09:13:05 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Oct 2006 15:13:05 +0200 Subject: [SciPy-user] Sparse: Efficient metod to convert to CSC format In-Reply-To: <8b3894bc0610020130n48bf6029sb8122ed92ecf2c5d@mail.gmail.com> References: <8b3894bc0610020130n48bf6029sb8122ed92ecf2c5d@mail.gmail.com> Message-ID: <45211061.8050806@ntc.zcu.cz> William Hunter wrote: > I've been bugging Robert C. and Ed S. 
(thanks guys) for a while about > this now, but I think it might be good if I share my experiences with > other users. > > When converting a sparse matrix to CSC format, one of the quickest > ways to get there is to give three vectors > containing; > > (1) [values]: the values, > (2) [rowind]: the row indices and > (3) [colptr]: the column pointer. > > If one times how long it takes to get your CSC matrix by doing > sparse.csc_matrix((values, rowind, colptr)) compared to the other ways > to get it, e.g., csc_matrix(dense_mtx), I've found that the first > method is faster (enough to be significant for me). > > One can solve a sparse system with , if your matrix > is in CSC format. And here's my question: Let's say I have a way (and > I might :-)) to construct those 3 vectors very quickly, how can I use > them directly as arguments in ? For example: > > solution = linsolve.spsolve([values],[rowind],[colptr],[RHS]) If you have already (values, rowind, colptr) triplet, creating a CSC matrix is virtually a no-operation (besides some sanity checks). Is it really so inconvenient to write: solution = linsolve.spsolve( csc_matrix( (values, rowind, colptr), shape ),[RHS])? On the other hand, the syntax solution = linsolve.spsolve( (values,rowind,colptr,'csc'), [RHS]) might be useful as a syntactic sugar, so I am not against adding it. Note however, that you must indicate somehow if your triplet is CSR or CSC or ... - it is not much shorter then writing the full CSC constructor. r. From lists.steve at arachnedesign.net Mon Oct 2 09:40:54 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Mon, 2 Oct 2006 09:40:54 -0400 Subject: [SciPy-user] matplotlib on osx In-Reply-To: References: <20061001023457.GA14026@clipper.ens.fr> Message-ID: <184E8FF9-36DC-413C-A565-EA904256F79B@arachnedesign.net> Hi John, > Has anyone gotten matplotlib running on OSX recently? I tried > upgrading to 10.4.8, and building Python 2.5, numpy, scipy, libpng, > and freetype from source. 
Everything went fine, even matplotlib > installed without an error. But when I import matplotlib I get Mine seems to be working well. I've installed python and the various dependencies of matplotplib via MacPorts. The dependencies for py-matplotlib are: python24, freetype, libpng, antigraingeometry, py-numarray, py- numeric, py-dateutil, py-tz I then reinstalled matplotlib myself via recent svn checkouts since I wanted it to be working with the latest svn numpy as well. So I have some extra stuff lying around since the ports installations depend on them, but it all seems to work fine for me. If you want to use python 2.5, however, MacPorts won't do you any good. HTH, -steve From willemjagter at gmail.com Mon Oct 2 10:15:33 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 2 Oct 2006 16:15:33 +0200 Subject: [SciPy-user] Sparse: Efficient method to convert to CSC format Message-ID: <8b3894bc0610020715m4efd4565m7c36a1c040f390a9@mail.gmail.com> On 02/10/06, Robert Cimrman wrote: > William Hunter wrote: > > I've been bugging Robert C. and Ed S. (thanks guys) for a while about > > this now, but I think it might be good if I share my experiences with > > other users. > > > > When converting a sparse matrix to CSC format, one of the quickest > > ways to get there is to give three vectors > > containing; > > > > (1) [values]: the values, > > (2) [rowind]: the row indices and > > (3) [colptr]: the column pointer. > > > > If one times how long it takes to get your CSC matrix by doing > > sparse.csc_matrix((values, rowind, colptr)) compared to the other ways > > to get it, e.g., csc_matrix(dense_mtx), I've found that the first > > method is faster (enough to be significant for me). > > > > One can solve a sparse system with , if your matrix > > is in CSC format. And here's my question: Let's say I have a way (and > > I might :-)) to construct those 3 vectors very quickly, how can I use > > them directly as arguments in ? 
For example: > > > > solution = linsolve.spsolve([values],[rowind],[colptr],[RHS]) > > If you have already (values, rowind, colptr) triplet, creating a CSC > matrix is virtually a no-operation (besides some sanity checks). I suppose I'm really interested (curious) to know what happens behind the scenes and why (except for the checks). I had a look at the source, but gave up after a while. There's a method that creates an instance (?) and I think I understand why it is useful to group the vectors, but I can't see how this is done, so perhaps I shouldn't be reading there in any case, but I'd still like to know what happens if it doesn't require a lot of explanation. I know I shouldn't worry about it, but I just can't help myself... > Is it really so inconvenient to write: > solution = linsolve.spsolve( csc_matrix( (values, rowind, colptr), shape > ),[RHS])? > > On the other hand, the syntax > solution = linsolve.spsolve( (values,rowind,colptr,'csc'), [RHS]) > might be useful as a syntactic sugar, so I am not against adding it. No need, that's just syntax. > Note however, that you must indicate somehow if your triplet is CSR or > CSC or ... - it is not much shorter then writing the full CSC constructor. > > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bhendrix at enthought.com Mon Oct 2 11:29:01 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 02 Oct 2006 10:29:01 -0500 Subject: [SciPy-user] Pros and Cons of Python verses other array environments In-Reply-To: <20061001023457.GA14026@clipper.ens.fr> References: <20061001023457.GA14026@clipper.ens.fr> Message-ID: <4521303D.6030405@enthought.com> I've kept quite about this because I'm trying to solidify all the eggs for the next release. 
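[The (values, rowind, colptr) triplet from the sparse thread above can be shown end to end. This is a sketch on a tiny made-up matrix, not William's data, and the module paths are the ones used by current SciPy, where these names live in scipy.sparse and scipy.sparse.linalg rather than the old scipy.linsolve:]

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Encode A = [[2, 0],
#             [1, 3]] in CSC form: values are stored column by column,
# rowind gives each value's row index, and colptr[j]:colptr[j+1]
# delimits the entries belonging to column j.
values = np.array([2.0, 1.0, 3.0])
rowind = np.array([0, 1, 1])
colptr = np.array([0, 2, 3])

A = csc_matrix((values, rowind, colptr), shape=(2, 2))
b = np.array([2.0, 4.0])
x = spsolve(A, b)
print(x)  # [1. 1.]  since 2*1 = 2 and 1 + 3*1 = 4
```

[As Robert Cimrman notes, building the csc_matrix from this triplet is essentially free; the constructor wraps the three arrays without copying the data into another format.]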
Now that the cat is out of the bag, you can find nightly builds of scipy and numpy, continuous builds of ETS, and lots of other eggs here: http://code.enthought.com/enstaller/eggs. The next release is very close, I'm just trying to hammer out the last critical bugs for the enstaller application. Bryce Gael Varoquaux wrote: > As for packages repository and dead-easy install for windows there is > light at the end of the tunnel. Eggs are starting to appear as a package > standard, and enthought in working on a installer for windows capable of > download egg from repositories (see > http://code.enthought.com/enthon/enstaller.shtml for an alpha version). > > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From david at ar.media.kyoto-u.ac.jp Tue Oct 3 01:52:19 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Oct 2006 14:52:19 +0900 Subject: [SciPy-user] Pros and Cons of Python verses other array environments In-Reply-To: <451ECCB1.4070301@gmail.com> References: <451BFBD7.4080701@ieee.org> <51f97e530609281704n522fa5c8ge00571acec53162e@mail.gmail.com> <451CD238.3040509@ar.media.kyoto-u.ac.jp> <451DE807.1010602@bigpond.net.au> <451E5ADE.3050404@ar.media.kyoto-u.ac.jp> <451ECCB1.4070301@gmail.com> Message-ID: <4521FA93.2080905@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > > >> You have first an index on all top level functions, and you can dig it >> through as deep as you want. Notice how you know for a given function >> which call are called when and how often. I have no idea how difficult >> this would be to implement in python. I was told some months ago on the >> main python list that hotshot can give a per line profiling of python >> code, but this is not documented; also, it looks like it is possible to >> get the source code at runtime without too much difficulty in python. 
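[The cProfile/lsprof workflow mentioned in this profiling thread can be driven like this; the profiled function is a deliberately naive stand-in, and the idioms shown are today's Python 3 names. The resulting stats can also be dumped and converted (e.g. with lsprofcalltree.py) for browsing in kcachegrind:]

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100000)
profiler.disable()

# Sort by cumulative time so each function's children are accounted for,
# which is the per-callee information discussed above.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```
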
I >> would be really surprised if nobody tried to do something similar for >> python in general, because this is really useful. I have never found >> anything for python, but it may be just because I don't know the name >> for this kind of tools (I tried googling with terms such as "source >> profiling", without much success). >> > > One excellent tool for drilling through these results is a KDE application > called kcachegrind. It was written to visualize valgrind profiling results, but > the file format is generic enough that someone wrote a script hotshot2calltree > that converts hotshot results to it. I believe it comes with kcachegrind. > > http://kcachegrind.sourceforge.net/cgi-bin/show.cgi > > There is a new profiler the comes with 2.5 (but I believe is compatible with at > least 2.4) called cProfile (available separately as lsprof). It too has a > converter for kcachegrind. > > http://codespeak.net/svn/user/arigo/hack/misc/lsprof/ > http://www.gnome.org/~johan/lsprofcalltree.py > > Thanks for the tip: it looks like lsprof can gives you the information per child functions, which is the main weakness of previous python profilers IMHO. I cannot make it work right now on python2.4, will try at home with python 2.5 David From coatimundi at cox.net Tue Oct 3 01:59:59 2006 From: coatimundi at cox.net (Coatimundi) Date: Mon, 02 Oct 2006 22:59:59 -0700 Subject: [SciPy-user] Building Numpy with Visual Studio? Message-ID: As I read the build guidelines for Numpy at www.scipy.org/Installing_SciPy/Windows I see that the instructions suggest that Numpy can be built with the MSVC tools against an ATLAS lib built with Cygwin tools. At this point I am perplexed. I thought that Cygwin used ELF and that Windows used COFF (actually PE) for static libs. So it is not obvious to me why setup.py should be able to build with the MSVC tools if the atlas and lapack libs are built with Cygwin tools. Maybe the Microsoft linker understands both ELF and COFF? 
Or maybe Cygwin actually uses PE? Can someone help me understand this? Thanks. From fperez.net at gmail.com Tue Oct 3 03:43:56 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 3 Oct 2006 01:43:56 -0600 Subject: [SciPy-user] access to wavelets.scipy.org/svn/ In-Reply-To: <20060929194828.GB13399@swri16wm.electro.swri.edu> References: <20060929194828.GB13399@swri16wm.electro.swri.edu> Message-ID: Hi Glen, On 9/29/06, Glen W. Mabey wrote: > Hello, > > I don't know of a better place to post than this for a question > regarding pywt. > > Following the instructions found at http://wavelets.scipy.org/index.html for > checking out the development version: > > $$ svn co http://wavelets.scipy.org/svn/multiresolution/wavelets/trunk wavelets > svn: REPORT request failed on '/svn/multiresolution/!svn/vcc/default' > svn: REPORT of '/svn/multiresolution/!svn/vcc/default': 400 Bad Request (http://wavelets.scipy.org) > > > and the indicated path in > http://wavelets.scipy.org/svn/multiresolution/pywt/trunk/README.txt > yields a similar error. > > However, trying: > $$ svn co https://wavelets.scipy.org/svn/multiresolution/pywt/trunk/ > at least asks for a password. However, shouldn't there be (or, could > there please be) anonymous access to this tree? As others have pointed, this is an issue with your local proxy config. If you're still having problems, please let us know. Also, note that the brief history provided by Jarrod is correct. Filip's code has been moved over to the scipy site, and Filip has SVN commit access to it. I just did a quick update of the site to better reflect this, and turned it into a Moin wiki completely. This will remove me as the bottleneck for site handling, and Filip (or anyone else) can now contribute more up to date info. A pywavelets project is one of those things that everyone seems to need but nobody has the time to fully develop. 
Currently at least Filip is making progress, but my understanding is that his time is also limited for this project. If more users with expertise in this area and some time to spare are willing to help, there's plenty to do... Cheers, f From robert.kern at gmail.com Tue Oct 3 03:56:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 03 Oct 2006 02:56:41 -0500 Subject: [SciPy-user] Building Numpy with Visual Studio? In-Reply-To: References: Message-ID: <452217B9.1030607@gmail.com> Coatimundi wrote: > As I read the build guidelines for Numpy at > www.scipy.org/Installing_SciPy/Windows I see that the instructions > suggest that Numpy can be built with the MSVC tools against an ATLAS lib > built with Cygwin tools. > > At this point I am perplexed. I thought that Cygwin used ELF and that > Windows used COFF (actually PE) for static libs. > > So it is not obvious to me why setup.py should be able to build with the > MSVC tools if the atlas and lapack libs are built with Cygwin tools. > > Maybe the Microsoft linker understands both ELF and COFF? > Or maybe Cygwin actually uses PE? The latter. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hetland at tamu.edu Tue Oct 3 09:56:12 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue, 3 Oct 2006 08:56:12 -0500 Subject: [SciPy-user] matplotlib on osx In-Reply-To: <184E8FF9-36DC-413C-A565-EA904256F79B@arachnedesign.net> References: <20061001023457.GA14026@clipper.ens.fr> <184E8FF9-36DC-413C-A565-EA904256F79B@arachnedesign.net> Message-ID: I also got mpl to work using the i-Installer to install Freetype2 and libpng (they are both available as universal binaries). 
I have installed on my macbook pro and a number of student macbooks with no issues using this method, -Rob On Oct 2, 2006, at 8:40 AM, Steve Lianoglou wrote: > Hi John, > >> Has anyone gotten matplotlib running on OSX recently? I tried >> upgrading to 10.4.8, and building Python 2.5, numpy, scipy, libpng, >> and freetype from source. Everything went fine, even matplotlib >> installed without an error. But when I import matplotlib I get > > Mine seems to be working well. > > I've installed python and the various dependencies of matplotplib via > MacPorts. > > The dependencies for py-matplotlib are: > python24, freetype, libpng, antigraingeometry, py-numarray, py- > numeric, py-dateutil, py-tz > > I then reinstalled matplotlib myself via recent svn checkouts since I > wanted it to be working with the latest svn numpy as well. So I have > some extra stuff lying around since the ports installations depend on > them, but it all seems to work fine for me. > > If you want to use python 2.5, however, MacPorts won't do you any > good. > > HTH, > -steve > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From coatimundi at cox.net Tue Oct 3 10:03:00 2006 From: coatimundi at cox.net (Coatimundi) Date: Tue, 03 Oct 2006 07:03:00 -0700 Subject: [SciPy-user] Building Numpy with Visual Studio? In-Reply-To: <452217B9.1030607@gmail.com> References: <452217B9.1030607@gmail.com> Message-ID: Robert Kern wrote: > Coatimundi wrote: >> As I read the build guidelines for Numpy at >> www.scipy.org/Installing_SciPy/Windows I see that the instructions >> suggest that Numpy can be built with the MSVC tools against an ATLAS lib >> built with Cygwin tools. >> >> At this point I am perplexed. 
I thought that Cygwin used ELF and that >> Windows used COFF (actually PE) for static libs. >> >> So it is not obvious to me why setup.py should be able to build with the >> MSVC tools if the atlas and lapack libs are built with Cygwin tools. >> >> Maybe the Microsoft linker understands both ELF and COFF? >> Or maybe Cygwin actually uses PE? > > The latter. > Thanks. I scoured the Cygwin site and finally found a strong clue to this in the Cygwin entry at wikipedia: http://en.wikipedia.org/wiki/Cygwin#History From fccoelho at gmail.com Tue Oct 3 14:46:40 2006 From: fccoelho at gmail.com (Flavio Coelho) Date: Tue, 3 Oct 2006 15:46:40 -0300 Subject: [SciPy-user] gaussian_kde broken? Message-ID: Hi, I noticed a new bug in the combination Numpy1.0rc1/Scipy0.5.1: #generate a sample from a normal distribution: a=normal(0,1,10000) from scipy.stats.kde import gaussian_kde as kde d=kde(a) d.evaluate(arange(-1,1,0.1)) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/fccoelho/Documents/Papers/YFgame/ /usr/lib/python2.4/site-packages/scipy/stats/kde.py in evaluate(self, points) 116 diff = self.dataset - points[:,i,newaxis] 117 tdiff = dot(self.inv_cov, diff) --> 118 energy = sum(diff*tdiff,axis=0)/2.0 119 result[i] = sum(exp(-energy),axis=0) 120 TypeError: sum() takes no keyword arguments Are there any fixes available for this? thanks -- Fl?vio Code?o Coelho registered Linux user # 386432 --------------------------- "Laws are like sausages. It's better not to see them being made." Otto von Bismar -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Oct 3 14:51:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 03 Oct 2006 13:51:23 -0500 Subject: [SciPy-user] gaussian_kde broken? 
In-Reply-To: References: Message-ID: <4522B12B.70305@gmail.com> Flavio Coelho wrote: > Hi, > > I noticed a new bug in the combination Numpy1.0rc1/Scipy0.5.1: > > #generate a sample from a normal distribution: > a=normal(0,1,10000) > from scipy.stats.kde import gaussian_kde as kde > d=kde(a) > d.evaluate (arange(-1,1,0.1)) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/fccoelho/Documents/Papers/YFgame/ > > /usr/lib/python2.4/site-packages/scipy/stats/kde.py in evaluate(self, > points) > 116 diff = self.dataset - points[:,i,newaxis] > 117 tdiff = dot(self.inv_cov, diff) > --> 118 energy = sum(diff*tdiff,axis=0)/2.0 > 119 result[i] = sum(exp(-energy),axis=0) > 120 > > TypeError: sum() takes no keyword arguments > > Are there any fixes available for this? It's been fixed in SVN for a while now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fccoelho at gmail.com Tue Oct 3 14:59:07 2006 From: fccoelho at gmail.com (Flavio Coelho) Date: Tue, 3 Oct 2006 15:59:07 -0300 Subject: [SciPy-user] gaussian_kde broken? In-Reply-To: <4522B12B.70305@gmail.com> References: <4522B12B.70305@gmail.com> Message-ID: Thanks Robert!! Now, which one should I update to SVN numpy or scipy? Thanks again! 
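For reference, the failing snippet from the report above runs cleanly once the fix is in place. A minimal sketch against a current NumPy/SciPy (using numpy.random.normal and scipy.stats.gaussian_kde in place of the bare normal/kde names of the original report):

```python
# Minimal gaussian_kde round-trip, mirroring the bug report above.
import numpy as np
from scipy.stats import gaussian_kde

a = np.random.normal(0, 1, 10000)          # sample from a normal distribution
d = gaussian_kde(a)
vals = d.evaluate(np.arange(-1, 1, 0.1))   # one density estimate per grid point
print(vals.shape)                          # 20 grid points in [-1, 1)
```

On a fixed SciPy this returns finite, positive density values instead of raising the TypeError shown above.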
Flávio On 10/3/06, Robert Kern wrote: > > Flavio Coelho wrote: > > Hi, > > > > I noticed a new bug in the combination Numpy1.0rc1/Scipy0.5.1: > > > > #generate a sample from a normal distribution: > > a=normal(0,1,10000) > > from scipy.stats.kde import gaussian_kde as kde > > d=kde(a) > > d.evaluate (arange(-1,1,0.1)) > > > --------------------------------------------------------------------------- > > exceptions.TypeError Traceback (most > > recent call last) > > > > /home/fccoelho/Documents/Papers/YFgame/ > > > > /usr/lib/python2.4/site-packages/scipy/stats/kde.py in evaluate(self, > > points) > > 116 diff = self.dataset - points[:,i,newaxis] > > 117 tdiff = dot(self.inv_cov, diff) > > --> 118 energy = sum(diff*tdiff,axis=0)/2.0 > > 119 result[i] = sum(exp(-energy),axis=0) > > 120 > > > > TypeError: sum() takes no keyword arguments > > > > Are there any fixes available for this? > > It's been fixed in SVN for a while now. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Flávio Codeço Coelho registered Linux user # 386432 --------------------------- "Laws are like sausages. It's better not to see them being made." Otto von Bismarck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Oct 3 15:23:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 03 Oct 2006 14:23:42 -0500 Subject: [SciPy-user] gaussian_kde broken? In-Reply-To: References: <4522B12B.70305@gmail.com> Message-ID: <4522B8BE.2080705@gmail.com> Flavio Coelho wrote: > Thanks Robert!! > > Now, which one should I update to SVN numpy or scipy?
You probably need to do both, just to keep the versions in sync, but the problem was in scipy, and so is the fix. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Wed Oct 4 03:05:01 2006 From: fredmfp at gmail.com (fred) Date: Wed, 04 Oct 2006 09:05:01 +0200 Subject: [SciPy-user] [OT] displaying pictures on the wiki... Message-ID: <45235D1D.10106@gmail.com> Hi the list, Is it possible to display pictures in an "array" (side-by-side) like this, on the wiki? some text... --------- | x | x | --------- some text... --------- | x | x | --------- some text... ... Cheers, -- http://scipy.org/FredericPetit From david at ar.media.kyoto-u.ac.jp Wed Oct 4 04:34:20 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Oct 2006 17:34:20 +0900 Subject: [SciPy-user] Is anybody working on a toolbox for Kalman filtering and derivative ? In-Reply-To: References: <451D36C7.6080507@ar.media.kyoto-u.ac.jp> Message-ID: <4523720C.1090509@ar.media.kyoto-u.ac.jp> Aaron Hoover wrote: > Hi David, > > Though it's not just a Kalman filtering toolbox, the open source > computer vision library, OpenCV contains implementations of Kalman > filtering. I've had good success using OpenCV's Python wrappers to do > a lot of > 2D vision and image processing very quickly with Python. If nothing > else, you could use their source as a point of comparison for your > implementation. > > Thanks for the tip, but I am mainly interested in problems that are not vision-related, and I wish to be more general.
It could be nice to have a comparison, though, I agree, David From arserlom at gmail.com Wed Oct 4 07:18:48 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Wed, 4 Oct 2006 13:18:48 +0200 Subject: [SciPy-user] Problems building numpy from SVN Message-ID: I've checked out numpy's source code from http://svn.scipy.org/svn/numpy/trunk/ and have tried to build it but it's giving me this error: error: Command "C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -shared build\temp.win32-2.5\Release\numpy\linalg\lapack_litemodule.o "-LC:\Archivos de programa\Atlas" -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.2 -LC:\Python25\libs -LC:\Python25\PCBuild -llapack -llapack -lf77blas -lcblas -latlas -lpython25 -lg2c -lgcc -lmsvcr71 -o build\lib.win32-2.5\numpy\linalg\lapack_lite.pyd" failed with exit status 1 I'm on Windows XP, python 2.5 and I'm using mingw 5.0.3 and these atlas files: "Pentium 4 / SSE2". The command I used to build numpy was: setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst Can anybody help? -------------- next part -------------- An HTML attachment was scrubbed... URL: From skokot at po.opole.pl Wed Oct 4 09:16:01 2006 From: skokot at po.opole.pl (Seweryn Kokot) Date: Wed, 04 Oct 2006 15:16:01 +0200 Subject: [SciPy-user] problem with unicode in matplotlib Message-ID: <874pukgr1q.fsf@poczta.po.opole.pl> Hello, I'm using python-matplotlib from debian unstable (vers. 0.87.5) and the plots don't display Polish fonts properly; instead I get squares on the graphs. A simple example: #!/usr/bin/python # -*- coding: iso-8859-2 -*- from pylab import * plot([1, 2, 3], [1, 4, 2]) xlabel(unicode('Miesiąc maj','iso-8859-2')) ylabel(unicode('??????','iso-8859-2')) show() Any idea? Regards, Seweryn Kokot From djf at pdx.edu Wed Oct 4 10:20:55 2006 From: djf at pdx.edu (Daniel Fish) Date: Wed, 4 Oct 2006 07:20:55 -0700 Subject: [SciPy-user] leastsquare N>M ?
Message-ID: <002701c6e7c0$4e831b70$6401a8c0@Fish> I receive the following error from scipy.optimize.leastsq() (in scipy version 0.5.0.2033, python 2.4 (windows)): # CODE-------------------------------------- 1. from scipy.optimize import * 2. def func(x): 3. return x[0]**2 4. x0=[1,0] 5. (y,msg)=leastsq(func,x0) 6. print y # OUTPUT---------------------------------------------------------------------------------------------------- File "C:/cygwin/Ksim/PythonCode/Database/test.py", line 5, in ? (y,msg)=leastsq(func,x0) File "C:\Python24\lib\site-packages\scipy\optimize\minpack.py", line 257, in leastsq raise errors[info][1], errors[info][0] TypeError: Improper input parameters. 2 1 1 0.000000 0.000000 0.000000 600 100.000000 # ----------------------------------------------------------------------------------------------------------------- But have no problem if I let x0=[1]. Does leastsq not work with more unknowns than equations, or am I doing something wrong? thanks, djoefish -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed Oct 4 11:23:34 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 4 Oct 2006 17:23:34 +0200 Subject: [SciPy-user] leastsquare N>M ? In-Reply-To: <002701c6e7c0$4e831b70$6401a8c0@Fish> References: <002701c6e7c0$4e831b70$6401a8c0@Fish> Message-ID: <20061004152334.GO26999@mentat.za.net> On Wed, Oct 04, 2006 at 07:20:55AM -0700, Daniel Fish wrote: > I receive the following error from scipy.optimize.leastsq() (in scipy version > 0.5.0.2033, python 2.4 (windows)): > > # CODE-------------------------------------- > 1. from scipy.optimize import * > 2. def func(x): > 3. return x[0]**2 > 4. x0=[1,0] > 5. (y,msg)=leastsq(func,x0) > 6. print y > > # > OUTPUT---------------------------------------------------------------------------------------------------- > File "C:/cygwin/Ksim/PythonCode/Database/test.py", line 5, in ? 
> (y,msg)=leastsq(func,x0) > File "C:\Python24\lib\site-packages\scipy\optimize\minpack.py", line 257, in > leastsq raise errors[info][1], errors[info][0] > TypeError: Improper input parameters. > 2 1 1 0.000000 0.000000 0.000000 600 100.000000 > # > ----------------------------------------------------------------------------------------------------------------- > > But have no problem if I let x0=[1]. Does leastsq not work with more unknowns > than equations, or am I doing something wrong? Leastsq is used when you have more equations than unknowns, to provide you with an answer that minimises some norm. In your case, rather use 'fmin', i.e. y,msg = fmin(func,x0) to produce y = 6.22301527786e-61 Regards Stéfan From jstano at interchange.ubc.ca Wed Oct 4 13:02:16 2006 From: jstano at interchange.ubc.ca (joe stano) Date: Wed, 04 Oct 2006 10:02:16 -0700 Subject: [SciPy-user] Scipy Minimizer Message-ID: <0J6M004HNFBS8Y@smtp.interchange.ubc.ca> Hi Does anyone know what the best scipy minimizer to use would be if I need to have upper and lower bounds on my list of decision variables, along with separate constraint functions? I have tried fmin_cobyla and fmin_lbfgsb but no luck. Currently I can solve my function using Excel Solver, but would rather implement in Python. Thanks for any help. joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangxj.uc at gmail.com Wed Oct 4 13:16:49 2006 From: wangxj.uc at gmail.com (Xiaojian Wang) Date: Wed, 4 Oct 2006 10:16:49 -0700 Subject: [SciPy-user] Scipy Minimizer In-Reply-To: <0J6M004HNFBS8Y@smtp.interchange.ubc.ca> References: <0J6M004HNFBS8Y@smtp.interchange.ubc.ca> Message-ID: joe, I used fmin_cobyla and it works on my problem. What is the difference between Excel Solver and fmin_cobyla/fmin_lbfgsb? There is no general best minimizer; you may try different methods on your specific problem.
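Stefan's fmin suggestion above can be sketched as follows. Note that fmin returns only the solution vector unless full_output is requested, and the exact minimiser value differs from run to run:

```python
# Nelder-Mead simplex (fmin) copes with more unknowns than residuals,
# which is where leastsq's MINPACK backend raises "Improper input parameters".
from scipy.optimize import fmin

def func(x):
    return x[0] ** 2      # flat in x[1], so the problem is underdetermined

x0 = [1, 0]
y = fmin(func, x0, disp=False)   # returns just the solution vector
print(y[0])                      # driven toward 0; x[1] stays near its start
```

For the original two-unknown example this converges without error, whereas leastsq needs at least as many residuals as parameters.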
xiaojian On 10/4/06, joe stano wrote: > > Hi > > Does anyone know what the best scipy minimizer to use would be if I need > to have upper and lower bounds on my list of decision variables, along with > separate constraint functions? I have tried fmin_cobyla and fmin_lbfgsb but > no luck. Currently I can solve my function using Excel Solver, but would > rather implement in Python. > > > > Thanks for any help. > > > > joe > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstano at interchange.ubc.ca Wed Oct 4 13:26:34 2006 From: jstano at interchange.ubc.ca (joe stano) Date: Wed, 04 Oct 2006 10:26:34 -0700 Subject: [SciPy-user] Scipy Minimizer Message-ID: <0J6M004J7GGANS@smtp.interchange.ubc.ca> Thanks xiaojian I think solver uses a generalized reduced gradient optimization. My biggest problem is that the scipy optimizer constraint functions require the constraint to return a single number, whereas I need to return a list of numbers, all meeting the constraint (non negative). Maybe it is just my implementation but I am not sure. joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From hetland at tamu.edu Wed Oct 4 15:01:46 2006 From: hetland at tamu.edu (Rob Hetland) Date: Wed, 4 Oct 2006 14:01:46 -0500 Subject: [SciPy-user] matplotlib on osx In-Reply-To: References: <20061001023457.GA14026@clipper.ens.fr> <184E8FF9-36DC-413C-A565-EA904256F79B@arachnedesign.net> Message-ID: <3B22122A-7D99-4115-A5B9-098553131E6C@tamu.edu> OK. I would like to change my answer. I was just installing MPL on a student's new MacBook. He had installed Python 2.5 (only), and we tried to get the svn version of MPL working after a successful install of numpy. MPL segfaulted. 
Then we installed python 2.4.3, and the numpy and mpl packages available from macpython. These worked -- and also fixed the mpl that was installed in python 2.5! I didn't have time to dig around to see what was actually changed, but I thought this was strange, and interesting enough to report, -Rob On Oct 3, 2006, at 8:56 AM, Rob Hetland wrote: > > I also got mpl to work using the i-Installer to install Freetype2 and > libpng (they are both available as universal binaries). > > I have installed on my macbook pro and a number of student macbooks > with no issues using this method, > > -Rob > > On Oct 2, 2006, at 8:40 AM, Steve Lianoglou wrote: > >> Hi John, >> >>> Has anyone gotten matplotlib running on OSX recently? I tried >>> upgrading to 10.4.8, and building Python 2.5, numpy, scipy, libpng, >>> and freetype from source. Everything went fine, even matplotlib >>> installed without an error. But when I import matplotlib I get >> >> Mine seems to be working well. >> >> I've installed python and the various dependencies of matplotplib via >> MacPorts. >> >> The dependencies for py-matplotlib are: >> python24, freetype, libpng, antigraingeometry, py-numarray, py- >> numeric, py-dateutil, py-tz >> >> I then reinstalled matplotlib myself via recent svn checkouts since I >> wanted it to be working with the latest svn numpy as well. So I have >> some extra stuff lying around since the ports installations depend on >> them, but it all seems to work fine for me. >> >> If you want to use python 2.5, however, MacPorts won't do you any >> good. >> >> HTH, >> -steve >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > ---- > Rob Hetland, Associate Professor > Dept. 
of Oceanography, Texas A&M University > http://pong.tamu.edu/~rob > phone: 979-458-0096, fax: 979-845-6331 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From robert.kern at gmail.com Wed Oct 4 15:48:47 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 04 Oct 2006 14:48:47 -0500 Subject: [SciPy-user] Scipy Minimizer In-Reply-To: <0J6M004J7GGANS@smtp.interchange.ubc.ca> References: <0J6M004J7GGANS@smtp.interchange.ubc.ca> Message-ID: <4524101F.2010304@gmail.com> joe stano wrote: > Thanks xiaojian > > I think solver uses a generalized reduced gradient optimization. My > biggest problem is that the scipy optimizer constraint functions require > the constraint to return a single number, whereas I need to return a > list of numbers, all meeting the constraint (non negative). Maybe it is > just my implementation but I am not sure. fmin_cobyla takes a list of constraint functions. Take your single function that returns multiple numbers and make separate functions each returning one number. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From davidlinke at tiscali.de Wed Oct 4 19:38:11 2006 From: davidlinke at tiscali.de (David Linke) Date: Thu, 05 Oct 2006 01:38:11 +0200 Subject: [SciPy-user] [OT] displaying pictures on the wiki... In-Reply-To: <45235D1D.10106@gmail.com> References: <45235D1D.10106@gmail.com> Message-ID: <452445E3.4090500@tiscali.de> fred wrote: > Hi the list, > > Is it possible to displays pictures in a "array" (side-by-side) as this, > on the wiki ? > > some text... > > --------- > | x | x | > --------- > (...) 
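Robert's fmin_cobyla advice above (one scalar constraint function per condition, each non-negative when satisfied) can be sketched like this; the quadratic objective and the 1.5 upper bound are made up for illustration:

```python
# Sketch of splitting bounds into separate scalar constraint functions
# for fmin_cobyla, as Robert suggests.
from scipy.optimize import fmin_cobyla

def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

# bounds 0 <= x[i] <= 1.5 expressed as one scalar function per condition
cons = [
    lambda x: x[0],            # x[0] >= 0
    lambda x: x[1],            # x[1] >= 0
    lambda x: 1.5 - x[0],      # x[0] <= 1.5
    lambda x: 1.5 - x[1],      # x[1] <= 1.5
]

xopt = fmin_cobyla(objective, [0.5, 0.5], cons, rhoend=1e-7)
print(xopt)   # pushed onto the x[0] = 1.5 bound, with x[1] near 1
```

The unconstrained minimum sits at (2, 1), so the solver ends up on the upper bound for x[0].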
Hi Fred, see my change on your page http://scipy.org/FredericPetit If anybody else is interested in the answer: To solve this, use a table for the side-by-side arrangement and the ImageLink-macro for adjusting the size. ImageLink is installed in the scipy wiki (see http://moinmoin.wikiwikiweb.de/MacroMarket/ImageLink for options of the macro). Regards, David From fredmfp at gmail.com Thu Oct 5 02:37:57 2006 From: fredmfp at gmail.com (fred) Date: Thu, 05 Oct 2006 08:37:57 +0200 Subject: [SciPy-user] [OT] displaying pictures on the wiki... In-Reply-To: <452445E3.4090500@tiscali.de> References: <45235D1D.10106@gmail.com> <452445E3.4090500@tiscali.de> Message-ID: <4524A845.1070300@gmail.com> David Linke a écrit : > >Hi Fred, > >see my change on your page http://scipy.org/FredericPetit > > Eh, that's great ! :-) Thanks a lot. -- http://scipy.org/FredericPetit From mjakubik at ta3.sk Thu Oct 5 05:20:59 2006 From: mjakubik at ta3.sk (Marian Jakubik) Date: Thu, 5 Oct 2006 11:20:59 +0200 Subject: [SciPy-user] SuSE 10.1 installation problem Message-ID: <20061005112059.7513cfa5@jakubik.ta3.sk> Hi everyone, I'm a SciPy newbie and have a problem installing it on SuSE 10.1. I installed numpy-1.0rc1, then blas-3.0-926.i586.rpm & lapack-3.0-926.i586.rpm and then scipy-0.5.1... I tried to run this very simple script: ... from scipy import * p = poly1d([3,4,5]) print p ... and I've got this error: Traceback (most recent call last): File "1.py", line 1, in ? from scipy import * File "/usr/local/lib/python2.4/site-packages/scipy/signal/__init__.py", line 9, in ? from bsplines import * File "/usr/local/lib/python2.4/site-packages/scipy/signal/bsplines.py", line 3, in ? import scipy.special File "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 67, in ?
from scipy.linalg import eig File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? from lapack import get_lapack_funcs File "/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename I think the problem is in "from scipy import *" statement and is linked to blas & lapack... Can anyone help me, pls? Best regards, Marian Jakubik From ckkart at hoc.net Thu Oct 5 05:21:49 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 05 Oct 2006 18:21:49 +0900 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <20061005112059.7513cfa5@jakubik.ta3.sk> References: <20061005112059.7513cfa5@jakubik.ta3.sk> Message-ID: <4524CEAD.2020804@hoc.net> Marian Jakubik wrote: > Hi everyone, > > I'm a SciPy newbie and have problem with installing it on SuSE 10.1. > > I installed numpy-1.0rc1, then blas-3.0-926.i586.rpm & > lapack-3.0-926.i586.rpm and then scipy-0.5.1... > > I tried to run this very simple script: > ... > from scipy import * > p = poly1d([3,4,5]) > print p > ... > > and I've got this error: > > Traceback (most recent call last): > File "1.py", line 1, in ? > from scipy import * > File > "/usr/local/lib/python2.4/site-packages/scipy/signal/__init__.py", line > 9, in ? from bsplines import * File > "/usr/local/lib/python2.4/site-packages/scipy/signal/bsplines.py", line > 3, in ? import scipy.special File > "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", > line 10, in ? import orthogonal File > "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", > line 67, in ? from scipy.linalg import eig File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line > 8, in ? 
from basic import * File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line > 17, in ? from lapack import get_lapack_funcs File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py", line > 17, in ? from scipy.linalg import flapack > ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > I think the problem is in "from scipy import *" statement and is linked to blas & lapack... > SuSE blas and lapack rpms are buggy. You have to build them yourself. Christian From mjakubik at ta3.sk Thu Oct 5 06:21:33 2006 From: mjakubik at ta3.sk (Marian Jakubik) Date: Thu, 5 Oct 2006 12:21:33 +0200 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <4524CEAD.2020804@hoc.net> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> Message-ID: <20061005122133.6319c3da@jakubik.ta3.sk> Could you help me with "building them myself"??? I tried to do this, but an error occurred while building from the source code. Could you post me some hints for building it? Best regards, Marian Jakubik On Thu, 05 Oct 2006 18:21:49 +0900 Christian Kristukat wrote: > Marian Jakubik wrote: > > Hi everyone, > > > > I'm a SciPy newbie and have problem with installing it on SuSE 10.1. > > > > I installed numpy-1.0rc1, then blas-3.0-926.i586.rpm & > > lapack-3.0-926.i586.rpm and then scipy-0.5.1... > > > > I tried to run this very simple script: > > ... > > from scipy import * > > p = poly1d([3,4,5]) > > print p > > ... > > > > and I've got this error: > > > > Traceback (most recent call last): > > File "1.py", line 1, in ? > > from scipy import * > > File > > "/usr/local/lib/python2.4/site-packages/scipy/signal/__init__.py", line > > 9, in ? from bsplines import * File > > "/usr/local/lib/python2.4/site-packages/scipy/signal/bsplines.py", line > > 3, in ? import scipy.special File > > "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", > > line 10, in ?
import orthogonal File > > "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", > > line 67, in ? from scipy.linalg import eig File > > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line > > 8, in ? from basic import * File > > "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line > > 17, in ? from lapack import get_lapack_funcs File > > "/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py", line > > 17, in ? from scipy.linalg import flapack > > ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > > > I think the problem is in "from scipy import *" statement and is linked to blas & lapack... > > > > SuSE blas and lapack rpms are buggy. > You have to build them yourself. > Christian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From ckkart at hoc.net Thu Oct 5 06:28:25 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 05 Oct 2006 19:28:25 +0900 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <20061005122133.6319c3da@jakubik.ta3.sk> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> <20061005122133.6319c3da@jakubik.ta3.sk> Message-ID: <4524DE49.9040105@hoc.net> Marian Jakubik wrote: > Could you help me with "building them myself"??? > > I tried to do this, but an error occured during the building from source codes. > > Could you post me some hints for building it? > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 Following these instructions worked for me on SuSE 10.1 using gcc4/gfortran to build lapack and blas. I remember that the lapack tests failed but the lib was built anyway. 
Christian From nwagner at iam.uni-stuttgart.de Thu Oct 5 06:44:25 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 05 Oct 2006 12:44:25 +0200 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <4524DE49.9040105@hoc.net> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> <20061005122133.6319c3da@jakubik.ta3.sk> <4524DE49.9040105@hoc.net> Message-ID: <4524E209.404@iam.uni-stuttgart.de> Christian Kristukat wrote: > Marian Jakubik wrote: > >> Could you help me with "building them myself"??? >> >> I tried to do this, but an error occured during the building from source codes. >> >> Could you post me some hints for building it? >> >> > > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > Following these instructions worked for me on SuSE 10.1 using gcc4/gfortran to > build lapack and blas. I remember that the lapack tests failed but the lib was > built anyway. > > Christian > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Is gfortran really stable/reliable ? Nils From mjakubik at ta3.sk Thu Oct 5 07:09:15 2006 From: mjakubik at ta3.sk (Marian Jakubik) Date: Thu, 5 Oct 2006 13:09:15 +0200 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <4524E209.404@iam.uni-stuttgart.de> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> <20061005122133.6319c3da@jakubik.ta3.sk> <4524DE49.9040105@hoc.net> <4524E209.404@iam.uni-stuttgart.de> Message-ID: <20061005130915.02134612@jakubik.ta3.sk> Christian, thanks a lot.. It works...!!! Marian On Thu, 05 Oct 2006 12:44:25 +0200 Nils Wagner wrote: > Christian Kristukat wrote: > > Marian Jakubik wrote: > > > >> Could you help me with "building them myself"??? > >> > >> I tried to do this, but an error occured during the building from source codes. 
> >> > >> Could you post me some hints for building it? > >> > >> > > > > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > > > Following these instructions worked for me on SuSE 10.1 using gcc4/gfortran to > > build lapack and blas. I remember that the lapack tests failed but the lib was > > built anyway. > > > > Christian > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Is gfortran really stable/reliable ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jbednar at inf.ed.ac.uk Thu Oct 5 09:37:18 2006 From: jbednar at inf.ed.ac.uk (James A. Bednar) Date: Thu, 5 Oct 2006 14:37:18 +0100 Subject: [SciPy-user] weave broken with GCC 4.1.x Message-ID: <17701.2702.582663.583713@lodestar.inf.ed.ac.uk> Hi all, I've looked in the mailing-list archives and Trac for an answer to this question, but couldn't find anything relevant: - Is there a standalone version of weave (not requiring the rest of SciPy) that works with GCC 4.1.x? We've been using weave happily for our Topographica simulator (www.topographica.org) for a couple of years now, and it's been very helpful. However, we are not currently interested in using the rest of SciPy, because SciPy has so many external dependencies on libraries that we would not be able to supply with our code. Instead, we just installed Scipy_core-0.3.0_108.1820, which provides weave by itself. This has been working fine with GCC 3.x, 4.0.3, and 4.0.4. However, this version of weave doesn't work anymore with GCC 4.1.1 -- apparently GCC has added even more checks that find errors in the weave code, such as: /disk/scratch/topographica-0.9.1/lib/python2.4/site-packages/weave/scxx/object.h:600: error: extra qualification 'py::object::'
on member I tried upgrading Scipy_core to 0.3.2, the last release where I could find a separate weave distribution, but now I just get coredumps whenever weave tries to compile and run something, which is even worse. I could certainly generate extensive bug reports for these two cases, but I suspect that weave has since been updated in many ways, so that such an effort might not be worthwhile. So I'd like to use the latest weave distribution, but there doesn't seem to be a separate package anymore. I did try using the weave subdirectory of the current package, as suggested in some of the documentation, but that results in 'ImportError: No module named numpy.distutils.core'. So it doesn't look like weave can be used without the rest of SciPy right now. (As a side note, I'm reluctant to replace Numeric with NumPy in my simulator, because the restricted license for the documentation means that I can't provide documentation of the array operations for my own simulator!). So, is it possible to use the latest version of weave without the rest of SciPy? weave seems very useful to the entire Python community, regardless of whether they are doing scientific computing, so it seems tragic if its use is limited because it cannot be extricated from unrelated requirements for NumPy, LAPACK, ATLAS, etc. Alternatively, is there some older version of a standalone weave that at least works with GCC 4.1.x? That's all we really need for right now; without it our simulator won't work on recent Linux distributions! Thanks, Jim From oliphant at ee.byu.edu Thu Oct 5 09:41:41 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 05 Oct 2006 07:41:41 -0600 Subject: [SciPy-user] weave broken with GCC 4.1.x In-Reply-To: <17701.2702.582663.583713@lodestar.inf.ed.ac.uk> References: <17701.2702.582663.583713@lodestar.inf.ed.ac.uk> Message-ID: <45250B95.1020908@ee.byu.edu> James A.
Bednar wrote: >Hi all, > >I've looked in the mailing-list archives and Trac for an answer to >this question, but couldn't find anything relevant: > >- Is there a standalone version of weave (not requiring the rest of > SciPy) that works with GCC 4.1.x? > > Just check weave out from svn by itself and install it using python setup.py install This should work. -Travis From jbednar at inf.ed.ac.uk Thu Oct 5 09:48:42 2006 From: jbednar at inf.ed.ac.uk (James A. Bednar) Date: Thu, 5 Oct 2006 14:48:42 +0100 Subject: [SciPy-user] weave broken with GCC 4.1.x In-Reply-To: <45250B95.1020908@ee.byu.edu> References: <17701.2702.582663.583713@lodestar.inf.ed.ac.uk> <45250B95.1020908@ee.byu.edu> Message-ID: <17701.3386.777308.640011@lodestar.inf.ed.ac.uk> | From: Travis Oliphant | Date: Oct 5 07:41:41 2006 -0600 | | James A. Bednar wrote: | | >Hi all, | > | >I've looked in the mailing-list archives and Trac for an answer to | >this question, but couldn't find anything relevant: | > | >- Is there a standalone version of weave (not requiring the rest of | > SciPy) that works with GCC 4.1.x? | > | > | | Just check weave out from svn by itself and install it using | | python setup.py install | | This should work. Thanks for the quick reply! However, that won't work unless you have numpy installed already: $ svn co http://svn.scipy.org/svn/scipy/trunk/Lib/weave weave A weave/converters.py ... A weave/examples/vq.py Checked out revision 2241. $ cd weave/ $/weave> python setup.py install Traceback (most recent call last): File "setup.py", line 14, in ?
from numpy.distutils.core import setup ImportError: No module named numpy.distutils.core $ Jim From ckkart at hoc.net Thu Oct 5 11:03:33 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 06 Oct 2006 00:03:33 +0900 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <4524E209.404@iam.uni-stuttgart.de> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> <20061005122133.6319c3da@jakubik.ta3.sk> <4524DE49.9040105@hoc.net> <4524E209.404@iam.uni-stuttgart.de> Message-ID: <45251EC5.3070107@hoc.net> Nils Wagner wrote: >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > Is gfortran really stable/reliable ? I don't know. I once experienced some weirdness with gfortran 4.0.x but none so far with 4.1.0, which ships with SuSE 10.1. Christian From coughlan at ski.org Thu Oct 5 16:34:35 2006 From: coughlan at ski.org (James Coughlan) Date: Thu, 05 Oct 2006 13:34:35 -0700 Subject: [SciPy-user] can't get weave to run Message-ID: <45256C5B.6080803@ski.org> When I try to run a simple weave example: from scipy import weave a=1 weave.inline('printf("%d",a);',['a']) I get this error message: Traceback (most recent call last): File "", line 1, in ? 
File "C:\PYTHON24\lib\site-packages\scipy\weave\inline_tools.py", line 325, in inline results = attempt_function_call(code,local_dict,global_dict) File "C:\PYTHON24\lib\site-packages\scipy\weave\inline_tools.py", line 375, in attempt_function_call function_list = function_catalog.get_functions(code,module_dir) File "C:\PYTHON24\lib\site-packages\scipy\weave\catalog.py", line 593, in get_functions function_list = self.get_cataloged_functions(code) File "C:\PYTHON24\lib\site-packages\scipy\weave\catalog.py", line 506, in get_cataloged_functions cat = get_catalog(path,mode) File "C:\PYTHON24\lib\site-packages\scipy\weave\catalog.py", line 275, in get_catalog if (dumb and os.path.exists(catalog_file+'.dat')) \ File "C:\Python24\lib\ntpath.py", line 256, in exists st = os.stat(path) TypeError: coercing to Unicode: need string or buffer, NoneType found I am running Python Enthought Edition--Python 2.4.3 for Windows Enhanced Python Distribution, which includes SciPy version 0.5.0.2033. (Weave used to work on my computer, back when I was using Python 2.3.) Does anybody have some suggestions? Thanks, James PS When I run weave.test() I get some successes and some error messages: Found 16 tests for scipy.weave.slice_handler Found 0 tests for scipy.weave.c_spec Found 2 tests for scipy.weave.blitz_tools building extensions here: c:\docume~1\james\locals~1\temp\James\python24_compiled\m0 Found 1 tests for scipy.weave.ext_tools Found 74 tests for scipy.weave.size_check Found 9 tests for scipy.weave.build_tools Found 0 tests for scipy.weave.inline_tools Found 1 tests for scipy.weave.ast_tools Found 0 tests for scipy.weave.wx_spec Found 3 tests for scipy.weave.standard_array_spec Found 26 tests for scipy.weave.catalog Found 0 tests for __main__ .............................................................................................warning: specified build_dir '_bad_path_' does not exist or is not writable.
Trying default locations .....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ................................removing 'c:\docume~1\james\locals~1\temp\tmpgr3vnecat_test' (and everything under it) error removing c:\docume~1\james\locals~1\temp\tmpgr3vnecat_test: c:\docume~1\james\locals~1\temp\tmpgr3vnecat_test\win3224compiled_catalog: Permission denied error removing c:\docume~1\james\locals~1\temp\tmpgr3vnecat_test: c:\docume~1\james\locals~1\temp\tmpgr3vnecat_test: Directory not empty .removing 'c:\docume~1\james\locals~1\temp\tmpfw6kvscat_test' (and everything under it) . ---------------------------------------------------------------------- Ran 132 tests in 3.264s OK From ckkart at hoc.net Fri Oct 6 00:39:01 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 06 Oct 2006 13:39:01 +0900 Subject: [SciPy-user] SuSE 10.1 installation problem In-Reply-To: <20061005130915.02134612@jakubik.ta3.sk> References: <20061005112059.7513cfa5@jakubik.ta3.sk> <4524CEAD.2020804@hoc.net> <20061005122133.6319c3da@jakubik.ta3.sk> <4524DE49.9040105@hoc.net> <4524E209.404@iam.uni-stuttgart.de> <20061005130915.02134612@jakubik.ta3.sk> Message-ID: <4525DDE5.8010209@hoc.net> Marian Jakubik wrote: > Christian, > > thanks a lot.. It works...!!! Great. Just in case: I've put latest numpy/scipy rpms, built on SuSE10.1, Intel Celeron with blas/lapack here: http://rosa.physik.tu-berlin.de/~semmel/suse10.1 Christian From willemjagter at gmail.com Fri Oct 6 03:22:22 2006 From: willemjagter at gmail.com (William Hunter) Date: Fri, 6 Oct 2006 09:22:22 +0200 Subject: [SciPy-user] matplotlib: pylab doesn't import (backends problem) Message-ID: <8b3894bc0610060022u4584a855q5798c08b6473b20d@mail.gmail.com> These are installed: scipy.__version__ = '0.5.1' numpy.__version__ = '1.0rc1' matplotlib.__version__ = '0.87.6' My options in setup.py for matplotlib are set to build Agg and GTKAgg ( = 1), the others all = 0.
If I leave them at 'auto' as per default, matplotlib doesn't build (?). I use Ubuntu Dapper - AMD64. Here's what happens if I try to import pylab (in a new terminal): In [1]: import pylab --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/william/ /usr/lib/python2.4/site-packages/pylab.py ----> 1 from matplotlib.pylab import * /usr/lib/python2.4/site-packages/matplotlib/pylab.py 200 201 from axes import Axes, PolarAxes --> 202 import backends 203 from cbook import flatten, is_string_like, exception_to_str, popd, \ 204 silent_list, iterable, enumerate /usr/lib/python2.4/site-packages/matplotlib/backends/__init__.py 52 # a hack to keep old versions of ipython working with mpl after bug 53 # fix #1209354 54 if 'IPython.Shell' in sys.modules: ---> 55 new_figure_manager, draw_if_interactive, show = pylab_setup() 56 /usr/lib/python2.4/site-packages/matplotlib/backends/__init__.py in pylab_setup() 21 backend_name = 'backend_'+backend.lower() 22 backend_mod = __import__('matplotlib.backends.'+backend_name, ---> 23 globals(),locals(),[backend_name]) 24 25 # Things we pull in from all backends /usr/lib/python2.4/site-packages/matplotlib/backends/backend_gtkagg.py 8 from matplotlib.figure import Figure 9 from backend_agg import FigureCanvasAgg ---> 10 from backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\ 11 show, draw_if_interactive,\ 12 error_msg_gtk, NavigationToolbar, PIXELS_PER_INCH, backend_version, \ /usr/lib/python2.4/site-packages/matplotlib/backends/backend_gtk.py 19 from matplotlib.backend_bases import RendererBase, GraphicsContextBase, \ 20 FigureManagerBase, FigureCanvasBase, NavigationToolbar2, cursors ---> 21 from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK 22 from matplotlib.cbook import is_string_like, enumerate 23 from matplotlib.colors import colorConverter /usr/lib/python2.4/site-packages/matplotlib/backends/backend_gdk.py 33 from 
matplotlib.backends._nc_backend_gdk import pixbuf_get_pixels_array 34 else: ---> 35 from matplotlib.backends._ns_backend_gdk import pixbuf_get_pixels_array 36 37 ImportError: No module named _ns_backend_gdk I had matplotlib 0.87.4 running like a brand new Mercedes-Benz previously, with older numpy/scipy. Any ideas how to fix this? -- wh From lorrmann at physik.uni-wuerzburg.de Fri Oct 6 03:47:09 2006 From: lorrmann at physik.uni-wuerzburg.de (Volker Lorrmann) Date: Fri, 06 Oct 2006 09:47:09 +0200 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: References: Message-ID: <452609FD.4080702@physik.uni-wuerzburg.de> Hi list, I'm going to write a simulation, in which it would be necessary to solve a boundary value ODE. My question is, would it be possible to solve such ODEs with any solver in scipy? Otherwise I have to choose another language for this simulation (scilab,rlab). Many thanks Volker -- ----------------------- Volker Lorrmann Universität Würzburg Experimentelle Physik 6 Am Hubland 97074 Wuerzburg Germany Tel: +49 931 888 5770 volker.lorrmann at physik.uni-wuerzburg.de From hasslerjc at adelphia.net Fri Oct 6 09:32:36 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Fri, 06 Oct 2006 09:32:36 -0400 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: <452609FD.4080702@physik.uni-wuerzburg.de> References: <452609FD.4080702@physik.uni-wuerzburg.de> Message-ID: <45265AF4.5030200@adelphia.net> I don't think that there's a function specifically for boundary value ode problems in SciPy, but it is pretty simple to combine an ode solver with a zero finder to use the "shooting method." I'm not familiar with how rlab or Matlab does it, but the Scilab "bvode" uses a finite difference method. If the problem isn't too complex, the shooting method is easy to set up, and works as well as the finite difference method. (In my vast experience ... well, ok, two applications ...
the shooting method was more forgiving and less brittle than the finite difference method.) What sort of bvp do you have? john Volker Lorrmann wrote: > Hi list, > > I'm going to write a simulation, in which it would be necessary to > solve a boundary value ODE. > My question is, would it be possible to solve such ODEs with any solver > in scipy? Otherwise I have to choose another language for this > simulation (scilab,rlab). > > Many thanks > > Volker > > From jdhunter at ace.bsd.uchicago.edu Fri Oct 6 09:56:34 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 06 Oct 2006 08:56:34 -0500 Subject: [SciPy-user] matplotlib: pylab doesn't import (backends problem) In-Reply-To: <8b3894bc0610060022u4584a855q5798c08b6473b20d@mail.gmail.com> ("William Hunter"'s message of "Fri, 6 Oct 2006 09:22:22 +0200") References: <8b3894bc0610060022u4584a855q5798c08b6473b20d@mail.gmail.com> Message-ID: <87r6xl1ral.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "William" == William Hunter writes: William> These are installed: scipy.__version__ = '0.5.1' William> numpy.__version__ = '1.0rc1' matplotlib.__version__ = William> '0.87.6' William> My options in setup.py for matplotlib is set to build Agg William> and GTKAgg ( = 1), the others all = 0. If I leave them at William> 'auto' as per default, matplotlib doesn't build (?). I William> use Ubuntu Dapper - AMD64. William> Here's what happens if I try to import pylab (in a new William> terminal): William> /usr/lib/python2.4/site-packages/matplotlib/backends/backend_gdk.py William> 33 from matplotlib.backends._nc_backend_gdk import William> pixbuf_get_pixels_array 34 else: ---> 35 from matplotlib.backends._ns_backend_gdk import William> pixbuf_get_pixels_array 36 37 William> ImportError: No module named _ns_backend_gdk This question is better addressed on matplotlib-users.... It looks like either you do not have numpy or pygtk properly installed.
Note also you need the gtk-devel packages to compile the gtk extensions, as described at http://matplotlib.sourceforge.net/installing.html As a quick check for numpy and pygtk, you can do >>> import numpy >>> import gtk but you'll still need the gtk-devel packages. If it still doesn't work, > rm -rf build > python setup.py build >& build.out and post output to the matplotlib-devel list. JDH From willemjagter at gmail.com Fri Oct 6 10:03:57 2006 From: willemjagter at gmail.com (William Hunter) Date: Fri, 6 Oct 2006 16:03:57 +0200 Subject: [SciPy-user] matplotlib: pylab doesn't import (backends problem) In-Reply-To: <87r6xl1ral.fsf@peds-pc311.bsd.uchicago.edu> References: <8b3894bc0610060022u4584a855q5798c08b6473b20d@mail.gmail.com> <87r6xl1ral.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <8b3894bc0610060703m2d3c1136g60c918644c706685@mail.gmail.com> > This question is better addressed on matplotlib-users.... OK, I'll post there, thanks. From peridot.faceted at gmail.com Fri Oct 6 11:12:55 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Fri, 6 Oct 2006 11:12:55 -0400 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: <45265AF4.5030200@adelphia.net> References: <452609FD.4080702@physik.uni-wuerzburg.de> <45265AF4.5030200@adelphia.net> Message-ID: On 06/10/06, John Hassler wrote: > I don't think that there's a function specifically for boundary value > ode problems in SciPy, but it is pretty simple to combine an ode solver > with a zero finder to use the "shooting method." I'm not familiar with > how rlab or Matlab does it, but the Scilab "bvode" uses a finite > difference method. If the problem isn't too complex, the shooting > method is easy to set up, and works as well as the finite difference > method. (In my vast experience ... well, ok, two applications ... the > shooting method was more forgiving and less brittle than the finite > difference method.) > > What sort of bvp do you have?
> > john It seems to me it would be useful for scipy to have an "odetools" package, providing convenience features on top of our existing ODE solvers (as well as a better fortran ODE solver, namely lsodar, but that's beside the point). Features could include:

* Implementation of the shooting method
* class ODESolution, which provides an interpolated function object
* An implementation (necessarily slow and limited) of the finite difference method
* Tools for working with linear ODEs (generate a complete set of solutions, say)
* Tools for solving eigenvalue problems

Any other suggestions? I've started thinking about how to do this, and written some prototype code, but it would be a moderately big project. It should probably be mostly in python, because I imagine people will need to extend and modify all the classes for their particular problems. A. M. Archibald From rclewley at cam.cornell.edu Fri Oct 6 12:19:18 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 6 Oct 2006 12:19:18 -0400 (EDT) Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: References: <452609FD.4080702@physik.uni-wuerzburg.de> <45265AF4.5030200@adelphia.net> Message-ID: > It seems to me it would be useful for scipy to have an "odetools" > package, providing convenience features on top of our existing ODE > solvers (as well as a better fortran ODE solver, namely lsodar, but > that's beside the point). Sure, and I've said before that the SciPy powers-that-be are welcome to use any parts of our PyDSTool package as the basis of such a package. e.g., our SWIG-based interfaces to two great integrators (Dopri and Radau). Our interface includes events at the C-level and external inputs interpolated from arrays. Our ODEGenerator classes provide a lot of convenience features on top of the solvers (including the existing Scipy VODE solver). > > Features could include: > * Implementation of the shooting method Yes, that would be useful.
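The shooting method mentioned in the list above (an IVP solver combined with a zero finder) is easy to sketch in a few lines. The following illustration uses a made-up test problem — u'' = -u with u(0) = 0, u(pi/2) = 1, whose exact solution is sin(x), so the unknown initial slope is 1 — and writes out the RK4 stepper and bisection in plain Python, so no particular solver API is assumed:

```python
# Shooting-method sketch for the hypothetical BVP u'' = -u,
# u(0) = 0, u(pi/2) = 1 (exact solution u = sin(x), so u'(0) = 1).
import math

def rk4_final_u(slope, n=200):
    """Integrate the IVP u' = du, du' = -u from x = 0 to pi/2 with
    u(0) = 0 and u'(0) = slope, using classical RK4; return u(pi/2)."""
    h = (math.pi / 2) / n
    u, du = 0.0, slope
    f = lambda u, du: (du, -u)  # right-hand side of the first-order system
    for _ in range(n):
        k1 = f(u, du)
        k2 = f(u + h/2 * k1[0], du + h/2 * k1[1])
        k3 = f(u + h/2 * k2[0], du + h/2 * k2[1])
        k4 = f(u + h * k3[0], du + h * k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        du += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u

def shoot(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    """Bisect on the unknown initial slope until the integrated solution
    hits the right-hand boundary value -- the 'zero finder' part."""
    flo = rk4_final_u(lo) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = rk4_final_u(mid) - target
        if flo * fmid <= 0:   # sign change: root lies in [lo, mid]
            hi = mid
        else:                 # root lies in [mid, hi]
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(shoot())  # close to 1.0, the exact initial slope of u = sin(x)
```

In practice one would replace the hand-rolled RK4 and bisection with a library integrator and root finder; the structure (residual of the far boundary as a function of the guessed initial condition) stays the same.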
> * class ODESolution, which provides an interpolated function object That's exactly what our Variable and Trajectory classes do, built over scipy's interp1d. > * An implementation (necessarily slow and limited) of the finite > difference method Our diff function provides that with various options for functions from R^N -> R^M. Here's the signature and docstring from PyDSTool/common.py:

def diff(func, x0, vars=None, axes=None, dir=1, dx=None):
    """Numerical 1st derivative of R^N -> R^M function about x0 by
    finite differences.

    dir=1 uses finite forward difference.
    dir=-1 uses finite backward difference.
    List valued dx rescales finite differencing in each axis separately.
    vars argument specifies which elements of x0 are to be treated as
    variables for the purposes of taking the Jacobian.
    If axes argument is unused or set to be all axes, the Jacobian of
    the function evaluated at x0 with respect to the variables is
    returned, otherwise a sub-matrix of it is returned.
    If dx is not given an appropriate step size is chosen
    (proportional to sqrt(machine precision)).
    """

> * Tools for working with linear ODEs (generate a complete set of solutions, say) > * Tools for solving eigenvalue problems > > Any other suggestions? A set of phase plane tools, like finding nullclines, fixed points, stability.... I have a whole bunch of code working for this that will end up in the next release of PyDSTool (for Scipy 0.5.1, finally) and I'll be adding a few additional features like calculating stability of fixed points. The port to Numpy and new Scipy will be done in the next few weeks so code will be available for people to clean up to their liking for use in SciPy. -Rob From peridot.faceted at gmail.com Fri Oct 6 16:20:55 2006 From: peridot.faceted at gmail.com (A. M.
Archibald) Date: Fri, 6 Oct 2006 16:20:55 -0400 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: References: <452609FD.4080702@physik.uni-wuerzburg.de> <45265AF4.5030200@adelphia.net> Message-ID: On 06/10/06, Robert Clewley wrote: > > It seems to me it would be useful for scipy to have an "odetools" > > package, providing convenience features on top of our existing ODE > > solvers (as well as a better fortran ODE solver, namely lsodar, but > > that's beside the point). > > Sure, and I've said before that the SciPy powers-that-be are welcome to > use any parts of our PyDSTool package as the basis of such a package. > e.g., our SWIG-based interfaces to two great integrators (Dopri and > Radau). Our interface includes events at the C-level and external inputs > interpolated from arrays. Our ODEGenerator classes provide a lot of > convenience features on top of the solvers (including the existing Scipy > VODE solver). I confess I hadn't really examined PyDSTool very closely; it sounds like it addresses exactly what I was suggesting. Perhaps there's no need for such a scipy package, if people can just use PyDSTool. Do you think there is any point in moving some generic functionality into scipy? I'd at least like to have an ODE solver that can handle constraints on the dependent variable. > > * class ODESolution, which provides an interpolated function object > > That's exactly what our Variable and Trajectory classes do, built over > scipy's interp1d. Wait, interp1d is linear-interpolation-only! I had in mind a higher-order spline that took into account not the bending energy at each knot but the known derivatives at each point (and possibly the order used internally by the integrator) to obtain a solution that followed the real solution even more closely at little extra cost. 
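The interpolation being described here is essentially piecewise cubic Hermite interpolation: between mesh points, use the slopes the integrator already evaluated in addition to the solution values. A minimal self-contained sketch, with made-up data (u = sin and its derivative cos on a coarse mesh) standing in for an integrator's stored output:

```python
# Derivative-aware (cubic Hermite) interpolation of an ODE solution,
# illustrated on u = sin(x); an integrator would already have the
# derivative values u' = cos(x) at its mesh points for free.
import math

def hermite_eval(xs, ys, ds, x):
    """Evaluate the piecewise cubic Hermite interpolant through the
    points (xs[i], ys[i]) with prescribed slopes ds[i] at x."""
    # locate the interval [xs[i], xs[i+1]] containing x
    i = max(0, min(len(xs) - 2, sum(1 for p in xs if p <= x) - 1))
    h = xs[i + 1] - xs[i]
    t = (x - xs[i]) / h
    # the four cubic Hermite basis polynomials
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return (h00 * ys[i] + h10 * h * ds[i]
            + h01 * ys[i + 1] + h11 * h * ds[i + 1])

# hypothetical coarse integrator mesh: values and known derivatives
xs = [0.1 * k for k in range(11)]
ys = [math.sin(x) for x in xs]
ds = [math.cos(x) for x in xs]

x = 0.55  # a point between mesh nodes
lin = ys[5] + (ys[6] - ys[5]) * (x - xs[5]) / (xs[6] - xs[5])
err_hermite = abs(hermite_eval(xs, ys, ds, x) - math.sin(x))
err_linear = abs(lin - math.sin(x))
print(err_hermite < err_linear)  # True: the cubic tracks the solution far more closely
```

The extra cost over linear interpolation is a handful of arithmetic operations per evaluation, while the error drops from O(h^2) to O(h^4) on a mesh of spacing h.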
> > * An implementation (necessarily slow and limited) of the finite > difference method > > Our diff function provides that with various options for > functions from R^N -> R^M. Here's the signature and docstring > from PyDSTool/common.py: Er, I suppose I meant the relaxation method of solving boundary-value problems. > A set of phase plane tools, like finding nullclines, fixed points, > stability.... I have a whole bunch of code working for this > that will end up in the next release of PyDSTool (for Scipy 0.5.1, > finally) and I'll be adding a few additional features like calculating > stability of fixed points. These more advanced applications, particularly if you've got code in progress, maybe do belong in PyDSTool; I had in mind just some relatively basic things (that I have needed). > The port to Numpy and new Scipy will be done in the next few weeks so > code will be available for people to clean up to their liking for use in > SciPy. I'll take a closer look at what PyDSTool can do. Thanks, and I apologize for not having taken a closer look at it before making my suggestions, A. M. Archibald From rclewley at cam.cornell.edu Fri Oct 6 16:42:35 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 6 Oct 2006 16:42:35 -0400 (EDT) Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: References: <452609FD.4080702@physik.uni-wuerzburg.de> <45265AF4.5030200@adelphia.net> Message-ID: > I confess I hadn't really examined PyDSTool very closely; it sounds > like it addresses exactly what I was suggesting. Perhaps there's no > need for such a scipy package, if people can just use PyDSTool. Do you > think there is any point in moving some generic functionality into > scipy? Not necessarily, and besides I don't know that my code is really of high enough quality for it to belong in there.
I don't have the time to completely rewrite it with unit tests, etc., although it *does* work reliably on the tests for which we can compare output from other packages, for instance. However, my guess is that there's sufficient general applicability of our numeric Interval, Point and Pointset classes (and possibly our Curve a.k.a. Trajectory classes) for other scientific applications that might make it worthwhile for those to be incorporated, at least. > I'd at least like to have an ODE solver that can handle > constraints on the dependent variable. Indeed, that's what drove us to interfacing with Radau, which does that nicely. > Wait, interp1d is linear-interpolation-only! I had in mind a > higher-order spline that took into account not the bending energy at > each knot but the known derivatives at each point (and possibly the > order used internally by the integrator) to obtain a solution that > followed the real solution even more closely at little extra cost. Absolutely. From what I remember, at the time that I wrote those classes there was only that 1d interpolator around in scipy that suited our needs. Well, anyway, the classes are not really hardwired to it and other interpolators could be put in very easily. I plan to make the option more accessible with our Numpy-based release. For everyday "normal" purposes just using small enough time steps in the integration makes linear interpolation fine. Plus there's already a "refine" option in the C integrators which uses high-order interpolation to provide a fine mesh before it reaches the interpolator. >> Our diff function provides that with various options for >> functions from R^N -> R^M. Here's the signature and docstring >> from PyDSTool/common.py: > > Er, I suppose I meant the relaxation method of solving boundary-value problems. Ah, right... yes. Sorry, I was asleep at the wheel when I included that!
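For reference, the relaxation (finite-difference) method of solving boundary-value problems discussed just above can be sketched on a toy problem: discretize the ODE on a uniform mesh and solve the resulting tridiagonal linear system in one shot. The problem u'' = -u, u(0) = 0, u(pi/2) = 1 is a made-up example (exact solution sin(x)), and the Thomas algorithm is written out in plain Python so nothing beyond the standard library is assumed:

```python
# Finite-difference ("relaxation") sketch for the hypothetical BVP
# u'' = -u, u(0) = 0, u(pi/2) = 1: the central-difference stencil
# u_{i-1} + (h^2 - 2) u_i + u_{i+1} = 0 gives a tridiagonal system.
import math

n = 200                      # number of subintervals (arbitrary choice)
h = (math.pi / 2) / n
# interior unknowns u_1..u_{n-1}; u_0 = 0 and u_n = 1 are boundary data
a = [1.0] * (n - 1)          # sub-diagonal
b = [h * h - 2.0] * (n - 1)  # diagonal (from u'' + u = 0)
c = [1.0] * (n - 1)          # super-diagonal
d = [0.0] * (n - 1)          # right-hand side
d[-1] -= 1.0                 # move the known u_n = 1 across

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, n - 1):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]
u = [0.0] * (n - 1)
u[-1] = d[-1] / b[-1]
for i in range(n - 3, -1, -1):
    u[i] = (d[i] - c[i] * u[i + 1]) / b[i]

mid = u[n // 2 - 1]          # computed value at x = pi/4
print(abs(mid - math.sin(math.pi / 4)))  # small: the scheme is second-order accurate
```

Unlike shooting, nothing is iterated here because the model problem is linear; for a nonlinear ODE the same tridiagonal solve would sit inside a Newton loop, which is where the "relaxation" name comes from.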
> Thanks, and I apologize for not having taken a closer look at it > before making my suggestions, No need to apologise. Exploring the vast range of what's out there and discussing your ideas for new features is what these lists are for. You've encouraged me to write some classes to help PyDSTool users do BVPs! -Rob From pav at iki.fi Fri Oct 6 19:55:53 2006 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 07 Oct 2006 02:55:53 +0300 Subject: [SciPy-user] COLNEW the BVP solver for scipy [Was: Solving Two boundary value problems ...] References: <452609FD.4080702@physik.uni-wuerzburg.de> <45265AF4.5030200@adelphia.net> Message-ID: John Hassler wrote: > I don't think that there's a function specifically for boundary value > ode problems in SciPy, but it is pretty simple to combine an ode solver > with a zero finder to use the "shooting method." I'm not familiar with > how rlab or Matlab does it, but the Scilab "bvode" uses a finite > difference method. If the problem isn't too complex, the shooting > method is easy to set up, and works as well as the finite difference > method. (In my vast experience ... well, ok, two applications ... the > shooting method was more forgiving and less brittle than the finite > difference method.) Hi all, I've recently written a Python wrapper for a modified version of the COLNEW bvp solver (Scilab's bvode is also based on COLNEW). The code can be found from:: http://www.elisanet.fi/ptvirtan/bvp/ In addition to wrapping COLNEW, the package has the following features * I vectorized COLNEW over mesh points. (Vectorized code is approx 10x faster than non-vectorized, at least in my small tests.) * There are simple estimators for the Jacobians so that they need not be explicitly provided. They appear to work well despite their simplicity. This is still in development -- the code has so far not been in real use. However, I made a test suite of the test problems for COLSYS, COLNEW and MUS plus some others, and the code solves them correctly. 
About usage and syntax: For example solving Bessel's equation plus another one with some boundary conditions,

u''(x) = -u'(x) / x + (nu**2/x**2 - 1) * u(x)   for 1 <= x <= 10
v'(x)  = x**(nu+1) * u(x)                       for 1 <= x <= 10

u(1)  = J_{nu}(1)
u(10) = J_{nu}(10)
v(5)  = 5**(nu+1) * J_{nu+1}(5)

looks like this (yes, it is vectorized)

import scipy as N
N.pkgload('special')
import bvp

nu = 3.4123
degrees = [2, 1]

def fsub(x, z):
    """The equations"""
    u, du, v = z  # it's neat to name the variables
    return N.array([-du/x + (nu**2/x**2 - 1)*u, x**(nu+1) * u])

def gsub(z):
    """The boundary conditions"""
    u, du, v = z
    return N.array([u[0] - N.special.jv(nu, 1),
                    v[1] - 5**(nu+1) * N.special.jv(nu+1, 5),
                    u[2] - N.special.jv(nu, 10)])

boundary_points = [1, 5, 10]
tol = [1e-5, 0, 1e-5]

solution = bvp.colnew.solve(
    boundary_points, degrees, fsub, gsub,
    is_linear=True, tolerances=tol,
    vectorized=True, maximum_mesh_size=300)

solution_values = solution([2,3,4,5,6])

I hope this package will be of some use to people. I'd be interested to hear if it proves (un)useful or any other suggestions the people on the list might have. Pauli Virtanen From R.Springuel at umit.maine.edu Fri Oct 6 20:21:23 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Fri, 06 Oct 2006 20:21:23 -0400 Subject: [SciPy-user] Installation problems on Mac OS X Message-ID: <4526F303.6000301@umit.maine.edu> I just migrated to a Mac OS X environment and am trying to install numpy from the source files (a first for this former Windows user). However, when I do so, I get the following error: Running from numpy source directory.
F2PY Version 2_3198 blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources Generating build/src.macosx-10.3-fat-2.5/numpy/core/config.h customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler Could not locate executable g77 Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize Gnu95FCompiler customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler customize NAGFCompiler customize NAGFCompiler using config C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' gcc: _configtest.c sh: line 1: gcc: command not found sh: line 1: gcc: command not found failure. 
removing: _configtest.c _configtest.o numpy/core/setup.py:50: DeprecationWarning: raising a string exception is deprecated raise "ERROR: Failed to test configuration" Traceback (most recent call last): File "setup.py", line 89, in setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/command/install.py", line 16, in run r = old_install.run(self) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/install.py", line 506, in run self.run_command('build') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/command/build_src.py", line 87, in run self.build_sources() File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/command/build_src.py", line 106, in 
build_sources self.build_extension_sources(ext) File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/command/build_src.py", line 212, in build_extension_sources sources = self.generate_sources(sources, ext) File "/Users/RPS/Desktop/numpy-1.0rc1/numpy/distutils/command/build_src.py", line 270, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 50, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration Having not done a source file installation before, I don't know what to make of this. Can anybody help me out? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From oliphant at ee.byu.edu Fri Oct 6 21:24:51 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 06 Oct 2006 19:24:51 -0600 Subject: [SciPy-user] Installation problems on Mac OS X In-Reply-To: <4526F303.6000301@umit.maine.edu> References: <4526F303.6000301@umit.maine.edu> Message-ID: <452701E3.8090403@ee.byu.edu> R. Padraic Springuel wrote: >I just migrated to a Mac OS X environment and am trying to install numpy >from the source files (a first for this former Windows user). However, >when I do so, I get the following error: > >Running from numpy source directory. 
>F2PY Version 2_3198 >blas_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec', >'-I/System/Library/Frameworks/vecLib.framework/Headers'] > >lapack_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec'] > >running install >running build >running config_fc >running build_src >building py_modules sources >building extension "numpy.core.multiarray" sources >Generating build/src.macosx-10.3-fat-2.5/numpy/core/config.h >customize NAGFCompiler >customize AbsoftFCompiler >customize IbmFCompiler >Could not locate executable g77 >Could not locate executable f77 >Could not locate executable gfortran >Could not locate executable f95 >customize GnuFCompiler >customize Gnu95FCompiler >customize G95FCompiler >customize GnuFCompiler >customize Gnu95FCompiler >customize NAGFCompiler >customize NAGFCompiler using config >C compiler: gcc -arch ppc -arch i386 -isysroot >/Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double >-no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > >compile options: >'-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 >-Inumpy/core/src -Inumpy/core/include >-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' >gcc: _configtest.c >sh: line 1: gcc: command not found >sh: line 1: gcc: command not found >failure. > > > [snip] >Having not done a source file installation before, I don't know what to >make of this. Can anybody help me out? > > > The important information of the error is just above the [snip]. Do you have a compiler installed? 
-Travis From lorrmann at physik.uni-wuerzburg.de Sat Oct 7 07:12:18 2006 From: lorrmann at physik.uni-wuerzburg.de (Volker Lorrmann) Date: Sat, 07 Oct 2006 13:12:18 +0200 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: References: Message-ID: <45278B92.40801@physik.uni-wuerzburg.de> Hi John, I've found only a little information about the shooting method so far. Do you have some more information for me? Then I think I'll give it a try. I'll now try to describe the bvp. The ODE describes the "electron/hole density" in the conduction and valence bands of a polymer thin film. The boundary conditions are that the density at the front and back has to be zero. Volker > > > I don't think that there's a function specifically for boundary value > ode problems in SciPy, but it is pretty simple to combine an ode > solver with a zero finder to use the "shooting method." I'm not > familiar with how rlab or Matlab does it, but the Scilab "bvode" uses > a finite difference method. If the problem isn't too complex, the > shooting method is easy to set up, and works as well as the finite > difference method. (In my vast experience ... well, ok, two > applications ... the shooting method was more forgiving and less > brittle than the finite difference method.) > > What sort of bvp do you have? > > john > > Volker Lorrmann wrote: >> Hi list, >> >> I'm going to write a simulation, in which it would be necessary to >> solve a boundary value ODE. >> My question is, would it be possible to solve such ODEs with any >> solver in scipy? Otherwise I have to choose another language for this >> simulation (scilab,rlab).
>> >> Many thanks >> >> Volker >> >> From hasslerjc at adelphia.net Sat Oct 7 08:59:40 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 07 Oct 2006 08:59:40 -0400 Subject: [SciPy-user] Solving Two boundary value problems with Python/Scipy In-Reply-To: <45278B92.40801@physik.uni-wuerzburg.de> References: <45278B92.40801@physik.uni-wuerzburg.de> Message-ID: <4527A4BC.2030305@adelphia.net> An HTML attachment was scrubbed... URL: From lorrmann at physik.uni-wuerzburg.de Sat Oct 7 10:04:00 2006 From: lorrmann at physik.uni-wuerzburg.de (Volker Lorrmann) Date: Sat, 07 Oct 2006 16:04:00 +0200 Subject: [SciPy-user] COLNEW the BVP solver for scipy [Was: Solving Two boundary value problems ...] In-Reply-To: References: Message-ID: <4527B3D0.6090401@physik.uni-wuerzburg.de> That looks really amazing. Exactly what I've been looking for. Thanks!! Volker > Hi all, > > I've recently written a Python wrapper for a modified version of the COLNEW > bvp solver (Scilab's bvode is also based on COLNEW). The code can be found > at:: > > http://www.elisanet.fi/ptvirtan/bvp/ > > In addition to wrapping COLNEW, the package has the following features: > > * I vectorized COLNEW over mesh points. (Vectorized code is approx 10x > faster than non-vectorized, at least in my small tests.) > * There are simple estimators for the Jacobians so that they need not be > explicitly provided. They appear to work well despite their simplicity. > > This is still in development -- the code has so far not been in real use. > However, I made a test suite of the test problems for COLSYS, COLNEW and > MUS plus some others, and the code solves them correctly.
> > About usage and syntax: For example solving Bessel's equation plus another > one with some boundary conditions, > > u''(x) = -u'(x) / x + (nu**2/x**2 - 1) * u(x) for 1 <= x <= 10 > v'(x) = x**(nu+1) * u(x) for 1 <= x <= 10 > > u(1) = J_{nu}(1) > u(10) = J_{nu}(10) > v(5) = 5**(nu+1) * J_{nu+1}(5) > > looks like this (yes, it is vectorized) > > import scipy as N > N.pkgload('special') > import bvp > > nu = 3.4123 > degrees = [2, 1] > > def fsub(x, z): > """The equations""" > u, du, v = z # it's neat to name the variables > return N.array([-du/x + (nu**2/x**2 - 1)*u, x**(nu+1) * u]) > > def gsub(z): > """The boundary conditions""" > u, du, v = z > return N.array([u[0] - N.special.jv(nu, 1), > v[1] - 5**(nu+1) * N.special.jv(nu+1, 5), > u[2] - N.special.jv(nu, 10)]) > > boundary_points = [1, 5, 10] > tol = [1e-5, 0, 1e-5] > > solution = bvp.colnew.solve( > boundary_points, degrees, fsub, gsub, > is_linear=True, tolerances=tol, > vectorized=True, maximum_mesh_size=300) > > solution_values = solution([2,3,4,5,6]) > > I hope this package will be of some use to people. I'd be interested to hear > if it proves (un)useful or any other suggestions the people on the list > might have. > > Pauli Virtanen From nicolist at limare.net Sat Oct 7 13:20:15 2006 From: nicolist at limare.net (Nicolas Limare) Date: Sat, 07 Oct 2006 19:20:15 +0200 Subject: [SciPy-user] the scipy roundup bug tracker should be closed Message-ID: Hi, The scipy tracker for the roundup bts on scipy.net is only used by spammers. As all the bugs are now handled through trac, this tracker should be closed. http://www.scipy.net/roundup/scipy/ -- Nico From R.Springuel at umit.maine.edu Sat Oct 7 17:24:36 2006 From: R.Springuel at umit.maine.edu (R. 
Padraic Springuel) Date: Sat, 07 Oct 2006 17:24:36 -0400 Subject: [SciPy-user] Installation problems on Mac OS X In-Reply-To: References: Message-ID: <45281B14.6050706@umit.maine.edu> It's a brand new MacBook, so I'd only have a compiler if it came with one preinstalled. Given the error, I'm guessing that it didn't. So what kind of compiler should I be looking to get and where from? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From lists.steve at arachnedesign.net Sat Oct 7 18:53:24 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sat, 7 Oct 2006 18:53:24 -0400 Subject: [SciPy-user] Installation problems on Mac OS X In-Reply-To: <45281B14.6050706@umit.maine.edu> References: <45281B14.6050706@umit.maine.edu> Message-ID: <7CAC9179-1263-494E-BE15-72A4F012ACD0@arachnedesign.net> Hi, > It's a brand new MacBook, so I'd only have a compiler if it came with > one preinstalled. Given the error, I'm guessing that it didn't. So > what kind of compiler should I be looking to get and where from? > On the install discs, you should be able to find a way to install XCode/Developer Tools. If you can't find the installer, you can d/l it from here: http://developer.apple.com/tools/xcode/ Install those and you'll get gcc, etc. You may be able to install gcc w/o XCode, but I'm not sure. Or you can install MacPorts (http://macports.org) and then install gcc from there. -steve ps - I had mistakenly sent this email earlier from the wrong email address, so it didn't get through; sorry, list mods :-) From gnchen at cortechs.net Sat Oct 7 20:37:23 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Sat, 7 Oct 2006 17:37:23 -0700 Subject: [SciPy-user] matplotlib and python 2.5 on Intel OS X Message-ID: Hi! I would like to compile matplotlib for Python 2.5 on Intel OS X. Compilation went fine, but I got segmentation faults when I tried to run some test scripts.
Has anyone successfully built it on Python 2.5? If so, can you give me some hints on setup.py? Gen -------------- next part -------------- An HTML attachment was scrubbed... URL: From cygnusx1 at mac.com Sat Oct 7 23:31:11 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Sat, 7 Oct 2006 23:31:11 -0400 Subject: [SciPy-user] Building scipy 0.5.1 on Mac OS 10.4.7 Message-ID: <3F5A6101-0BE1-4D9B-97C7-46BC26226886@mac.com> I've finally achieved a successful build of scipy 0.5.1 & numpy-1.0rc1 on Mac OS 10.4.7 PPC. I was guided by the instructions at http://www.scipy.org/Installing_SciPy/Mac_OS_X However, there were some confusing parts in the description, for example: - I switched to gcc 3.3 to build fftw but had to switch back to gcc 4.0 to build numpy and later. - I had to set the link to libgcc.a just prior to the actual scipy build. - All numpy tests passed, but two scipy tests failed. Everything built okay, but when I tried to run a program that called scipy.integrate, it failed with: import scipy.integrate File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/integrate/__init__.py", line 9, in ? from quadrature import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/integrate/quadrature.py", line 8, in ? from scipy.special.orthogonal import p_roots File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/__init__.py", line 8, in ? from basic import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/basic.py", line 8, in ? from _cephes import * ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so: Symbol not found: _printf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup Suggestions?
Tom ======== GCC/G77 % sudo tar -xvf g77v3.4-bin.tar -C / % sudo gcc_select 3.3 FFTW Libraries % gnutar -xvzf fftw-3.1.2.tar.gz % cd fftw-3.1.2 % ./configure % make % sudo make install % sudo ln -s /usr/local/lib/libfftw3.a /usr/local/lib/libfftw.a % sudo ln -s /usr/local/lib/libfftw3.la /usr/local/lib/libfftw.la % sudo ln -s /usr/local/include/fftw3.h /usr/local/include/fftw.h numpy % sudo gcc_select 4.0 % gnutar xzf numpy-1.0rc1.tar.gz % sudo python setup.py install % python import numpy numpy.test(1,1) (ALL PASSED) % gnutar xzf scipy-0.5.1.tar.gz % cd scipy-0.5.1 sudo ln -s /usr/lib/gcc/powerpc-apple-darwin8/4.0.1/libgcc.a /usr/ local/lib/libcc_dynamic.a Some may encounter a linking error when building fftpack, which mentions something about undefined symbols from /usr/lib/ld. If so, you need to edit 'Lib/fftpack/setup.py' by changing: config.add_extension('_fftpack', sources=sources, libraries=['dfftpack'], extra_info = extra_info ) config.add_extension('convolve', sources = ['convolve.pyf','src/convolve.c'], libraries = ['dfftpack'], extra_info = extra_info ) to: config.add_extension('_fftpack', sources=sources, libraries=['dfftpack'], extra_info = extra_info, extra_link_args = ['-lSystemStubs'] ) config.add_extension('convolve', sources = ['convolve.pyf','src/convolve.c'], libraries = ['dfftpack'], extra_info = extra_info, extra_link_args = ['-lSystemStubs'] ) % python setup.py build % sudo python setup.py install % python >>> import scipy >>> scipy.test(level=1) ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) -- W.T. Bridgman, Ph.D. Physics & Astronomy From koara at atlas.cz Mon Oct 9 05:39:24 2006 From: koara at atlas.cz (koara at atlas.cz) Date: Mon, 09 Oct 2006 11:39:24 +0200 Subject: [SciPy-user] sparse behaviour? Message-ID: <9b88f1713e4d4a18b400c72d5f0726e3@atlas.cz> Hello all, i decided to represent some of my large matrices as sparse matrices. 
My initial assumption was that the exposed operations and interfaces are the same (only the inner representation changes), so that my code would require no changes. However my code broke down immediately; is this how things are and my assumption was incorrect, or is something wrong? An example (getting the covariance matrix) of code that didn't make it: n1 = matrix([[0,0,1,0],[0,0,0,2]]) s1 = sparse.lil_matrix(n1) cov(n1) # OK cov(s1) # shape mismatch exception --------------------------------------- You'll find P. Čtvrtníček's new spots at www.neuservis.cz. Today! From jdhunter at ace.bsd.uchicago.edu Mon Oct 9 10:01:56 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 09 Oct 2006 09:01:56 -0500 Subject: [SciPy-user] matplotlib and python 2.5 on Intel OS X In-Reply-To: (Gennan Chen's message of "Sat, 7 Oct 2006 17:37:23 -0700") References: Message-ID: <87slhxoaej.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Gennan" == Gennan Chen writes: Gennan> Hi! I would like to compile matplotlib for Python 2.5 on Gennan> Intel OS X. Compilation went fine, but I got segmentation Gennan> faults when I tried to run some test scripts. Has anyone Gennan> successfully built it on Python 2.5? If so, can you Gennan> give me some hints on setup.py? matplotlib questions should be directed to matplotlib-users or matplotlib-devel. In the meantime, please take a look at the attached instructions for debugging segfaults. Once you've collected some of this, please post to the devel list. # How to diagnose where a segfault is occurring First thing to try is simply rm -rf the site-packages/matplotlib and build subdirs and get a clean install. Installing a new version over a pretty old version has been known to cause trouble, segfault, etc.
Try importing these packages individually

import matplotlib._image
import matplotlib._transforms
#one of these three depending on which numerix package you are using
import matplotlib.backends._na_backend_agg # for numarray
import matplotlib.backends._nc_backend_agg # for Numeric
import matplotlib.backends._ns_backend_agg # for numpy
import matplotlib.backends._tkagg
import matplotlib._agg

If the last two work and the others don't, it is likely you need to upgrade your gcc, because on some platforms (OS X for sure) old versions of gcc cannot compile new versions of pycxx, which matplotlib uses for building some but not all of it's extensions. Report back which if any work or segfault or raise tracebacks, If that shed additional light, again flush the build and install dirs, and try setting VERBOSE=True in setup.py before doing a clean install. The VERBOSE setting will generate lots of extra output and may help indicate where the segfault is occurring From R.Springuel at umit.maine.edu Mon Oct 9 10:42:53 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Mon, 09 Oct 2006 10:42:53 -0400 Subject: [SciPy-user] Installation Problems on Mac OS X Message-ID: <452A5FED.3090504@umit.maine.edu> Okay. I got numpy to install okay, but now I'm having a problem with scipy.
Now, the full output of the install command is really long by the time it hits the error, but I believe that these lines at the end specify what the error is: sh: line 1: f95: command not found sh: line 1: f95: command not found error: Command "f95 -fixed -O4 -target=native -c -c Lib/fftpack/dfftpack/dcosqb.f -o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/dfftpack/dcosqb.o" failed with exit status 127 Can someone explain to me what this error means and how to go about correcting it? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From ahoover at eecs.berkeley.edu Mon Oct 9 11:14:33 2006 From: ahoover at eecs.berkeley.edu (Aaron Hoover) Date: Mon, 9 Oct 2006 08:14:33 -0700 Subject: [SciPy-user] Installation Problems on Mac OS X In-Reply-To: <452A5FED.3090504@umit.maine.edu> References: <452A5FED.3090504@umit.maine.edu> Message-ID: <1B99FD32-54D6-484E-91EB-CECBC9803478@eecs.berkeley.edu> > > sh: line 1: f95: command not found > sh: line 1: f95: command not found > error: Command "f95 -fixed -O4 -target=native -c -c > Lib/fftpack/dfftpack/dcosqb.f -o > build/temp.macosx-10.3-fat-2.5/Lib/fftpack/dfftpack/dcosqb.o" failed > with exit status 127 You are missing a Fortran 95 compiler. There is a binary version of the compiler here: http://prdownloads.sourceforge.net/hpc/gcc-intel-bin.tar.gz?download. Also, you can find a nice guide to setting up SciPy on OS X on the SciPy wiki here: http://www.scipy.org/Installing_SciPy/Mac_OS_X.
Cheers, Aaron From pspuhler at ball.com Mon Oct 9 12:17:28 2006 From: pspuhler at ball.com (Spuhler, Peter) Date: Mon, 9 Oct 2006 10:17:28 -0600 Subject: [SciPy-user] howto call optimize.brute() Message-ID: <7575324F915A05448800386F6CEAF3670454C604@AEROMSG2.AERO.BALL.COM> I can't seem to get optimize.brute working. The following error function works for optimize.fmin_powell, for example:

def _error(x, offset):
    error = x[0]**2 + x[1]**2 + offset
    return error
optimize.fmin(_error, x0=[1,1], args=(10,))

But if I now try to use optimize.brute:

bounds = ((-10,10,0.1),(-10,10,0.1))
optimize.brute(_error, ranges=bounds, args=(10,))

I get the following error. What am I missing?

C:\Python24\lib\site-packages\scipy\optimize\optimize.py in brute(func, ranges, args, Ns, full_output, finish)
   1577     if (N==1):
   1578         grid = (grid,)
-> 1579     Jout = vecfunc(*grid)
   1580     Nshape = shape(Jout)
   1581     indx = argmin(Jout.ravel())

C:\Python24\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args)
    619         nargs = len(args)
    620         if (nargs > self.nin) or (nargs < self.nin_wo_defaults):
--> 621             raise ValueError, "mismatch between python function inputs" \
    622                   " and received arguments"
    623         if self.nout is None or self.otypes == '':

ValueError: mismatch between python function inputs and received arguments

Thanks -Peter From nwagner at iam.uni-stuttgart.de Mon Oct 9 13:30:47 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 09 Oct 2006 19:30:47 +0200 Subject: [SciPy-user] howto call optimize.brute() In-Reply-To: <7575324F915A05448800386F6CEAF3670454C604@AEROMSG2.AERO.BALL.COM> References: <7575324F915A05448800386F6CEAF3670454C604@AEROMSG2.AERO.BALL.COM> Message-ID: On Mon, 9 Oct 2006 10:17:28 -0600 "Spuhler, Peter" wrote:
> I can't seem to get optimize.brute working
>
> The following error function works for optimize.fmin_powell for example
> def _error(x, offset):
>     error = x[0]**2 + x[1]**2 + offset
>     return error
> optimize.fmin(_error, x0=[1,1], args=(10,))
>
> But if I now try
> to use optimize.brute:
> bounds = ((-10,10,0.1),(-10,10,0.1))
> optimize.brute(_error, ranges=bounds, args=(10,))
> I get the following error. What am I missing?
>
> C:\Python24\lib\site-packages\scipy\optimize\optimize.py in brute(func,
> ranges, args, Ns, full_output, finish)
>    1577     if (N==1):
>    1578         grid = (grid,)
> -> 1579     Jout = vecfunc(*grid)
>    1580     Nshape = shape(Jout)
>    1581     indx = argmin(Jout.ravel())
>
> C:\Python24\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args)
>    619         nargs = len(args)
>    620         if (nargs > self.nin) or (nargs < self.nin_wo_defaults):
> --> 621             raise ValueError, "mismatch between python function inputs" \
>    622                   " and received arguments"
>    623         if self.nout is None or self.otypes == '':
>
> ValueError: mismatch between python function inputs and received arguments
>
> Thanks
> -Peter
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

I cannot reproduce your problem. I am using

>>> scipy.__version__
'0.5.2.dev2247'
>>> import numpy
>>> numpy.__version__
'1.0.dev3292'

Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_brute.py Type: text/x-python Size: 257 bytes Desc: not available URL: From andrew.young at sri.com Mon Oct 9 13:39:25 2006 From: andrew.young at sri.com (Andrew B. Young) Date: Mon, 09 Oct 2006 10:39:25 -0700 Subject: [SciPy-user] Problem installing scipy-0.5.1 on FC5 Message-ID: <452A894D.4000407@sri.com> I would like to use splrep() splev() for matplotlib plot smoothing similar to gnuplot smoothing (http://mail.python.org/pipermail/python-list/2005-April/275568.html) but the install fails.
I followed the instructions for FC5 on http://www.scipy.org/Installing_SciPy/Linux?highlight=%28%28----%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryInstallation%5Cb%29 sudo yum install numpy Installed: numpy.i386 0:0.9.8-1.fc5 Dependency Installed: atlas.i386 0:3.6.0-10.fc5 sudo yum install lapack-devel blas-devel Installed: blas-devel.i386 0:3.0-37.fc5 lapack-devel.i386 0:3.0-37.fc5 Dependency Installed: blas.i386 0:3.0-37.fc5 lapack.i386 0:3.0-37.fc5 Downloaded scipy-0.5.1 cd /usr/src/scipy-0.5.1 python setup.py install, and... File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 655, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "Lib/ndimage/setup.py", line 3, in ? from numpy.numarray import get_numarray_include_dirs Then I installed numarray-1.5.2, but that went into site-packages/numarray. Then I tried a symlink from numpy/numarray -> ../numarray, but got more errors. Any help? Thanks, Andre From pecora at anvil.nrl.navy.mil Mon Oct 9 13:45:46 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Mon, 09 Oct 2006 13:45:46 -0400 Subject: [SciPy-user] Problem loading Scipy on MacOS X, missing module. Message-ID: <452A8ACA.8020205@anvil.nrl.navy.mil> I recently loaded the SuperPack for Scipy 0.5.1 on the MacOS 10.4 which included MacPython 2.4. All packages were loaded: scipy-0.5.0.2095-py2.4-macosx10.4.mpkg PyMC-1.0_beta-py2.4-macosx10.4.mpkg numpy-1.1.2881-py2.4-macosx10.4.mpkg matplotlib-0.87.4_r2592-py2.4-macosx10.4.mpkg ipython-0.7.2-py2.4-macosx10.4.mpkg My problem.
When I run pythonw in the Terminal and import scipy I get the following error message: >>> import scipy import linsolve.umfpack -> failed: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Library not loaded: /usr/local/lib/libg2c.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so Reason: image not found So, I guess it can't find a module (so it says), but I'm a bit lost in the verbosity of the error message. Can anyone tell me what the problem is? Thanks for any help. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From R.Springuel at umit.maine.edu Mon Oct 9 13:53:18 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Mon, 09 Oct 2006 13:53:18 -0400 Subject: [SciPy-user] Installation problems on Mac OS X Message-ID: <452A8C8E.4000601@umit.maine.edu> Okay, so I installed the Fortran compiler using the directions and link on the wiki, but I'm still getting the same error. Now what? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From oliphant.travis at ieee.org Mon Oct 9 14:15:56 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 09 Oct 2006 12:15:56 -0600 Subject: [SciPy-user] Problem installing scipy-0.5.1 on FC5 In-Reply-To: <452A894D.4000407@sri.com> References: <452A894D.4000407@sri.com> Message-ID: <452A91DC.1050604@ieee.org> Andrew B. Young wrote: > I would like to use splrep() splev() for matplotlib plot smoothing > similar to gnuplot smoothing > (http://mail.python.org/pipermail/python-list/2005-April/275568.html) > but the install fails. 
I followed the instructions for FC5 on > http://www.scipy.org/Installing_SciPy/Linux?highlight=%28%28----%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryInstallation%5Cb%29 > > sudo yum install numpy > Installed: numpy.i386 0:0.9.8-1.fc5 > Dependency Installed: atlas.i386 0:3.6.0-10.fc5 > sudo yum install lapack-devel blas-devel > Installed: blas-devel.i386 0:3.0-37.fc5 lapack-devel.i386 0:3.0-37.fc5 > Dependency Installed: blas.i386 0:3.0-37.fc5 lapack.i386 0:3.0-37.fc5 > > Downloaded scipy-0.5.1 > cd /usr/src/scipy-0.5.1 > python setup.py install, and... > File > "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line > 655, in _get_configuration_from_setup_py > ('.py', 'U', 1)) > File "Lib/ndimage/setup.py", line 3, in ? > from numpy.numarray import get_numarray_include_dirs > > The dependencies are wrong. SciPy 0.5.1 requires numpy 1.0 not numpy-0.9.8. The required numarray compatibility layer is in numpy 1.0 not in numarray. You don't need to install numarray at all. Install numpy 1.0 -Travis From pspuhler at ball.com Mon Oct 9 15:00:13 2006 From: pspuhler at ball.com (Spuhler, Peter) Date: Mon, 9 Oct 2006 13:00:13 -0600 Subject: [SciPy-user] howto call optimize.brute() Message-ID: <7575324F915A05448800386F6CEAF3670454C60C@AEROMSG2.AERO.BALL.COM> On Mon, 9 Oct 2006 10:17:28 -0600 "Spuhler, Peter" ball.com> wrote: > I can't seem to get optimize.brute working > > The following error function works for >optimize.fmin_powell for example > Def _error(x,offset): > Error = x[0]**2 + x[1]**2 + offset > Return error > Optimize.fmin(_error,x0=[1,1],args=(10,)) > > But if I now try to use optimize.brute > Bounds = ((-10,10,0.1),(-10,10,0.1)) > Optimize.brute(_error,ranges=bounds,args=(10,)) > I get the following error. What am I missing? 
> C:\Python24\lib\site-packages\scipy\optimize\optimize.py in brute(func,
> ranges, args, Ns, full_output, finish)
>    1577     if (N==1):
>    1578         grid = (grid,)
> -> 1579     Jout = vecfunc(*grid)
>    1580     Nshape = shape(Jout)
>    1581     indx = argmin(Jout.ravel())
>
> C:\Python24\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args)
>    619         nargs = len(args)
>    620         if (nargs > self.nin) or (nargs < self.nin_wo_defaults):
> --> 621             raise ValueError, "mismatch between python function inputs" \
>    622                   " and received arguments"
>    623         if self.nout is None or self.otypes == '':
>
> ValueError: mismatch between python function inputs and received arguments
>
> Thanks
> -Peter
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

I cannot reproduce your problem. I am using

>>> scipy.__version__
'0.5.2.dev2247'
>>> import numpy
>>> numpy.__version__
'1.0.dev3292'

Nils

I'm using the Enthought Python distribution: Numpy 0.9.9.2706, Scipy 0.5.0.2033. Is this a bug in this version of scipy? From andrew.young at sri.com Mon Oct 9 17:11:06 2006 From: andrew.young at sri.com (Andrew B. Young) Date: Mon, 09 Oct 2006 14:11:06 -0700 Subject: [SciPy-user] Problem installing scipy-0.5.1 on FC5 In-Reply-To: <452A91DC.1050604@ieee.org> References: <452A894D.4000407@sri.com> <452A91DC.1050604@ieee.org> Message-ID: <452ABAEA.6080204@sri.com> That did it! Thanks Travis. Travis Oliphant wrote: > Andrew B. Young wrote: >> I would like to use splrep() splev() for matplotlib plot smoothing >> similar to gnuplot smoothing >> (http://mail.python.org/pipermail/python-list/2005-April/275568.html) >> but the install fails.
I followed the instructions for FC5 on >> http://www.scipy.org/Installing_SciPy/Linux?highlight=%28%28----%28-%2A%29%28%5Cr%29%3F%5Cn%29%28.%2A%29CategoryInstallation%5Cb%29 >> >> sudo yum install numpy >> Installed: numpy.i386 0:0.9.8-1.fc5 >> Dependency Installed: atlas.i386 0:3.6.0-10.fc5 >> sudo yum install lapack-devel blas-devel >> Installed: blas-devel.i386 0:3.0-37.fc5 lapack-devel.i386 0:3.0-37.fc5 >> Dependency Installed: blas.i386 0:3.0-37.fc5 lapack.i386 0:3.0-37.fc5 >> >> Downloaded scipy-0.5.1 >> cd /usr/src/scipy-0.5.1 >> python setup.py install, and... >> File >> "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line >> 655, in _get_configuration_from_setup_py >> ('.py', 'U', 1)) >> File "Lib/ndimage/setup.py", line 3, in ? >> from numpy.numarray import get_numarray_include_dirs >> >> >> > The dependencies are wrong. SciPy 0.5.1 requires numpy 1.0 not > numpy-0.9.8. > > The required numarray compatibility layer is in numpy 1.0 not in > numarray. You don't need to install numarray at all. > > Install numpy 1.0 > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From stefan at sun.ac.za Mon Oct 9 19:48:53 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 10 Oct 2006 01:48:53 +0200 Subject: [SciPy-user] sparse behaviour? In-Reply-To: <9b88f1713e4d4a18b400c72d5f0726e3@atlas.cz> References: <9b88f1713e4d4a18b400c72d5f0726e3@atlas.cz> Message-ID: <20061009234853.GF4215@mentat.za.net> On Mon, Oct 09, 2006 at 11:39:24AM +0200, koara at atlas.cz wrote: > i decided to represent some of my large matrices as sparse matrices. My initial assumption was that the exposed operations and interfaces are the same (only the inner representation changes), so that my code would require no changes. However my code broke down immediately; is this how things are and my assumtpion was incorrect, or is something wrong? 
> > an example (getting covariance matrix) of code that didn't make it: > > n1 = matrix([[0,0,1,0],[0,0,0,2]]) > s1 = sparse.lil_matrix(n1) > cov(n1) # OK > cov(s1) # shape mismatch > exception--------------------------------------- Currently, scipy.cov calls numpy.cov, which knows nothing about sparse matrices. I don't think there is any reason why it can't be done, it's just that no one has taken the time to implement it. Regards St?fan From peridot.faceted at gmail.com Mon Oct 9 23:29:32 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Mon, 9 Oct 2006 23:29:32 -0400 Subject: [SciPy-user] sparse behaviour? In-Reply-To: <20061009234853.GF4215@mentat.za.net> References: <9b88f1713e4d4a18b400c72d5f0726e3@atlas.cz> <20061009234853.GF4215@mentat.za.net> Message-ID: On 09/10/06, Stefan van der Walt wrote: > Currently, scipy.cov calls numpy.cov, which knows nothing about sparse > matrices. I don't think there is any reason why it can't be done, > it's just that no one has taken the time to implement it. There are two ways to fix this: just convert the matrix into a dense matrix and call numpy.cov, or actually implement a covariance-matrix algorithm for sparse matrices. I would argue *against* doing the first: if a user is using sparse matrices, they have a reason, whether because dense matrices would overload memory, or because sparse algorithms are vastly faster, and secretly exploding their matrix to a dense one is likely to be confusing and unwanted. A. M. Archibald From ryanlists at gmail.com Tue Oct 10 02:26:50 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 10 Oct 2006 01:26:50 -0500 Subject: [SciPy-user] where bug Message-ID: I am having a problem running some slightly older code under a fairly new Scipy for the first time. 
The code uses where and something like this: In [23]: t=arange(0,1,0.01) In [24]: where(t>0.5) Out[24]: (array([51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]),) used to return just the array instead of the tuple with the array in it. Is this a feature or a bug? Can you get this behavior from a stray comma in the return line? Thanks, Ryan From a.u.r.e.l.i.a.n at gmx.net Tue Oct 10 02:36:47 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue, 10 Oct 2006 08:36:47 +0200 Subject: [SciPy-user] where bug In-Reply-To: References: Message-ID: <200610100836.47965.a.u.r.e.l.i.a.n@gmx.net> Am Dienstag, 10. Oktober 2006 08:26 schrieb Ryan Krauss: > I am having a problem running some slightly older code under a fairly > new Scipy for the first time. The code uses where and something like > this: > In [23]: t=arange(0,1,0.01) > > In [24]: where(t>0.5) > Out[24]: > (array([51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, > 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, > 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]),) > > used to return just the array instead of the tuple with the array in > it. Is this a feature or a bug? Feature. If you use n-d arrays, you will get back a tuple of size n, one index array for each axis. You ran into the special case n=1. AFAIK, nonzero returns just the array in 1d case. Though I would recommend to use fancy indexing (i.e. t[t>0.5] ) whenever possible, it is way faster. 
HTH, Johannes From oliphant.travis at ieee.org Tue Oct 10 02:55:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 10 Oct 2006 00:55:37 -0600 Subject: [SciPy-user] where bug In-Reply-To: References: Message-ID: <452B43E9.9080005@ieee.org> Ryan Krauss wrote: > I am having a problem running some slightly older code under a fairly > new Scipy for the first time. The code uses where and something like > this: > In [23]: t=arange(0,1,0.01) > > In [24]: where(t>0.5) > Out[24]: > (array([51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, > 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, > 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]),) > > used to return just the array instead of the tuple with the array in > it. Is this a feature or a bug? It's a feature (the old behavior was a bug...). The functionality of where is actually from NumPy and has changed to be consistent with Numarray. You can get the old behavior with flatnonzero(...) or of course where(...)[0]. -Travis From koara at atlas.cz Tue Oct 10 04:39:34 2006 From: koara at atlas.cz (koara at atlas.cz) Date: Tue, 10 Oct 2006 10:39:34 +0200 Subject: [SciPy-user] sparse behaviour? Message-ID: >-----Original message----- >From: Stefan van der Walt [mailto:stefan at sun.ac.za] >Sent: 9 October 2006 23:48 >To: scipy-user at scipy.org >Subject: Re: [SciPy-user] sparse behaviour? > ... >Currently, scipy.cov calls numpy.cov, which knows nothing about sparse >matrices. I don't think there is any reason why it can't be done, >it's just that no one has taken the time to implement it. Thank you Stefan! So indeed my assumption was wrong. Is there a user-friendly list of which things work the same for both dense/sparse matrices without any additional code, so that I do not re-invent the wheel? Since indexing syntax is the same, I imagine some things might work even with a simple call to numpy, no? Just like I assumed cov() would...
:> Well I guess there's a lot of debugging ahead either way. Cheers From cimrman3 at ntc.zcu.cz Tue Oct 10 06:06:09 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 10 Oct 2006 12:06:09 +0200 Subject: [SciPy-user] sparse behaviour? In-Reply-To: References: Message-ID: <452B7091.9080308@ntc.zcu.cz> koara at atlas.cz wrote: >> -----Original message----- From: Stefan van der Walt >> [mailto:stefan at sun.ac.za] Sent: 9 October 2006 23:48 To: >> scipy-user at scipy.org Subject: Re: [SciPy-user] sparse behaviour? >> ... Currently, scipy.cov calls numpy.cov, which knows nothing about >> sparse matrices. I don't think there is any reason why it can't be >> done, it's just that no one has taken the time to implement it. > > > Thank you Stefan! > > So indeed my assumption was wrong. Is there a user-friendly list of > which things work the same for both dense/sparse matrices without any > additional code, so that I do not re-invent the wheel? Since indexing > syntax is the same, I imagine some things might work even with a > simple call to numpy, no? Just like I assumed cov() would... :> The short answer (and current status) is:

1. all functions in scipy.sparse and scipy.linsolve work with sparse matrices and expect sparse matrices (they were written for them);
2. all other functions do not understand sparse matrices.

IMHO the problem here is that the sparse matrix class is not a subclass of numpy.ndarray. So far, asarray and friends - when used on a sparse matrix - make an object array with one item - the sparse matrix object. I would prefer this behaviour changed to either:

a) raise an exception, or
b) call the .todense() method of the sparse matrix.

Here I am more for a), since using a dense-array algorithm on a sparse matrix will not work well, and b) would cause many out-of-memory errors.
And then, of course, sparse matrix algorithms should be continuously added to reflect the dense ones. (Here, for example, the CSparse library could help, if wrapped properly for Python.) cheers, r. From andrew.young at sri.com Tue Oct 10 11:13:03 2006 From: andrew.young at sri.com (Andrew B. Young) Date: Tue, 10 Oct 2006 08:13:03 -0700 Subject: [SciPy-user] Plotting an array Message-ID: <452BB87F.8000601@sri.com> I believe this is a rookie question, but I've not discovered the answer easily. The code below, copied from a posting I found, produces a graph with a straight line across zero. Plotting just t, plot(t), produces a line at either zero or one. I am sure it has something to do with plotting an array versus plotting a python list. I would be happy for a pointer to the reference material for me to read. http://www.scipy.org/Documentation has not yet led me to the answer.

t = arange(0, 2.0, 0.1)
y = sin(2*pi*t)
tck = interpolate.splrep(t, y, s=0)
tnew = arange(0, 2.0, 0.01)
ynew = interpolate.splev(tnew, tck, der=0)
plot(t, y, 'o', tnew, ynew)
show()

Thanks, Andrew Young Python 2.4.3 (#1, Jun 13 2006, 11:46:08), [GCC 4.1.1 20060525 (Red Hat 4.1.1-1)] on linux2 numpy-1.0rc1 scipy-0.5.1 matplotlib-0.85 From elcorto at gmx.net Tue Oct 10 11:56:14 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 10 Oct 2006 17:56:14 +0200 Subject: [SciPy-user] import speed in a script Message-ID: <452BC29E.2070400@gmx.net> Hi This may be a dumb and/or more general Python question, but: What is more advisable in terms of speed? Say I call a script many times and inside I do nothing but

from numpy import *

or

import numpy as nu

I tried both and found no speed difference (e.g. calling the script 500 times). I just wondered about this since there is scipy.pkgload(). In an interactive session "import scipy" doesn't import the whole scipy with all packages. Is this also true for an import statement in a script?
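Not part of the original mail: one way to compare the two styles is to time each in a fresh interpreter, since within a single process repeated imports are nearly free (sys.modules caches them). A sketch in modern Python, not what 2006 readers would have run:

```python
import subprocess
import sys
import time

def time_fresh_import(stmt, runs=3):
    """Average wall time to start a fresh Python and run one statement."""
    start = time.time()
    for _ in range(runs):
        # Each run pays the full interpreter-startup and import cost
        subprocess.run([sys.executable, "-c", stmt], check=True)
    return (time.time() - start) / runs

# e.g. compare (numpy assumed installed):
#   time_fresh_import("import numpy as nu")
#   time_fresh_import("from numpy import *")
```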
-- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From arserlom at gmail.com Tue Oct 10 12:01:33 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Tue, 10 Oct 2006 18:01:33 +0200 Subject: [SciPy-user] How to calculate minors. Message-ID: Hello, is there any way to easily calculate minors of matrices with scipy/numpy? For example, if we have matrix a: [[1 2 3] [4 5 6] [7 8 9]] then minor(a,0) should be the same as det([[5,6],[8,9]]) and minor(a,1) should be the same as det([[1,3],[7,9]]). Armando. From elcorto at gmx.net Tue Oct 10 12:15:21 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 10 Oct 2006 18:15:21 +0200 Subject: [SciPy-user] Plotting an array In-Reply-To: <452BB87F.8000601@sri.com> References: <452BB87F.8000601@sri.com> Message-ID: <452BC719.2030103@gmx.net> Andrew B. Young wrote: > I believe this is a rookie question but the I've not discovered the > answer so easily. The code below, copied from a posting I found, > produces a graph with a straight line across zero. Plotting just t, > plot(t), produces a line either zero or one. I am sure it has > something to do with plotting an array versus plotting a python list. Hmmm, t, y, tnew and ynew are arrays in my case (try type(t) etc. to find out). The only list is tck, but it is not plotted (it contains info about the spline and is passed to splev). numpy 1.0rc1.dev3144 scipy 0.5.2.dev2197 matplotlib 0.87.5 Rev 2761 > I would be happy for a pointer to the reference material for me to > read. http://www.scipy.org/Documentation has not yet lead me to the > answer.
> > > t = arange(0, 2.0, 0.1) > y = sin(2*pi*t) > tck = interpolate.splrep(t, y, s=0) > tnew = arange(0, 2.0, 0.01) > ynew = interpolate.splev(tnew, tck, der=0) > plot(t, y, 'o', tnew, ynew) > show() > PS: Rather unimportant note to the scipy guys: In [27]: len(tck) Out[27]: 3 In [28]: type(tck) Out[28]: <type 'list'> The splrep docstring says tck is a len 3 tuple, not a list ... -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. From gael.varoquaux at normalesup.org Tue Oct 10 12:19:50 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 10 Oct 2006 18:19:50 +0200 Subject: [SciPy-user] Plotting an array In-Reply-To: <452BB87F.8000601@sri.com> References: <452BB87F.8000601@sri.com> Message-ID: <20061010161945.GA7916@clipper.ens.fr> Hi, Can you give a complete minimal example that does not work? If I put the following code in a file and run python on it, I get the expected result.

from scipy import *
from pylab import *
t = arange(0, 2.0, 0.1)
y = sin(2*pi*t)
tck = interpolate.splrep(t, y, s=0)
tnew = arange(0, 2.0, 0.01)
ynew = interpolate.splev(tnew, tck, der=0)
plot(t, y, 'o', tnew, ynew)
show()

Gaël From ryanlists at gmail.com Tue Oct 10 12:23:00 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 10 Oct 2006 11:23:00 -0500 Subject: [SciPy-user] where bug In-Reply-To: <452B43E9.9080005@ieee.org> References: <452B43E9.9080005@ieee.org> Message-ID: Thanks to Travis and Johannes. As long as I know it's a feature, I will adjust my code accordingly. Ryan On 10/10/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > I am having a problem running some slightly older code under a fairly > > new Scipy for the first time.
The code uses where and something like > > this: > > In [23]: t=arange(0,1,0.01) > > > > In [24]: where(t>0.5) > > Out[24]: > > (array([51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, > > 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, > > 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]),) > > > > used to return just the array instead of the tuple with the array in > > it. Is this a feature or a bug? > It's a feature (the old behavior was a bug...). The functionality of > where is actually from NumPy and has changed to be consistent with > Numarray. You can get the old behavior with flatnonzero(...) or of > course where(...)[0]. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From andrew.young at sri.com Tue Oct 10 13:30:59 2006 From: andrew.young at sri.com (Andrew B. Young) Date: Tue, 10 Oct 2006 10:30:59 -0700 Subject: [SciPy-user] Plotting an array In-Reply-To: <20061010161945.GA7916@clipper.ens.fr> References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr> Message-ID: <452BD8D3.2010500@sri.com> Gael, The following little program:

-- test.py ----------------------
| from scipy import arange, sin, pi, interpolate
| from pylab import plot, show, savefig
| t = arange(0, 2.0, 0.1)
| plot(t)
| savefig('test.png')

produces the plot at http://polar.sri.com/~ayoung/test.png Why does the plot "turn on" at one? If I plot( [ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9] ) I get what I would expect: http://polar.sri.com/~ayoung/test2.png Thanks for your help. -andyy Gael Varoquaux wrote: > Hi, > Can you give a complete minimal example that does not work? If I put the > following code in a file and run python on it, I get the expected result.
> > from scipy import * > from pylab import * > t = arange(0, 2.0, 0.1) > y = sin(2*pi*t) > tck = interpolate.splrep(t, y, s=0) > tnew = arange(0, 2.0, 0.01) > ynew = interpolate.splev(tnew, tck, der=0) > plot(t, y, 'o', tnew, ynew) > show() > > Gaël > From ryanlists at gmail.com Tue Oct 10 13:32:24 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 10 Oct 2006 12:32:24 -0500 Subject: [SciPy-user] phase wrapping algorithm Message-ID: I am rethinking an algorithm for unwrapping phase data from arctan2 for a vector of complex numbers (coming from lti models of feedback control systems). I had written something a while ago using weave, but my old code causes seg faults with a recent scipy update. So, I am rethinking the algorithm. The problem occurs when, either because of noise or just the definition of the inverse tangent, there are discontinuities in the phase (which comes from arctan2(imag(vector), real(vector))). For example, in the attached plot and data file, there is a point where the phase passes through -180 degrees. arctan2 jumps to returning +180, +175, etc., whereas it makes a prettier plot and more sense to me to convert +175 to -185. What I had done previously was write a for loop that compares the current value of the phase to the value at the previous index (i.e. if phase[i]-phase[i-1]>300 or if phase[i]-phase[i-1]<-300) to find the discontinuities. Is there a cleaner/faster way to do this? (I had done it in weave and will probably now do it in f2py for comparison of speeds.) I have attached/included a first attempt at an algorithm that is more vectorized. But is there a fast, easy way to compare a vector to the same vector shifted one element up or down?
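A minimal sketch of that shifted comparison (my illustration, not from the thread), using numpy.diff:

```python
import numpy as np

# numpy.diff gives the element-to-element difference in one call,
# so no explicit shifted copy of the vector is needed.
# The values below are made up to show one artificial wrap:
phase = np.array([-179.0, -180.0, 179.0, 178.0])
d = np.diff(phase)                       # d[i] == phase[i+1] - phase[i]
jumps = np.flatnonzero(np.abs(d) > 300)  # indices just before each jump
print(jumps)  # the -180 -> +179 discontinuity sits at index 1
```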
Right now I am doing this:

phase2 = r_[phase[0:1], phase[0:-1]]

to shift the vector (and reuse the first element so there isn't a jump from phase[-1] to phase[0], which may legitimately be far apart). Has anyone else already done this? Are there obvious bottlenecks with the algorithm? Thanks for your thoughts, Ryan

=====================================
from scipy import arange, io, r_
from pylab import show, figure, subplot, clf, cla, semilogx, xlabel, ylabel, legend, xlim
import copy

data = io.read_array('phase_data.txt')
f = data[:,0]
phase = data[:,1]

figure(1)
clf()
startphase = copy.copy(phase)
semilogx(f, startphase)

keepgoing = 1
inds = arange(0, len(phase))
while keepgoing:
    phase2 = r_[phase[0:1], phase[0:-1]]  # artificially repeat the first element
    diff = phase - phase2
    jumps = inds[abs(diff) > 250]
    keepgoing = jumps.any()
    if not keepgoing:
        break
    ind = jumps.min()
    if diff[ind] > 0:
        phase[ind:] -= 360
    else:
        phase[ind:] += 360

semilogx(f, phase, 'r--')
legend(['Before unwrapping', 'after unwrapping'], 2)
xlim([1, 30])
show()

-------------- next part --------------
[attachment phase_data.txt: two columns, frequency (Hz) and phase (degrees)]
-1.48714e+02 2.12857e+01 -1.48758e+02 2.13095e+01 -1.48801e+02 2.13333e+01 -1.48844e+02 2.13571e+01 -1.48887e+02 2.13810e+01 -1.48930e+02 2.14048e+01 -1.48972e+02 2.14286e+01 -1.49014e+02 2.14524e+01 -1.49056e+02 2.14762e+01 -1.49098e+02 2.15000e+01 -1.49140e+02 2.15238e+01 -1.49181e+02 -------------- next part -------------- A non-text attachment was scrubbed... Name: phasewrap.png Type: image/png Size: 24383 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: phasewrap.py Type: text/x-python Size: 725 bytes Desc: not available URL: From ryanlists at gmail.com Tue Oct 10 13:35:50 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 10 Oct 2006 12:35:50 -0500 Subject: [SciPy-user] Plotting an array In-Reply-To: <452BD8D3.2010500@sri.com> References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr> <452BD8D3.2010500@sri.com> Message-ID: It seems like you are somehow plotting t as an integer. print t to your command window and check it out. If I just do t=arange(0,2.0,0.1) and plot(t) I get a plot that looks like your test2.png and my t is: t = arange(0, 2.0, 0.1) Ryan On 10/10/06, Andrew B. Young wrote: > Gael, > > The following little program-- > -- test.py ---------------------- > | from scipy import arange, sin, pi, interpolate > | from pylab import plot, show, savefig > | t = arange(0, 2.0, 0.1) > | plot(t) > | savefig('test.png') > > Produces the plot at http://polar.sri.com/~ayoung/test.png > > Why does the plot "turn on" at one? If I, > plot( [ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. > , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9] ) > I get what I would expect: http://polar.sri.com/~ayoung/test2.png > > Thanks for you help. > -andyy > > Gael Varoquaux wrote: > > Hi, > > > > Can you give a complete minimum example that does not work. If I put the > > following code in a file and run python on it, I get the expected result. 
> >
> > from scipy import *
> > from pylab import *
> > t = arange(0, 2.0, 0.1)
> > y = sin(2*pi*t)
> > tck = interpolate.splrep(t, y, s=0)
> > tnew = arange(0, 2.0, 0.01)
> > ynew = interpolate.splev(tnew, tck, der=0)
> > plot(t, y, 'o', tnew, ynew)
> > show()
> >
> > Gaël
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>

From stefan at sun.ac.za  Tue Oct 10 13:39:13 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 10 Oct 2006 19:39:13 +0200
Subject: [SciPy-user] Plotting an array
In-Reply-To: <452BC719.2030103@gmx.net>
References: <452BB87F.8000601@sri.com> <452BC719.2030103@gmx.net>
Message-ID: <20061010173913.GA13457@mentat.za.net>

On Tue, Oct 10, 2006 at 06:15:21PM +0200, Steve Schmerler wrote:
> > t = arange(0, 2.0, 0.1)
> > y = sin(2*pi*t)
> > tck = interpolate.splrep(t, y, s=0)
> > tnew = arange(0, 2.0, 0.01)
> > ynew = interpolate.splev(tnew, tck, der=0)
> > plot(t, y, 'o', tnew, ynew)
> > show()
>
> PS: Rather unimportant note to the scipy guys:
>
> In [27]: len(tck)
> Out[27]: 3
> In [28]: type(tck)
> Out[28]: <type 'list'>
>
> The splrep docstring says tck is a len-3 tuple, not a list ...

Fixed in SVN.

Cheers
Stéfan

From jdhunter at ace.bsd.uchicago.edu  Tue Oct 10 13:48:47 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Tue, 10 Oct 2006 12:48:47 -0500
Subject: [SciPy-user] Plotting an array
In-Reply-To: <452BD8D3.2010500@sri.com> ("Andrew B. Young"'s message of "Tue, 10 Oct 2006 10:30:59 -0700")
References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr> <452BD8D3.2010500@sri.com>
Message-ID: <87slhwcb9c.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "Andrew" == Andrew B Young writes:

    Andrew> Gael, The following little program -- test.py:
    Andrew> | from scipy import arange, sin, pi, interpolate
    Andrew> | from pylab import plot, show, savefig
    Andrew> | t = arange(0, 2.0, 0.1)
    Andrew> | plot(t)
    Andrew> | savefig('test.png')

    Andrew> Produces the plot at http://polar.sri.com/~ayoung/test.png

    Andrew> Why does the plot "turn on" at one?  If I do
    Andrew> plot([0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.,
    Andrew>       1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9])
    Andrew> I get what I would expect: http://polar.sri.com/~ayoung/test2.png

This looks odd; one possibility is that you are mixing your numerix
packages.  Check your matplotlib rc setting.  If you are using a recent
scipy built on top of a recent numpy, you should set your numerix
setting to numpy -- see http://matplotlib.sf.net/matplotlibrc

Alternatively, just use the matplotlib numerix namespace to ensure
consistency:

from pylab import plot, show, savefig, nx
t = nx.arange(0, 2.0, 0.1)
plot(t)
savefig('test.png')

JDH

From jdhunter at ace.bsd.uchicago.edu  Tue Oct 10 13:59:29 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Tue, 10 Oct 2006 12:59:29 -0500
Subject: [SciPy-user] Plotting an array
In-Reply-To: <87slhwcb9c.fsf@peds-pc311.bsd.uchicago.edu> (John Hunter's message of "Tue, 10 Oct 2006 12:48:47 -0500")
References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr> <452BD8D3.2010500@sri.com> <87slhwcb9c.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <87u02caw72.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "John" == John Hunter writes:
>>>>> "Andrew" == Andrew B Young writes:

    Andrew> Gael, The following little program -- test.py:
    Andrew> | from scipy import arange, sin,
    Andrew> pi, interpolate | from pylab import plot, show, savefig
    Andrew> | t = arange(0, 2.0, 0.1) | plot(t) | savefig('test.png')

    Andrew> Produces the plot at http://polar.sri.com/~ayoung/test.png

    Andrew> Why does the plot "turn on" at one?  If I do
    Andrew> plot([0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.,
    Andrew>       1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9])
    Andrew> I get what I would expect: http://polar.sri.com/~ayoung/test2.png

    John> This looks odd; one possibility is that you are mixing your
    John> numerix packages.  Check your matplotlib rc setting.  If you

Yep, that's it.  I can reproduce it with an old Numeric and a newish
numpy/scipy.  The combination of versions below exposes the problem if
your rc setting is Numeric.  Either of the suggested changes I made in
my last post will fix your problem.

In [5]: import numpy; numpy.__version__
Out[5]: '0.9.6.2138'

In [6]: import scipy; scipy.__version__
Out[6]: '0.4.7.1617'

In [7]: import Numeric; Numeric.__version__
Out[7]: '23.1'

JDH

From pecora at anvil.nrl.navy.mil  Tue Oct 10 14:17:45 2006
From: pecora at anvil.nrl.navy.mil (Louis Pecora)
Date: Tue, 10 Oct 2006 14:17:45 -0400
Subject: [SciPy-user] Plotting an array
In-Reply-To: <20061010161945.GA7916@clipper.ens.fr>
References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr>
Message-ID: <452BE3C9.9020109@anvil.nrl.navy.mil>

Gael Varoquaux wrote:
> Hi,
>
> Can you give a complete minimal example that does not work?  If I put the
> following code in a file and run python on it, I get the expected result.
>
> from scipy import *
> from pylab import *
> t = arange(0, 2.0, 0.1)
> y = sin(2*pi*t)
> tck = interpolate.splrep(t, y, s=0)
> tnew = arange(0, 2.0, 0.01)
> ynew = interpolate.splev(tnew, tck, der=0)
> plot(t, y, 'o', tnew, ynew)
> show()
>
> Gaël
>
When I do the above I get:

Traceback (most recent call last):
  File "/Users/louispecora/Code/python/test_folder/junk.py", line 18, in ?
    from scipy import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linsolve/__init__.py", line 5, in ?
    from linsolve import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 1, in ?
    from scipy.sparse import isspmatrix_csc, isspmatrix_csr, isspmatrix, spdiags
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/__init__.py", line 5, in ?
    from sparse import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 12, in ?
    import sparsetools
ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Library not loaded: /usr/local/lib/libg2c.0.dylib
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so
  Reason: image not found

I am missing sparsetools.so.  But I installed the SciPy superpack.  Is
sparsetools.so not in that?

-- 
Cheers,

Lou Pecora
Code 6362
Naval Research Lab
Washington, DC  20375
USA
Ph: +202-767-6002
email: pecora at anvil.nrl.navy.mil

From steveire at gmail.com  Tue Oct 10 14:33:19 2006
From: steveire at gmail.com (Stephen Kelly)
Date: Tue, 10 Oct 2006 19:33:19 +0100
Subject: [SciPy-user] How to calculate minors.
In-Reply-To: 
References: 
Message-ID: <18fbbe5a0610101133u4aa733c8h873704e0d59d0f01@mail.gmail.com>

I was looking for a way to do this a while ago too.  I got it working by
multiplying the inverse of the matrix by the determinant, which gave me
the adjoint of the original matrix.

>>> from numpy import *
>>> from numpy.linalg import *
>>> a = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> adj = det(a) * inv(a)
>>> print adj
array([[ -3.,   6.,  -3.],
       [  6., -12.,   6.],
       [ -3.,   6.,  -3.]])

So you can access the diagonal elements now to get the minors.
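[Editorial note: the det(a) * inv(a) workaround above is fragile; the 3x3 example in this thread is in fact singular (determinant 0), so its inverse is not well defined and the printed result cannot be reproduced reliably. A minimal sketch of a direct alternative, deleting a row and a column before taking the determinant; the helper name `minor` is hypothetical, not an existing scipy/numpy function:]

```python
import numpy as np

def minor(a, i, j):
    # The (i, j) minor: delete row i and column j, then take the
    # determinant of the remaining submatrix.
    sub = np.delete(np.delete(a, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# minor(a, 0, 0) is det([[5, 6], [8, 9]]) = 5*9 - 6*8, i.e. about -3
print(minor(a, 0, 0))
```

[For the diagonal minors asked about in the original post, minor(a, k, k) plays the role of minor(a, k), and it works whether or not `a` is invertible.]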
It's a workaround, and I think it should be built in to scipy properly.

Steve.

On 10/10/06, Armando Serrano Lombillo wrote:
>
> Hello, is there any way to easily calculate minors of matrices with
> scipy/numpy?
>
> For example, if we have matrix a:
> [[1 2 3]
>  [4 5 6]
>  [7 8 9]]
>
> then minor(a, 0) should be the same as det([[5,6],[8,9]]) and minor(a, 1)
> should be the same as det([[1,3],[7,9]]).
>
> Armando.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliphant.travis at ieee.org  Tue Oct 10 14:35:39 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 10 Oct 2006 12:35:39 -0600
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: 
References: 
Message-ID: <452BE7FB.7000509@ieee.org>

Ryan Krauss wrote:
> I am rethinking an algorithm for unwrapping phase data from arctan2
> for a vector of complex numbers (coming from lti models of feedback
> control systems).  I had written something a while ago using weave,
> but my old code causes seg faults with a recent scipy update.  So, I
> am rethinking the algorithm.

There is already a phase unwrapping algorithm in NumPy for 1-d data
called unwrap.

numpy.unwrap(b) seems to work fine on the second column of the data you
posted.

Best,

-Travis

From ryanlists at gmail.com  Tue Oct 10 14:43:31 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 10 Oct 2006 13:43:31 -0500
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: <452BE7FB.7000509@ieee.org>
References: <452BE7FB.7000509@ieee.org>
Message-ID: 

Thanks Travis.

I looked in scipy.signal and when I didn't see it, I just assumed it
didn't exist.

I was just starting to make good progress on my FORTRAN version too.....

I may need to revisit this when I need more control over messy
experimental data.
Is there a tricky in-place way to calculate the difference between a
vector and itself shifted by 1 or more elements?

Thanks,

Ryan

On 10/10/06, Travis Oliphant wrote:
> Ryan Krauss wrote:
> > I am rethinking an algorithm for unwrapping phase data from arctan2
> > for a vector of complex numbers (coming from lti models of feedback
> > control systems).  I had written something a while ago using weave,
> > but my old code causes seg faults with a recent scipy update.  So, I
> > am rethinking the algorithm.
>
> There is already a phase unwrapping algorithm in NumPy for 1-d data
> called unwrap.
>
> numpy.unwrap(b) seems to work fine on the second column of the data you
> posted.
>
> Best,
>
> -Travis
>

From oliphant.travis at ieee.org  Tue Oct 10 14:52:49 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 10 Oct 2006 12:52:49 -0600
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: 
References: <452BE7FB.7000509@ieee.org>
Message-ID: <452BEC01.1000804@ieee.org>

Ryan Krauss wrote:
> Thanks Travis.
>
> I looked in scipy.signal and when I didn't see it, I just assumed it
> didn't exist.
>
> I was just starting to make good progress on my FORTRAN version too.....

The FORTRAN version would be faster and not require extra memory.  The
NumPy version creates the difference vector, finds discontinuities and
forms a correction vector that is cumsummed and added to the original.

> I may need to revisit this when I need more control over messy
> experimental data.  Is there a tricky in-place way to calculate the
> difference between a vector and itself shifted by 1 or more elements?

c[:-1] -= c[1:]   # shift-by-one
c[:-2] -= c[2:]   # shift-by-two

etc.

This will leave unchanged the last elements of the array.
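[Editorial note: a small sketch, not from the thread, checking the in-place slice trick against numpy.diff and showing numpy.unwrap on degree data. numpy.unwrap expects radians (its default jump threshold is pi), so the degree columns discussed in this thread need converting on the way in and out:]

```python
import numpy as np

c = np.array([0.0, 10.0, 25.0, 45.0])

# Out-of-place forward difference: c[1:] - c[:-1].
d = np.diff(c)

# The in-place version computes c[:-1] - c[1:], i.e. the negated
# difference, and leaves the last element of the array untouched.
c2 = c.copy()
c2[:-1] -= c2[1:]
assert np.allclose(c2[:-1], -d)
assert c2[-1] == c[-1]

# numpy.unwrap works in radians, so convert degree data.
deg = np.array([170.0, 179.0, -175.0, -170.0])
undeg = np.degrees(np.unwrap(np.radians(deg)))
# undeg is approximately [170, 179, 185, 190]: the apparent -354 degree
# jump between 179 and -175 is corrected by adding 360.
```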
-Travis

From ryanlists at gmail.com  Tue Oct 10 14:57:09 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 10 Oct 2006 13:57:09 -0500
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: <452BEC01.1000804@ieee.org>
References: <452BE7FB.7000509@ieee.org> <452BEC01.1000804@ieee.org>
Message-ID: 

For this simple problem, my FORTRAN version is running about 30 times
faster, but neither takes very long:

Python vector time = 0.000700950622559
FORTRAN time       = 2.09808349609e-05
ratio              = 33.4090909091
Numpy.unwrap time  = 0.000639915466309
Numpy ratio        = 30.5

I will probably finish the FORTRAN version since it is almost done.

Ryan

On 10/10/06, Travis Oliphant wrote:
> Ryan Krauss wrote:
> > Thanks Travis.
> >
> > I looked in scipy.signal and when I didn't see it, I just assumed it
> > didn't exist.
> >
> > I was just starting to make good progress on my FORTRAN version too.....
>
> The FORTRAN version would be faster and not require extra memory.  The
> NumPy version creates the difference vector, finds discontinuities and
> forms a correction vector that is cumsummed and added to the original.
>
> > I may need to revisit this when I need more control over messy
> > experimental data.  Is there a tricky in-place way to calculate the
> > difference between a vector and itself shifted by 1 or more elements?
>
> c[:-1] -= c[1:]   # shift-by-one
> c[:-2] -= c[2:]   # shift-by-two
>
> etc.
>
> This will leave unchanged the last elements of the array.
> -Travis
>

From jorgen.stenarson at bostream.nu  Tue Oct 10 16:27:31 2006
From: jorgen.stenarson at bostream.nu (Jörgen Stenarson)
Date: Tue, 10 Oct 2006 22:27:31 +0200
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: 
References: 
Message-ID: <452C0233.3070600@bostream.nu>

Ryan,

I don't have any tested code on this computer, but here is pseudo-code
of how I remember doing it:

phasestep = arg(data[1:] / data[:-1])
totalphase = arg(data[0]) + accumulate(phasestep)

where accumulate should give [0, phasestep[0], phasestep[0]+phasestep[1], ...].

This will of course only work as long as the phase difference between
neighbouring points is small enough, but that is probably true for any
algorithm.

/Jörgen Stenarson

Ryan Krauss skrev:
> I am rethinking an algorithm for unwrapping phase data from arctan2
> for a vector of complex numbers (coming from lti models of feedback
> control systems).  I had written something a while ago using weave,
> but my old code causes seg faults with a recent scipy update.  So, I
> am rethinking the algorithm.
>
> The problem occurs when, either because of noise or just the definition
> of the inverse tangent, there are discontinuities in the phase (which
> comes from arctan2(imag(vector), real(vector))).  For example, in
> the attached plot and data file, there is a point where the phase
> passes through -180 degrees.  arctan2 jumps to returning +180, +175,
> etc., when it would make a prettier plot and more sense to me to
> convert +175 to -185.  What I had done previously was write a for loop
> that compares the current value of the phase to the value at the
> previous index (i.e. if phase[i] - phase[i-1] > 300 or if
> phase[i] - phase[i-1] < -300) to find the discontinuities.  Is there a
> cleaner/faster way to do this?
> (I had done it in weave and will probably now do it in f2py for a
> speed comparison.)  I have attached/included a first attempt at an
> algorithm that is more vectorized.  But is there a fast, easy way to
> compare a vector to the same vector shifted one element up or down?
> Right now I am doing this
>
> phase2 = r_[phase[0:1], phase[0:-1]]
>
> to shift the vector (and reuse the first element so there isn't a jump
> from phase[-1] to phase[0], which may legitimately be far apart).
>
> Has anyone else already done this?  Are there obvious bottlenecks with
> the algorithm?
>
> Thanks for your thoughts,
>
> Ryan
>
> =====================================
> from scipy import arange, io, r_
> from pylab import show, figure, subplot, clf, cla, semilogx, xlabel, \
>     ylabel, legend, xlim
>
> import copy
>
> data = io.read_array('phase_data.txt')
> f = data[:, 0]
> phase = data[:, 1]
> figure(1)
> clf()
> startphase = copy.copy(phase)
> semilogx(f, startphase)
>
> keepgoing = 1
>
> inds = arange(0, len(phase))
>
> while keepgoing:
>     phase2 = r_[phase[0:1], phase[0:-1]]  # artificially repeat the first element
>     diff = phase - phase2
>     jumps = inds[abs(diff) > 250]
>     keepgoing = jumps.any()
>     if not keepgoing:
>         break
>     ind = jumps.min()
>     if diff[ind] > 0:
>         phase[ind:] -= 360
>     else:
>         phase[ind:] += 360
>
> semilogx(f, phase, 'r--')
>
> legend(['Before unwrapping', 'after unwrapping'], 2)
>
> xlim([1, 30])
>
> show()
>
> ------------------------------------------------------------------------
>
> 1.30952e+00  -9.60103e+01
> [remaining rows of the attached phase_data.txt listing (frequency,
> wrapped phase in degrees) omitted]
1.58810e+01 -1.35375e+02 > 1.59048e+01 -1.35445e+02 > 1.59286e+01 -1.35516e+02 > 1.59524e+01 -1.35586e+02 > 1.59762e+01 -1.35656e+02 > 1.60000e+01 -1.35726e+02 > 1.60238e+01 -1.35796e+02 > 1.60476e+01 -1.35865e+02 > 1.60714e+01 -1.35935e+02 > 1.60952e+01 -1.36005e+02 > 1.61190e+01 -1.36074e+02 > 1.61429e+01 -1.36143e+02 > 1.61667e+01 -1.36213e+02 > 1.61905e+01 -1.36282e+02 > 1.62143e+01 -1.36351e+02 > 1.62381e+01 -1.36419e+02 > 1.62619e+01 -1.36488e+02 > 1.62857e+01 -1.36557e+02 > 1.63095e+01 -1.36626e+02 > 1.63333e+01 -1.36694e+02 > 1.63571e+01 -1.36762e+02 > 1.63810e+01 -1.36831e+02 > 1.64048e+01 -1.36899e+02 > 1.64286e+01 -1.36967e+02 > 1.64524e+01 -1.37035e+02 > 1.64762e+01 -1.37103e+02 > 1.65000e+01 -1.37170e+02 > 1.65238e+01 -1.37238e+02 > 1.65476e+01 -1.37306e+02 > 1.65714e+01 -1.37373e+02 > 1.65952e+01 -1.37440e+02 > 1.66190e+01 -1.37508e+02 > 1.66429e+01 -1.37575e+02 > 1.66667e+01 -1.37642e+02 > 1.66905e+01 -1.37709e+02 > 1.67143e+01 -1.37776e+02 > 1.67381e+01 -1.37842e+02 > 1.67619e+01 -1.37909e+02 > 1.67857e+01 -1.37976e+02 > 1.68095e+01 -1.38042e+02 > 1.68333e+01 -1.38108e+02 > 1.68571e+01 -1.38175e+02 > 1.68810e+01 -1.38241e+02 > 1.69048e+01 -1.38307e+02 > 1.69286e+01 -1.38373e+02 > 1.69524e+01 -1.38439e+02 > 1.69762e+01 -1.38505e+02 > 1.70000e+01 -1.38570e+02 > 1.70238e+01 -1.38636e+02 > 1.70476e+01 -1.38701e+02 > 1.70714e+01 -1.38767e+02 > 1.70952e+01 -1.38832e+02 > 1.71190e+01 -1.38898e+02 > 1.71429e+01 -1.38963e+02 > 1.71667e+01 -1.39028e+02 > 1.71905e+01 -1.39093e+02 > 1.72143e+01 -1.39158e+02 > 1.72381e+01 -1.39222e+02 > 1.72619e+01 -1.39287e+02 > 1.72857e+01 -1.39352e+02 > 1.73095e+01 -1.39416e+02 > 1.73333e+01 -1.39481e+02 > 1.73571e+01 -1.39545e+02 > 1.73810e+01 -1.39609e+02 > 1.74048e+01 -1.39673e+02 > 1.74286e+01 -1.39737e+02 > 1.74524e+01 -1.39801e+02 > 1.74762e+01 -1.39865e+02 > 1.75000e+01 -1.39929e+02 > 1.75238e+01 -1.39993e+02 > 1.75476e+01 -1.40056e+02 > 1.75714e+01 -1.40120e+02 > 1.75952e+01 -1.40183e+02 > 1.76190e+01 -1.40247e+02 > 
1.76429e+01 -1.40310e+02 > 1.76667e+01 -1.40373e+02 > 1.76905e+01 -1.40436e+02 > 1.77143e+01 -1.40499e+02 > 1.77381e+01 -1.40562e+02 > 1.77619e+01 -1.40625e+02 > 1.77857e+01 -1.40688e+02 > 1.78095e+01 -1.40751e+02 > 1.78333e+01 -1.40813e+02 > 1.78571e+01 -1.40876e+02 > 1.78810e+01 -1.40938e+02 > 1.79048e+01 -1.41001e+02 > 1.79286e+01 -1.41063e+02 > 1.79524e+01 -1.41125e+02 > 1.79762e+01 -1.41187e+02 > 1.80000e+01 -1.41249e+02 > 1.80238e+01 -1.41311e+02 > 1.80476e+01 -1.41373e+02 > 1.80714e+01 -1.41435e+02 > 1.80952e+01 -1.41496e+02 > 1.81190e+01 -1.41558e+02 > 1.81429e+01 -1.41619e+02 > 1.81667e+01 -1.41681e+02 > 1.81905e+01 -1.41742e+02 > 1.82143e+01 -1.41803e+02 > 1.82381e+01 -1.41864e+02 > 1.82619e+01 -1.41926e+02 > 1.82857e+01 -1.41987e+02 > 1.83095e+01 -1.42047e+02 > 1.83333e+01 -1.42108e+02 > 1.83571e+01 -1.42169e+02 > 1.83810e+01 -1.42230e+02 > 1.84048e+01 -1.42290e+02 > 1.84286e+01 -1.42351e+02 > 1.84524e+01 -1.42411e+02 > 1.84762e+01 -1.42471e+02 > 1.85000e+01 -1.42532e+02 > 1.85238e+01 -1.42592e+02 > 1.85476e+01 -1.42652e+02 > 1.85714e+01 -1.42712e+02 > 1.85952e+01 -1.42771e+02 > 1.86190e+01 -1.42831e+02 > 1.86429e+01 -1.42891e+02 > 1.86667e+01 -1.42951e+02 > 1.86905e+01 -1.43010e+02 > 1.87143e+01 -1.43069e+02 > 1.87381e+01 -1.43129e+02 > 1.87619e+01 -1.43188e+02 > 1.87857e+01 -1.43247e+02 > 1.88095e+01 -1.43306e+02 > 1.88333e+01 -1.43365e+02 > 1.88571e+01 -1.43424e+02 > 1.88810e+01 -1.43483e+02 > 1.89048e+01 -1.43542e+02 > 1.89286e+01 -1.43600e+02 > 1.89524e+01 -1.43659e+02 > 1.89762e+01 -1.43717e+02 > 1.90000e+01 -1.43776e+02 > 1.90238e+01 -1.43834e+02 > 1.90476e+01 -1.43892e+02 > 1.90714e+01 -1.43950e+02 > 1.90952e+01 -1.44008e+02 > 1.91190e+01 -1.44066e+02 > 1.91429e+01 -1.44123e+02 > 1.91667e+01 -1.44181e+02 > 1.91905e+01 -1.44239e+02 > 1.92143e+01 -1.44296e+02 > 1.92381e+01 -1.44353e+02 > 1.92619e+01 -1.44411e+02 > 1.92857e+01 -1.44468e+02 > 1.93095e+01 -1.44525e+02 > 1.93333e+01 -1.44582e+02 > 1.93571e+01 -1.44639e+02 > 1.93810e+01 -1.44695e+02 > 
1.94048e+01 -1.44752e+02 > 1.94286e+01 -1.44808e+02 > 1.94524e+01 -1.44865e+02 > 1.94762e+01 -1.44921e+02 > 1.95000e+01 -1.44977e+02 > 1.95238e+01 -1.45033e+02 > 1.95476e+01 -1.45089e+02 > 1.95714e+01 -1.45145e+02 > 1.95952e+01 -1.45201e+02 > 1.96190e+01 -1.45256e+02 > 1.96429e+01 -1.45312e+02 > 1.96667e+01 -1.45367e+02 > 1.96905e+01 -1.45422e+02 > 1.97143e+01 -1.45477e+02 > 1.97381e+01 -1.45532e+02 > 1.97619e+01 -1.45587e+02 > 1.97857e+01 -1.45642e+02 > 1.98095e+01 -1.45696e+02 > 1.98333e+01 -1.45751e+02 > 1.98571e+01 -1.45805e+02 > 1.98810e+01 -1.45859e+02 > 1.99048e+01 -1.45913e+02 > 1.99286e+01 -1.45967e+02 > 1.99524e+01 -1.46021e+02 > 1.99762e+01 -1.46075e+02 > 2.00000e+01 -1.46128e+02 > 2.00238e+01 -1.46181e+02 > 2.00476e+01 -1.46235e+02 > 2.00714e+01 -1.46288e+02 > 2.00952e+01 -1.46341e+02 > 2.01190e+01 -1.46393e+02 > 2.01429e+01 -1.46446e+02 > 2.01667e+01 -1.46498e+02 > 2.01905e+01 -1.46551e+02 > 2.02143e+01 -1.46603e+02 > 2.02381e+01 -1.46655e+02 > 2.02619e+01 -1.46706e+02 > 2.02857e+01 -1.46758e+02 > 2.03095e+01 -1.46810e+02 > 2.03333e+01 -1.46861e+02 > 2.03571e+01 -1.46912e+02 > 2.03810e+01 -1.46963e+02 > 2.04048e+01 -1.47014e+02 > 2.04286e+01 -1.47064e+02 > 2.04524e+01 -1.47115e+02 > 2.04762e+01 -1.47165e+02 > 2.05000e+01 -1.47215e+02 > 2.05238e+01 -1.47265e+02 > 2.05476e+01 -1.47315e+02 > 2.05714e+01 -1.47364e+02 > 2.05952e+01 -1.47414e+02 > 2.06190e+01 -1.47463e+02 > 2.06429e+01 -1.47512e+02 > 2.06667e+01 -1.47560e+02 > 2.06905e+01 -1.47609e+02 > 2.07143e+01 -1.47657e+02 > 2.07381e+01 -1.47706e+02 > 2.07619e+01 -1.47754e+02 > 2.07857e+01 -1.47801e+02 > 2.08095e+01 -1.47849e+02 > 2.08333e+01 -1.47896e+02 > 2.08571e+01 -1.47944e+02 > 2.08810e+01 -1.47991e+02 > 2.09048e+01 -1.48037e+02 > 2.09286e+01 -1.48084e+02 > 2.09524e+01 -1.48130e+02 > 2.09762e+01 -1.48176e+02 > 2.10000e+01 -1.48222e+02 > 2.10238e+01 -1.48268e+02 > 2.10476e+01 -1.48314e+02 > 2.10714e+01 -1.48359e+02 > 2.10952e+01 -1.48404e+02 > 2.11190e+01 -1.48449e+02 > 2.11429e+01 -1.48494e+02 > 
> [phase_data.txt attachment continues; data omitted]
>
> ------------------------------------------------------------------------
>
>
> ------------------------------------------------------------------------
>
> from scipy import arange, io, r_
> from pylab import show, figure, subplot, clf, cla, semilogx, xlabel, ylabel, legend, xlim
>
> import copy
>
> data=io.read_array('phase_data.txt')
> f=data[:,0]
> phase=data[:,1]
> figure(1)
> clf()
> startphase=copy.copy(phase)
> semilogx(f,startphase)
>
> keepgoing=1
>
> inds=arange(0,len(phase))
>
> while keepgoing:
>     phase2=r_[phase[0:1],phase[0:-1]] # artificially repeat the first element
>     diff=phase-phase2
>     jumps=inds[abs(diff)>250]
>     keepgoing=jumps.any()
>     if not keepgoing:
>         break
>     ind=jumps.min()
>     if diff[ind]>0:
>         phase[ind:]-=360
>     else:
>         phase[ind:]+=360
>
> semilogx(f,phase,'r--')
>
> legend(['Before unwrapping','after unwrapping'],2)
>
> xlim([1,30])
>
> show()
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From bryan at cole.uklinux.net Tue Oct 10 16:55:49 2006
From: bryan at cole.uklinux.net (Bryan Cole)
Date: Tue, 10 Oct 2006 21:55:49 +0100
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To:
References:
Message-ID: <1160513747.8450.167.camel@pc1.cole.uklinux.net>

I do this quite a lot in the context of FFTs.
try:

from numpy import zeros_like, cumsum  # zeros_like and cumsum come from numpy

def unwrap(thetaArray):
    """takes an array of theta values in degrees
    returns the data in unwrapped form"""
    diff = zeros_like(thetaArray)
    diff[1:] = thetaArray[1:] - thetaArray[:-1]
    upSteps = diff > 180
    downSteps = diff < -180
    shift = cumsum(upSteps) - cumsum(downSteps)
    return thetaArray - 360*shift

Obviously, a Weave version will be a lot faster, but this is at least
fully vectorised, so it's not bad.

BC

On Tue, 2006-10-10 at 12:32 -0500, Ryan Krauss wrote:
> I am rethinking an algorithm for unwrapping phase data from arctan2
> for a vector of complex numbers (coming from lti models of feedback
> control systems). I had written something a while ago using weave,
> but my old code causes seg faults with a recent scipy update. So, I
> am rethinking the algorithm.
>
> The problem occurs when, either because of noise or just the definition
> of the inverse tangent, there are discontinuities in the phase (which
> comes from arctan2(imag(vector), real(vector))). For example, in
> the attached plot and data file, there is a point where the phase
> passes through -180 degrees. arctan2 jumps to returning +180, +175,
> etc., when it would make a prettier plot and more sense to me to convert
> +175 to -185. What I had done previously was write a for loop that
> compares the current value of the phase to the value at the previous
> index (i.e. if phase[i]-phase[i-1]>300 or if phase[i]-phase[i-1]<-300)
> to find the discontinuities. Is there a cleaner/faster way to do
> this? (I had done it in weave and will probably now do it in f2py for
> comparison on speeds.) I have attached/included a first attempt at an
> algorithm that is more vectorized. But is there a fast, easy way to
> compare a vector to the same vector shifted one element up or down?
> Right now I am doing this
>
> phase2=r_[phase[0:1],phase[0:-1]]
>
> to shift the vector (and reuse the first element so there isn't a jump
> from phase[-1] to phase[0], which may legitimately be far apart).
>
> Has anyone else already done this? Are there obvious bottlenecks with
> the algorithm?
>
> Thanks for your thoughts,
>
> Ryan
>
> =====================================
> from scipy import arange, io, r_
> from pylab import show, figure, subplot, clf, cla, semilogx, xlabel, ylabel, legend, xlim
>
> import copy
>
> data=io.read_array('phase_data.txt')
> f=data[:,0]
> phase=data[:,1]
> figure(1)
> clf()
> startphase=copy.copy(phase)
> semilogx(f,startphase)
>
> keepgoing=1
>
> inds=arange(0,len(phase))
>
> while keepgoing:
>     phase2=r_[phase[0:1],phase[0:-1]] # artificially repeat the first element
>     diff=phase-phase2
>     jumps=inds[abs(diff)>250]
>     keepgoing=jumps.any()
>     if not keepgoing:
>         break
>     ind=jumps.min()
>     if diff[ind]>0:
>         phase[ind:]-=360
>     else:
>         phase[ind:]+=360
>
> semilogx(f,phase,'r--')
>
> legend(['Before unwrapping','after unwrapping'],2)
>
> xlim([1,30])
>
> show()
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From andrew.young at sri.com Tue Oct 10 16:58:54 2006
From: andrew.young at sri.com (Andrew B. Young)
Date: Tue, 10 Oct 2006 13:58:54 -0700
Subject: [SciPy-user] Plotting an array
In-Reply-To: <87u02caw72.fsf@peds-pc311.bsd.uchicago.edu>
References: <452BB87F.8000601@sri.com> <20061010161945.GA7916@clipper.ens.fr> <452BD8D3.2010500@sri.com> <87slhwcb9c.fsf@peds-pc311.bsd.uchicago.edu> <87u02caw72.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <452C098E.5090500@sri.com>

Thanks John!

-andyy

John Hunter wrote:
>>>>>> "John" == John Hunter writes:
>
>>>>>> "Andrew" == Andrew B Young writes:
>
>     Andrew> Gael, The following little program --
>     Andrew> -- test.py ----------------------
>     Andrew> | from scipy import arange, sin, pi, interpolate
>     Andrew> | from pylab import plot, show, savefig
>     Andrew> | t = arange(0, 2.0, 0.1)
>     Andrew> | plot(t)
>     Andrew> | savefig('test.png')
>
>     Andrew> Produces the plot at http://polar.sri.com/~ayoung/test.png
>
>     Andrew> Why does the plot "turn on" at one? If I
>     Andrew> plot( [ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
>     Andrew> 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9] )
>     Andrew> I get what I would expect: http://polar.sri.com/~ayoung/test2.png
>
>     John> This looks odd; one possibility is that you are mixing your
>     John> numerix packages. Check your matplotlib rc setting. If you
>
> Yep, that's it. I can reproduce it with an old Numeric and a newish
> numpy/scipy. The combination of versions below exposes the problem if
> your rc setting is Numeric. Either of the suggested changes I made in
> my last post will fix your problem.
>
> In [5]: import numpy; numpy.__version__
> Out[5]: '0.9.6.2138'
>
> In [6]: import scipy; scipy.__version__
> Out[6]: '0.4.7.1617'
>
> In [7]: import Numeric; Numeric.__version__
> Out[7]: '23.1'
>
> JDH
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From pecora at anvil.nrl.navy.mil Tue Oct 10 17:14:07 2006
From: pecora at anvil.nrl.navy.mil (Louis Pecora)
Date: Tue, 10 Oct 2006 17:14:07 -0400
Subject: [SciPy-user] Missing library will not allow SciPy to load.
Message-ID: <452C0D1F.9060301@anvil.nrl.navy.mil>

When I do

from scipy import *

I get the following error:

ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so:
Library not loaded: /usr/local/lib/libg2c.0.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so
Reason: image not found

It seems I'm missing libg2c.0.dylib. I just installed the SciPy SuperPack. I guess it wasn't in there or was supposed to be installed from some other package. Does anyone know about this?

--
Cheers,

Lou Pecora
Code 6362
Naval Research Lab
Washington, DC 20375
USA
Ph: +202-767-6002
email: pecora at anvil.nrl.navy.mil

From peridot.faceted at gmail.com Tue Oct 10 17:14:07 2006
From: peridot.faceted at gmail.com (A. M. Archibald)
Date: Tue, 10 Oct 2006 17:14:07 -0400
Subject: [SciPy-user] How to calculate minors.
In-Reply-To:
References:
Message-ID:

On 10/10/06, Armando Serrano Lombillo wrote:
> Hello, is there any way to easily calculate minors of matrices with
> scipy/numpy?
>
> For example, if we have matrix a:
> [[1 2 3]
>  [4 5 6]
>  [7 8 9]]
>
> then minor(a,0) should be the same as det([[5,6],[8,9]]) and minor(a,1)
> should be the same as det([[1,3],[7,9]]).
>
> Armando.

You can always use fancy indexing to chop out the row and column you
are not interested in and take the determinant (with appropriate
signs). Maybe a bit complicated, but if you want a single minor it's
good enough to implement minor() in pure python.

A. M. Archibald

From ryanlists at gmail.com Tue Oct 10 17:58:13 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 10 Oct 2006 16:58:13 -0500
Subject: [SciPy-user] phase wrapping algorithm
In-Reply-To: <1160513747.8450.167.camel@pc1.cole.uklinux.net>
References: <1160513747.8450.167.camel@pc1.cole.uklinux.net>
Message-ID:

This probably wasn't worth the amount of time I spent on it, but I'll
submit it to the list in case anyone else is curious or can learn from
it. This is a bit of a contrived example, but I often find with
experimental data that I have significant noise at high and low
frequencies, so I want to start the phase wrapping in the middle at a
known good point. So I wrote a phase wrapping algorithm that breaks the
data into two vectors, phase wraps the one above the middle frequency in
the normal way, but steps through the one below the middle frequency in
reverse. The FORTRAN and Python versions are attached. phasewrap.py
contains the Python implementation and includes a function to compare
the two. Typical output includes the attached plot and these times:

FORTRAN time=0.00100302696228
Numpy.unwrap time=0.00893402099609
Numpy/FORTRAN time ratio=8.90705966247
Bryan Cole time=0.00318098068237
Bryan/FORTRAN time ratio=3.17138103161

Bryan's algorithm is exactly the kind of vectorized thinking that I need
to learn how to do. I think in for loops; I don't know why.

Thanks to everyone who contributed to this,

Ryan

On 10/10/06, Bryan Cole wrote:
> I do this quite a lot in the context of FFTs. try:
>
> def unwrap(thetaArray):
>     """takes an array of theta values in degrees
>     returns the data in unwrapped form"""
>     diff = zeros_like(thetaArray)
>     diff[1:] = thetaArray[1:] - thetaArray[:-1]
>     upSteps = diff > 180
>     downSteps = diff < -180
>     shift = cumsum(upSteps) - cumsum(downSteps)
>     return thetaArray - 360*shift
>
> Obviously, a Weave version will be a lot faster, but this is at least
> fully vectorised, so it's not bad.
>
> BC
>
> On Tue, 2006-10-10 at 12:32 -0500, Ryan Krauss wrote:
> > I am rethinking an algorithm for unwrapping phase data from arctan2
> > for a vector of complex numbers (coming from lti models of feedback
> > control systems). I had written something a while ago using weave,
> > but my old code causes seg faults with a recent scipy update. So, I
> > am rethinking the algorithm.
> >
> > The problem occurs when, either because of noise or just the definition
> > of the inverse tangent, there are discontinuities in the phase (which
> > comes from arctan2(imag(vector), real(vector))). For example, in
> > the attached plot and data file, there is a point where the phase
> > passes through -180 degrees. arctan2 jumps to returning +180, +175,
> > etc., when it would make a prettier plot and more sense to me to convert
> > +175 to -185. What I had done previously was write a for loop that
> > compares the current value of the phase to the value at the previous
> > index (i.e. if phase[i]-phase[i-1]>300 or if phase[i]-phase[i-1]<-300)
> > to find the discontinuities. Is there a cleaner/faster way to do
> > this? (I had done it in weave and will probably now do it in f2py for
> > comparison on speeds.) I have attached/included a first attempt at an
> > algorithm that is more vectorized. But is there a fast, easy way to
> > compare a vector to the same vector shifted one element up or down?
> > Right now I am doing this
> >
> > phase2=r_[phase[0:1],phase[0:-1]]
> >
> > to shift the vector (and reuse the first element so there isn't a jump
> > from phase[-1] to phase[0], which may legitimately be far apart).
> >
> > Has anyone else already done this? Are there obvious bottlenecks with
> > the algorithm?
> >
> > Thanks for your thoughts,
> >
> > Ryan
> >
> > =====================================
> > from scipy import arange, io, r_
> > from pylab import show, figure, subplot, clf, cla, semilogx, xlabel, ylabel, legend, xlim
> >
> > import copy
> >
> > data=io.read_array('phase_data.txt')
> > f=data[:,0]
> > phase=data[:,1]
> > figure(1)
> > clf()
> > startphase=copy.copy(phase)
> > semilogx(f,startphase)
> >
> > keepgoing=1
> >
> > inds=arange(0,len(phase))
> >
> > while keepgoing:
> >     phase2=r_[phase[0:1],phase[0:-1]] # artificially repeat the first element
> >     diff=phase-phase2
> >     jumps=inds[abs(diff)>250]
> >     keepgoing=jumps.any()
> >     if not keepgoing:
> >         break
> >     ind=jumps.min()
> >     if diff[ind]>0:
> >         phase[ind:]-=360
> >     else:
> >         phase[ind:]+=360
> >
> > semilogx(f,phase,'r--')
> >
> > legend(['Before unwrapping','after unwrapping'],2)
> >
> > xlim([1,30])
> >
> > show()
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
[phase_data.txt attachment: two columns, frequency (Hz) and phase (deg); data omitted]
-1.20053e+02 1.15476e+01 -1.20154e+02 1.15714e+01 -1.20254e+02 1.15952e+01 -1.20354e+02 1.16190e+01 -1.20454e+02 1.16429e+01 -1.20554e+02 1.16667e+01 -1.20654e+02 1.16905e+01 -1.20753e+02 1.17143e+01 -1.20852e+02 1.17381e+01 -1.20951e+02 1.17619e+01 -1.21049e+02 1.17857e+01 -1.21148e+02 1.18095e+01 -1.21246e+02 1.18333e+01 -1.21344e+02 1.18571e+01 -1.21441e+02 1.18810e+01 -1.21539e+02 1.19048e+01 -1.21636e+02 1.19286e+01 -1.21733e+02 1.19524e+01 -1.21830e+02 1.19762e+01 -1.21927e+02 1.20000e+01 -1.22023e+02 1.20238e+01 -1.22119e+02 1.20476e+01 -1.22215e+02 1.20714e+01 -1.22311e+02 1.20952e+01 -1.22406e+02 1.21190e+01 -1.22502e+02 1.21429e+01 -1.22597e+02 1.21667e+01 -1.22692e+02 1.21905e+01 -1.22786e+02 1.22143e+01 -1.22881e+02 1.22381e+01 -1.22975e+02 1.22619e+01 -1.23069e+02 1.22857e+01 -1.23163e+02 1.23095e+01 -1.23256e+02 1.23333e+01 -1.23350e+02 1.23571e+01 -1.23443e+02 1.23810e+01 -1.23536e+02 1.24048e+01 -1.23629e+02 1.24286e+01 -1.23721e+02 1.24524e+01 -1.23814e+02 1.24762e+01 -1.23906e+02 1.25000e+01 -1.23998e+02 1.25238e+01 -1.24090e+02 1.25476e+01 -1.24181e+02 1.25714e+01 -1.24273e+02 1.25952e+01 -1.24364e+02 1.26190e+01 -1.24455e+02 1.26429e+01 -1.24546e+02 1.26667e+01 -1.24636e+02 1.26905e+01 -1.24727e+02 1.27143e+01 -1.24817e+02 1.27381e+01 -1.24907e+02 1.27619e+01 -1.24997e+02 1.27857e+01 -1.25086e+02 1.28095e+01 -1.25176e+02 1.28333e+01 -1.25265e+02 1.28571e+01 -1.25354e+02 1.28810e+01 -1.25443e+02 1.29048e+01 -1.25532e+02 1.29286e+01 -1.25620e+02 1.29524e+01 -1.25709e+02 1.29762e+01 -1.25797e+02 1.30000e+01 -1.25885e+02 1.30238e+01 -1.25972e+02 1.30476e+01 -1.26060e+02 1.30714e+01 -1.26147e+02 1.30952e+01 -1.26235e+02 1.31190e+01 -1.26322e+02 1.31429e+01 -1.26409e+02 1.31667e+01 -1.26495e+02 1.31905e+01 -1.26582e+02 1.32143e+01 -1.26668e+02 1.32381e+01 -1.26754e+02 1.32619e+01 -1.26840e+02 1.32857e+01 -1.26926e+02 1.33095e+01 -1.27012e+02 1.33333e+01 -1.27097e+02 1.33571e+01 -1.27182e+02 1.33810e+01 -1.27267e+02 1.34048e+01 -1.27352e+02 1.34286e+01 
-1.27437e+02 1.34524e+01 -1.27522e+02 1.34762e+01 -1.27606e+02 1.35000e+01 -1.27690e+02 1.35238e+01 -1.27775e+02 1.35476e+01 -1.27858e+02 1.35714e+01 -1.27942e+02 1.35952e+01 -1.28026e+02 1.36190e+01 -1.28109e+02 1.36429e+01 -1.28193e+02 1.36667e+01 -1.28276e+02 1.36905e+01 -1.28359e+02 1.37143e+01 -1.28441e+02 1.37381e+01 -1.28524e+02 1.37619e+01 -1.28606e+02 1.37857e+01 -1.28689e+02 1.38095e+01 -1.28771e+02 1.38333e+01 -1.28853e+02 1.38571e+01 -1.28935e+02 1.38810e+01 -1.29016e+02 1.39048e+01 -1.29098e+02 1.39286e+01 -1.29179e+02 1.39524e+01 -1.29260e+02 1.39762e+01 -1.29341e+02 1.40000e+01 -1.29422e+02 1.40238e+01 -1.29503e+02 1.40476e+01 -1.29584e+02 1.40714e+01 -1.29664e+02 1.40952e+01 -1.29744e+02 1.41190e+01 -1.29824e+02 1.41429e+01 -1.29904e+02 1.41667e+01 -1.29984e+02 1.41905e+01 -1.30064e+02 1.42143e+01 -1.30143e+02 1.42381e+01 -1.30223e+02 1.42619e+01 -1.30302e+02 1.42857e+01 -1.30381e+02 1.43095e+01 -1.30460e+02 1.43333e+01 -1.30539e+02 1.43571e+01 -1.30617e+02 1.43810e+01 -1.30696e+02 1.44048e+01 -1.30774e+02 1.44286e+01 -1.30853e+02 1.44524e+01 -1.30931e+02 1.44762e+01 -1.31009e+02 1.45000e+01 -1.31086e+02 1.45238e+01 -1.31164e+02 1.45476e+01 -1.31241e+02 1.45714e+01 -1.31319e+02 1.45952e+01 -1.31396e+02 1.46190e+01 -1.31473e+02 1.46429e+01 -1.31550e+02 1.46667e+01 -1.31627e+02 1.46905e+01 -1.31704e+02 1.47143e+01 -1.31780e+02 1.47381e+01 -1.31857e+02 1.47619e+01 -1.31933e+02 1.47857e+01 -1.32009e+02 1.48095e+01 -1.32085e+02 1.48333e+01 -1.32161e+02 1.48571e+01 -1.32237e+02 1.48810e+01 -1.32312e+02 1.49048e+01 -1.32388e+02 1.49286e+01 -1.32463e+02 1.49524e+01 -1.32538e+02 1.49762e+01 -1.32613e+02 1.50000e+01 -1.32688e+02 1.50238e+01 -1.32763e+02 1.50476e+01 -1.32838e+02 1.50714e+01 -1.32912e+02 1.50952e+01 -1.32987e+02 1.51190e+01 -1.33061e+02 1.51429e+01 -1.33135e+02 1.51667e+01 -1.33209e+02 1.51905e+01 -1.33283e+02 1.52143e+01 -1.33357e+02 1.52381e+01 -1.33431e+02 1.52619e+01 -1.33504e+02 1.52857e+01 -1.33578e+02 1.53095e+01 -1.33651e+02 1.53333e+01 
-1.33724e+02 1.53571e+01 -1.33797e+02 1.53810e+01 -1.33870e+02 1.54048e+01 -1.33943e+02 1.54286e+01 -1.34016e+02 1.54524e+01 -1.34088e+02 1.54762e+01 -1.34161e+02 1.55000e+01 -1.34233e+02 1.55238e+01 -1.34306e+02 1.55476e+01 -1.34378e+02 1.55714e+01 -1.34450e+02 1.55952e+01 -1.34521e+02 1.56190e+01 -1.34593e+02 1.56429e+01 -1.34665e+02 1.56667e+01 -1.34736e+02 1.56905e+01 -1.34808e+02 1.57143e+01 -1.34879e+02 1.57381e+01 -1.34950e+02 1.57619e+01 -1.35021e+02 1.57857e+01 -1.35092e+02 1.58095e+01 -1.35163e+02 1.58333e+01 -1.35234e+02 1.58571e+01 -1.35305e+02 1.58810e+01 -1.35375e+02 1.59048e+01 -1.35445e+02 1.59286e+01 -1.35516e+02 1.59524e+01 -1.35586e+02 1.59762e+01 -1.35656e+02 1.60000e+01 -1.35726e+02 1.60238e+01 -1.35796e+02 1.60476e+01 -1.35865e+02 1.60714e+01 -1.35935e+02 1.60952e+01 -1.36005e+02 1.61190e+01 -1.36074e+02 1.61429e+01 -1.36143e+02 1.61667e+01 -1.36213e+02 1.61905e+01 -1.36282e+02 1.62143e+01 -1.36351e+02 1.62381e+01 -1.36419e+02 1.62619e+01 -1.36488e+02 1.62857e+01 -1.36557e+02 1.63095e+01 -1.36626e+02 1.63333e+01 -1.36694e+02 1.63571e+01 -1.36762e+02 1.63810e+01 -1.36831e+02 1.64048e+01 -1.36899e+02 1.64286e+01 -1.36967e+02 1.64524e+01 -1.37035e+02 1.64762e+01 -1.37103e+02 1.65000e+01 -1.37170e+02 1.65238e+01 -1.37238e+02 1.65476e+01 -1.37306e+02 1.65714e+01 -1.37373e+02 1.65952e+01 -1.37440e+02 1.66190e+01 -1.37508e+02 1.66429e+01 -1.37575e+02 1.66667e+01 -1.37642e+02 1.66905e+01 -1.37709e+02 1.67143e+01 -1.37776e+02 1.67381e+01 -1.37842e+02 1.67619e+01 -1.37909e+02 1.67857e+01 -1.37976e+02 1.68095e+01 -1.38042e+02 1.68333e+01 -1.38108e+02 1.68571e+01 -1.38175e+02 1.68810e+01 -1.38241e+02 1.69048e+01 -1.38307e+02 1.69286e+01 -1.38373e+02 1.69524e+01 -1.38439e+02 1.69762e+01 -1.38505e+02 1.70000e+01 -1.38570e+02 1.70238e+01 -1.38636e+02 1.70476e+01 -1.38701e+02 1.70714e+01 -1.38767e+02 1.70952e+01 -1.38832e+02 1.71190e+01 -1.38898e+02 1.71429e+01 -1.38963e+02 1.71667e+01 -1.39028e+02 1.71905e+01 -1.39093e+02 1.72143e+01 -1.39158e+02 1.72381e+01 
-1.39222e+02 1.72619e+01 -1.39287e+02 1.72857e+01 -1.39352e+02 1.73095e+01 -1.39416e+02 1.73333e+01 -1.39481e+02 1.73571e+01 -1.39545e+02 1.73810e+01 -1.39609e+02 1.74048e+01 -1.39673e+02 1.74286e+01 -1.39737e+02 1.74524e+01 -1.39801e+02 1.74762e+01 -1.39865e+02 1.75000e+01 -1.39929e+02 1.75238e+01 -1.39993e+02 1.75476e+01 -1.40056e+02 1.75714e+01 -1.40120e+02 1.75952e+01 -1.40183e+02 1.76190e+01 -1.40247e+02 1.76429e+01 -1.40310e+02 1.76667e+01 -1.40373e+02 1.76905e+01 -1.40436e+02 1.77143e+01 -1.40499e+02 1.77381e+01 -1.40562e+02 1.77619e+01 -1.40625e+02 1.77857e+01 -1.40688e+02 1.78095e+01 -1.40751e+02 1.78333e+01 -1.40813e+02 1.78571e+01 -1.40876e+02 1.78810e+01 -1.40938e+02 1.79048e+01 -1.41001e+02 1.79286e+01 -1.41063e+02 1.79524e+01 -1.41125e+02 1.79762e+01 -1.41187e+02 1.80000e+01 -1.41249e+02 1.80238e+01 -1.41311e+02 1.80476e+01 -1.41373e+02 1.80714e+01 -1.41435e+02 1.80952e+01 -1.41496e+02 1.81190e+01 -1.41558e+02 1.81429e+01 -1.41619e+02 1.81667e+01 -1.41681e+02 1.81905e+01 -1.41742e+02 1.82143e+01 -1.41803e+02 1.82381e+01 -1.41864e+02 1.82619e+01 -1.41926e+02 1.82857e+01 -1.41987e+02 1.83095e+01 -1.42047e+02 1.83333e+01 -1.42108e+02 1.83571e+01 -1.42169e+02 1.83810e+01 -1.42230e+02 1.84048e+01 -1.42290e+02 1.84286e+01 -1.42351e+02 1.84524e+01 -1.42411e+02 1.84762e+01 -1.42471e+02 1.85000e+01 -1.42532e+02 1.85238e+01 -1.42592e+02 1.85476e+01 -1.42652e+02 1.85714e+01 -1.42712e+02 1.85952e+01 -1.42771e+02 1.86190e+01 -1.42831e+02 1.86429e+01 -1.42891e+02 1.86667e+01 -1.42951e+02 1.86905e+01 -1.43010e+02 1.87143e+01 -1.43069e+02 1.87381e+01 -1.43129e+02 1.87619e+01 -1.43188e+02 1.87857e+01 -1.43247e+02 1.88095e+01 -1.43306e+02 1.88333e+01 -1.43365e+02 1.88571e+01 -1.43424e+02 1.88810e+01 -1.43483e+02 1.89048e+01 -1.43542e+02 1.89286e+01 -1.43600e+02 1.89524e+01 -1.43659e+02 1.89762e+01 -1.43717e+02 1.90000e+01 -1.43776e+02 1.90238e+01 -1.43834e+02 1.90476e+01 -1.43892e+02 1.90714e+01 -1.43950e+02 1.90952e+01 -1.44008e+02 1.91190e+01 -1.44066e+02 1.91429e+01 
-1.44123e+02 1.91667e+01 -1.44181e+02 1.91905e+01 -1.44239e+02 1.92143e+01 -1.44296e+02 1.92381e+01 -1.44353e+02 1.92619e+01 -1.44411e+02 1.92857e+01 -1.44468e+02 1.93095e+01 -1.44525e+02 1.93333e+01 -1.44582e+02 1.93571e+01 -1.44639e+02 1.93810e+01 -1.44695e+02 1.94048e+01 -1.44752e+02 1.94286e+01 -1.44808e+02 1.94524e+01 -1.44865e+02 1.94762e+01 -1.44921e+02 1.95000e+01 -1.44977e+02 1.95238e+01 -1.45033e+02 1.95476e+01 -1.45089e+02 1.95714e+01 -1.45145e+02 1.95952e+01 -1.45201e+02 1.96190e+01 -1.45256e+02 1.96429e+01 -1.45312e+02 1.96667e+01 -1.45367e+02 1.96905e+01 -1.45422e+02 1.97143e+01 -1.45477e+02 1.97381e+01 -1.45532e+02 1.97619e+01 -1.45587e+02 1.97857e+01 -1.45642e+02 1.98095e+01 -1.45696e+02 1.98333e+01 -1.45751e+02 1.98571e+01 -1.45805e+02 1.98810e+01 -1.45859e+02 1.99048e+01 -1.45913e+02 1.99286e+01 -1.45967e+02 1.99524e+01 -1.46021e+02 1.99762e+01 -1.46075e+02 2.00000e+01 -1.46128e+02 2.00238e+01 -1.46181e+02 2.00476e+01 -1.46235e+02 2.00714e+01 -1.46288e+02 2.00952e+01 -1.46341e+02 2.01190e+01 -1.46393e+02 2.01429e+01 -1.46446e+02 2.01667e+01 -1.46498e+02 2.01905e+01 -1.46551e+02 2.02143e+01 -1.46603e+02 2.02381e+01 -1.46655e+02 2.02619e+01 -1.46706e+02 2.02857e+01 -1.46758e+02 2.03095e+01 -1.46810e+02 2.03333e+01 -1.46861e+02 2.03571e+01 -1.46912e+02 2.03810e+01 -1.46963e+02 2.04048e+01 -1.47014e+02 2.04286e+01 -1.47064e+02 2.04524e+01 -1.47115e+02 2.04762e+01 -1.47165e+02 2.05000e+01 -1.47215e+02 2.05238e+01 -1.47265e+02 2.05476e+01 -1.47315e+02 2.05714e+01 -1.47364e+02 2.05952e+01 -1.47414e+02 2.06190e+01 -1.47463e+02 2.06429e+01 -1.47512e+02 2.06667e+01 -1.47560e+02 2.06905e+01 -1.47609e+02 2.07143e+01 -1.47657e+02 2.07381e+01 -1.47706e+02 2.07619e+01 -1.47754e+02 2.07857e+01 -1.47801e+02 2.08095e+01 -1.47849e+02 2.08333e+01 -1.47896e+02 2.08571e+01 -1.47944e+02 2.08810e+01 -1.47991e+02 2.09048e+01 -1.48037e+02 2.09286e+01 -1.48084e+02 2.09524e+01 -1.48130e+02 2.09762e+01 -1.48176e+02 2.10000e+01 -1.48222e+02 2.10238e+01 -1.48268e+02 2.10476e+01 
-1.48314e+02 2.10714e+01 -1.48359e+02 2.10952e+01 -1.48404e+02 2.11190e+01 -1.48449e+02 2.11429e+01 -1.48494e+02 2.11667e+01 -1.48538e+02 2.11905e+01 -1.48583e+02 2.12143e+01 -1.48627e+02 2.12381e+01 -1.48671e+02 2.12619e+01 -1.48714e+02 2.12857e+01 -1.48758e+02 2.13095e+01 -1.48801e+02 2.13333e+01 -1.48844e+02 2.13571e+01 -1.48887e+02 2.13810e+01 -1.48930e+02 2.14048e+01 -1.48972e+02 2.14286e+01 -1.49014e+02 2.14524e+01 -1.49056e+02 2.14762e+01 -1.49098e+02 2.15000e+01 -1.49140e+02 2.15238e+01 -1.49181e+02 -------------- next part -------------- A non-text attachment was scrubbed... Name: phasemassage.f Type: text/x-fortran Size: 1401 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: phasewrap.py Type: text/x-python Size: 2634 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: phasewrap.png Type: image/png Size: 27842 bytes Desc: not available URL: From cygnusx1 at mac.com Tue Oct 10 22:21:28 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Tue, 10 Oct 2006 22:21:28 -0400 Subject: [SciPy-user] Missing library will not allow SciPy to load. Message-ID: <7D612544-6319-457E-B27F-90F734F0BA69@mac.com> > When I do > > from scipy import * > > I get the following error: > > ImportError: Failure linking new module: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ > site-packages/scipy/sparse/sparsetools.so: > Library not loaded: /usr/local/lib/libg2c.0.dylib > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ > site-packages/scipy/sparse/sparsetools.so > Reason: image not found > > > It seems I'm missing libg2c.0.dylib. I just installed the SciPy > SuperPack. I guess it wasn't in there or was supposed to be installed > from some other package. Does anyone know about this? > I believe this is installed as part of the g77 package (g77v3.4- bin.tar.gz). Tom -- W.T. Bridgman, Ph.D. 
Physics & Astronomy From w.richert at gmx.net Wed Oct 11 06:34:35 2006 From: w.richert at gmx.net (Willi Richert) Date: Wed, 11 Oct 2006 12:34:35 +0200 Subject: [SciPy-user] Pickling poly1d() Message-ID: <200610111234.36223.w.richert@gmx.net> Hi, to be able to pickle poly1d instances I had to add __set/getstate__ and remove __setattr__ from it. It works now; however, I think there were some good reasons for the current version not to provide those methods. Have I introduced other problems or side effects with the attached version? I diff'ed against numpy-0.9.8, but polynomial.py in the newest version (1.0rc2) has the same problem. Regards, wr -------------- next part -------------- A non-text attachment was scrubbed... Name: polynomial.py.diff Type: text/x-diff Size: 414 bytes Desc: not available URL: From pecora at anvil.nrl.navy.mil Wed Oct 11 06:53:50 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Wed, 11 Oct 2006 06:53:50 -0400 Subject: [SciPy-user] Missing library will not allow SciPy to load. In-Reply-To: <7D612544-6319-457E-B27F-90F734F0BA69@mac.com> References: <7D612544-6319-457E-B27F-90F734F0BA69@mac.com> Message-ID: <452CCD3E.3060706@anvil.nrl.navy.mil> Tom Bridgman wrote: >> When I do >> >> from scipy import * >> >> I get the following error: >> >> ImportError: Failure linking new module: >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ >> site-packages/scipy/sparse/sparsetools.so: >> Library not loaded: /usr/local/lib/libg2c.0.dylib >> Referenced from: >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ >> site-packages/scipy/sparse/sparsetools.so >> Reason: image not found >> >> >> It seems I'm missing libg2c.0.dylib. I just installed the SciPy >> SuperPack. I guess it wasn't in there or was supposed to be installed >> from some other package. Does anyone know about this? >> >> > I believe this is installed as part of the g77 package (g77v3.4-bin.tar.gz). > > Tom > -- > Thanks.
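[Editorial aside on the poly1d pickling question above: the __getstate__/__setstate__ pattern Willi describes can be sketched with a toy stand-in class. This is a minimal, pure-stdlib illustration; `Poly` and its `_coeffs` attribute are invented for the example and are not numpy's actual poly1d implementation.]

```python
import pickle

class Poly:
    """Toy stand-in for a class that, like the old poly1d,
    blocks arbitrary attribute assignment via __setattr__."""

    def __init__(self, coeffs):
        # Bypass our own restrictive __setattr__ for internal storage.
        object.__setattr__(self, "_coeffs", list(coeffs))

    def __setattr__(self, name, value):
        # Only a fixed internal layout is allowed; everything else fails.
        raise AttributeError("attribute %r cannot be set" % name)

    def __getstate__(self):
        # Pickle only the coefficient list.
        return {"_coeffs": self._coeffs}

    def __setstate__(self, state):
        # Restore storage without going through the restrictive __setattr__.
        object.__setattr__(self, "_coeffs", state["_coeffs"])

p = Poly([1, 2, 3])
q = pickle.loads(pickle.dumps(p))
print(q._coeffs)  # [1, 2, 3]
```

Restoring state through object.__setattr__ is what lets unpickling coexist with a restrictive __setattr__; whether exposing those hooks invites other side effects is exactly the trade-off Willi's question raises.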
You know, I think I didn't install g77 thinking I would never need it since I don't use Fortran anymore. But I didn't think about dependencies within SciPy. I'll install it. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From ryanlists at gmail.com Wed Oct 11 10:55:43 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 11 Oct 2006 09:55:43 -0500 Subject: [SciPy-user] SciPy Ubuntu Dapper installation using Andrew Straw's packages Message-ID: I have a dumb apt question. I have added deb http://debs.astraw.com/ dapper/ deb-src http://debs.astraw.com/ dapper/ to my sources.list. What are the names of the actual packages I need to install? I searched using synaptic for numpy and found one called python-scientific but didn't actually see anything called numpy or scipy. Thanks, Ryan From gael.varoquaux at normalesup.org Wed Oct 11 11:02:09 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 11 Oct 2006 17:02:09 +0200 Subject: [SciPy-user] SciPy Ubuntu Dapper installation using Andrew Straw's packages In-Reply-To: References: Message-ID: <20061011150208.GH19992@clipper.ens.fr> On Wed, Oct 11, 2006 at 09:55:43AM -0500, Ryan Krauss wrote: > I have a dumb apt question. I have added > deb http://debs.astraw.com/ dapper/ > deb-src http://debs.astraw.com/ dapper/ > to my sources.list. What are the names of the actual packages I need to install? Did you update your package list?
Searching for numpy with an updated package list gives:

python-numeric-tutorial - Tutorial for the Numerical Python Library
python-scientific - Python modules useful for scientific computing
python-numarray - An array processing package modelled after Python-Numeric
python-numeric - Numerical (matrix-oriented) Mathematics for Python
python-scientific-doc - Python modules useful for scientific computing
python2.4-numarray - An array processing package modelled after Python-Numeric
python2.4-numeric - Numerical (matrix-oriented) Mathematics for Python
python-numpy - NumPy: array processing for numbers, strings, records,...

I suggest you first remove all the matplotlib and scipy packages, then install python-matplotlib, python-numpy and python-scipy. Gaël From R.Springuel at umit.maine.edu Wed Oct 11 17:23:46 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 11 Oct 2006 17:23:46 -0400 Subject: [SciPy-user] Installation Problems on Mac OS X Message-ID: <452D60E2.70400@umit.maine.edu> Okay, I've followed the instructions in the Wiki, but I'm still unable to install scipy-0.5.1 on my new Mac. I've included the full output of what happens when I try below. Can anyone tell me what I'm doing wrong?
$ python setup.py install fft_opt_info: fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/Versions/2.5/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] djbfft_info: NOT AVAILABLE FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] non-existing path in 'Lib/linsolve': 'tests' umfpack_info: libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'Lib/maxentropy': 'doc' Warning: Subpackage 'Lib' configuration returned as 'scipy' running install running build running config_fc running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/Lib/interpolate/dfitpack-f2pywrappers.f' to sources. 
building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/Lib/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. 
adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.linalg._iterative" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. 
building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.sparsetools" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/Lib/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying build/src.macosx-10.3-fat-2.5/scipy/__config__.py -> build/lib.macosx-10.3-fat-2.5/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler Could not locate executable g77 Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize Gnu95FCompiler customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler customize NAGFCompiler customize NAGFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: f95 -fixed -O4 -target=native Fortran f90 compiler: f95 -O4 -target=native Fortran fix compiler: f95 -fixed -O4 -target=native compile options: '-c' f95:f77: Lib/fftpack/dfftpack/dcosqb.f sh: line 1: f95: command not found sh: line 1: f95: command not found error: Command "f95 -fixed -O4 -target=native -c -c Lib/fftpack/dfftpack/dcosqb.f -o build/temp.macosx-10.3-fat-2.5/Lib/fftpack/dfftpack/dcosqb.o" failed with exit status 127 -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From lists.steve at arachnedesign.net Wed Oct 11 17:39:11 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 11 Oct 2006 17:39:11 -0400 Subject: [SciPy-user] Installation Problems on Mac OS X In-Reply-To: <452D60E2.70400@umit.maine.edu> References: <452D60E2.70400@umit.maine.edu> Message-ID: <1D67DE63-896A-4359-830C-B1DDF605F163@arachnedesign.net> Just a stab in the dark, but it still looks like you don't have a fortran compiler that scipy can find. Look in your output here: > Could not locate executable g77 > Could not locate executable f77 > Could not locate executable gfortran > Could not locate executable f95 ... and ... 
> compiling Fortran sources > Fortran f77 compiler: f95 -fixed -O4 -target=native > Fortran f90 compiler: f95 -O4 -target=native > Fortran fix compiler: f95 -fixed -O4 -target=native > compile options: '-c' > f95:f77: Lib/fftpack/dfftpack/dcosqb.f > sh: line 1: f95: command not found > sh: line 1: f95: command not found > error: Command "f95 -fixed -O4 -target=native -c -c > Lib/fftpack/dfftpack/dcosqb.f -o It says it can't find any fortran compiler (including f95), then it tries to use "f95" .. so ... that's what looks fishy to me. Can you find f95 in your path? Maybe it's installed in /usr/local/bin and you need to add that (but maybe scipy will find it by some other means(?)) -steve From strang at nmr.mgh.harvard.edu Wed Oct 11 21:28:17 2006 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Wed, 11 Oct 2006 21:28:17 -0400 (EDT) Subject: [SciPy-user] scipy build problem, linux centos4 (fwd) Message-ID: Hi, I appear to have a compile problem similar to those reported by OS/X users, but this is on a Linux CentOS4 platform and I've been unable to find a solution in the list archives. The tail end of my "python setup.py install" command is shown below, followed by the items requested by the scipy.org FAQ. I didn't have trouble compiling on my FedoraCore2 box but was forced to upgrade by my sysadmins (grrrr) and now I've hit a brick wall. It was (at least originally) an out-of-the-box installation attempt. Any suggestions/help would be most appreciated. Gary build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1abf):build/src.linux-i686-2.4/fortranobject.c:267: undefined reference to `PyExc_AttributeError' build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1ac6):build/src.linux-i686-2.4/fortranobject.c:267: undefined reference to `PyErr_SetString' build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1adf):build/src.linux-i686-2.4/fortranobject.c:218: undefined reference to `PyExc_AttributeError' build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o(.text+0x1ae7):build/src.linux-i686-2.4/fortranobject.c:218: undefined reference to `PyErr_SetString' /usr/lib/gcc/i386-redhat-linux/3.4.5/libfrtbegin.a(frtbegin.o)(.text+0x35): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -L/usr/X11R6/lib build/temp.linux-i686-2.4/build/src.linux-i686-2.4/Lib/fftpack/_fftpackmodule.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/drfft.o
build/temp.linux-i686-2.4/Lib/fftpack/src/zrfft.o build/temp.linux-i686-2.4/Lib/fftpack/src/zfftnd.o build/temp.linux-i686-2.4/build/src.linux-i686-2.4/fortranobject.o -L/usr/local/lib -Lbuild/temp.linux-i686-2.4 -ldfftpack -lfftw3 -lg2c -o build/lib.linux-i686-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 /space/nsg/8/users/scipy-0.5.1> ------------------------------------------- Linux guppy 2.6.9-42.0.2.ELsmp #1 SMP Wed Aug 23 00:17:26 CDT 2006 i686 i686 i386 GNU/Linux sys.platform = linux2 sys.version = '2.4.3 (#1, Oct 6 2006, 15:32:21) \n[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)]' numpy.__version__ = '1.0b5' /space/nsg/8/users> python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='linux2' ------ sys.version: 2.4.3 (#1, Oct 6 2006, 15:32:21) [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] ------ sys.prefix: /space/nsg/8/users ------ sys.path=':/homes/3/strang/python:/space/nsg/8/users/numpy-1.0b5/numpy/lib:/space/nsg/8/users/include/python2.3:/space/nsg/8/users/lib/python2.3:/usr/include:/usr/lib:/space/nsg/8/users/lib/R/lib:/space/nsg/8/users/lib/python24.zip:/space/nsg/8/users/lib/python2.4:/space/nsg/8/users/lib/python2.4/plat-linux2:/space/nsg/8/users/lib/python2.4/lib-tk:/space/nsg/8/users/lib/python2.4/lib-dynload:/space/nsg/8/users/lib/python2.4/site-packages:/space/nsg/8/users/lib/python2.4/site-packages/Numeric:/space/nsg/8/users/lib/python2.4/site-packages/PIL' ------ Failed to import numarray: No module named numarray Found Numeric version '24.2' in /space/nsg/8/users/lib/python2.4/site-packages/Numeric/Numeric.pyc Found new numpy version '1.0b5' in /space/nsg/8/users/lib/python2.4/site-packages/numpy/__init__.pyc Found f2py2e version '2_3118' in /space/nsg/8/users/lib/python2.4/site-packages/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/space/nsg/8/users/lib/python2.4/site-packages/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... 
ok ------ Checking availability of supported Fortran compilers: customize CompaqFCompiler customize NoneFCompiler customize AbsoftFCompiler Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelFCompiler Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize SunFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize IbmFCompiler customize Gnu95FCompiler customize IntelVisualFCompiler customize G95FCompiler customize IntelItaniumFCompiler customize PGroupFCompiler customize LaheyFCompiler customize CompaqVisualFCompiler customize MipsFCompiler customize HPUXFCompiler customize IntelItaniumVisualFCompiler customize IntelEM64TFCompiler List of available Fortran compilers: --fcompiler=gnu GNU Fortran Compiler (3.4.5) List of unavailable Fortran compilers: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=compaqv DIGITAL|Compaq Visual Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu95 GNU 95 Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for EM64T-based apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=none Fake Fortran compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=sun Sun|Forte Fortran 95 Compiler --fcompiler=vast Pacific-Sierra 
Research Fortran 90 Compiler List of unimplemented Fortran compilers: --fcompiler=f Fortran Company/NAG F Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: getNCPUs has_mmx has_sse has_sse2 is_32bit is_Intel is_XEON is_Xeon ------ /space/nsg/8/users> python -c 'from numpy import show_config as s;s()' atlas_threads_info: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib'] language = c blas_opt_info: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] define_macros = [('NO_ATLAS_INFO', 2)] language = c atlas_blas_threads_info: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = c lapack_opt_info: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib'] define_macros = [('NO_ATLAS_INFO', 2)] language = c lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE From ryanlists at gmail.com Wed Oct 11 22:40:27 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 11 Oct 2006 21:40:27 -0500 Subject: [SciPy-user] SciPy Ubuntu Dapper installation using Andrew Straw's packages In-Reply-To: <20061011150208.GH19992@clipper.ens.fr> References: <20061011150208.GH19992@clipper.ens.fr> Message-ID: So, Andrew's packages worked beautifully. The information I was looking for is actually on http://scipy.org/Installing_SciPy/Linux . I kept searching the list for the instructions but it is on the website (in case anyone else is ever looking here and finds this message). Are the packages using optimized Atlas and Lapack if I have them installed specific to my processor (sse2)? Would I see any performance increases if I built from source? Thanks, Ryan On 10/11/06, Gael Varoquaux wrote: > On Wed, Oct 11, 2006 at 09:55:43AM -0500, Ryan Krauss wrote: > > I have a dumb apt question. I have added > > deb http://debs.astraw.com/ dapper/ > > deb-src http://debs.astraw.com/ dapper/ > > > to my sources.list. 
What are the names of the actual packages I need to install? > > Did you update your packages? Searching for numpy with updated package > list gives: > > python-numeric-tutorial - Tutorial for the Numerical Python Library > python-scientific - Python modules useful for scientific computing > python-numarray - An array processing package modelled after Python-Numeric > python-numeric - Numerical (matrix-oriented) Mathematics for Python > python-scientific-doc - Python modules useful for scientific computing > python2.4-numarray - An array processing package modelled after Python-Numeric > python2.4-numeric - Numerical (matrix-oriented) Mathematics for Python > python-numpy - NumPy: array processing for numbers, strings, records,... > > > I suggest you first remove all the matplotlib and scipy packages, then > install python-matplotlib, python-numpy and python-scipy. > > Gaël > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Thu Oct 12 00:03:43 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Oct 2006 13:03:43 +0900 Subject: [SciPy-user] SciPy Ubuntu Dapper installation using Andrew Straw's packages In-Reply-To: References: <20061011150208.GH19992@clipper.ens.fr> Message-ID: <452DBE9F.3020902@ar.media.kyoto-u.ac.jp> Ryan Krauss wrote: > So, Andrew's packages worked beautifully. The information I was > looking for is actually on > http://scipy.org/Installing_SciPy/Linux . I kept searching the list > for the instructions but it is on the website (in case anyone else is > ever looking here and finds this message). > > Are the packages using optimized Atlas and Lapack if I have them > installed specific to my processor (sse2)? An 'easy' way to check if you are using sse2 optimized atlas is to check which libraries the loader loads when loading numpy and scipy libraries.
For example, for numpy: ldd /usr/lib/python2.4/site-packages/numpy/core/_dotblas.so | grep blas returns the atlas enabled libblas (in the /usr/lib/sse2 directory, but this is distribution dependent, I think) > Would I see any > performance increases if I built from source? > Only if your atlas is faster than the shipped one (stating the obvious:) ). Concretely, this means that you are able to compile a faster atlas, with the correct extended lapack, and this is not trivial, specially if your cpu is not in the default. Recent (after 3.7.14, I think) atlas have a much easier to use configuration. If you have a new intel core 2 duo, last atlas give outstanding results according to the main developer of Atlas: http://sourceforge.net/mailarchive/forum.php?thread_id=30384800&forum_id=426 David From ndbecker2 at gmail.com Thu Oct 12 14:49:17 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 12 Oct 2006 14:49:17 -0400 Subject: [SciPy-user] Prob with romberg integration? Message-ID: This is scipy-0.5.1: --------------------------- from scipy.integrate import romberg from math import sqrt, log, exp def F (x): return x def G (v): return exp (v) * F (exp (v)) gain = 1/sqrt (2*romberg (G, log (1e-7), log (0.5))) ------------------------ Traceback (most recent call last): File "", line 1, in ? File "/usr/tmp/python-MSHhoK.py", line 11, in ? gain = 1/sqrt (2*romberg (G, log (1e-7), log (0.5))) File "/usr/lib64/python2.4/site-packages/scipy/integrate/quadrature.py", line 377, in romberg ordsum = ordsum + _difftrap(vfunc, interval, n) File "/usr/lib64/python2.4/site-packages/scipy/integrate/quadrature.py", line 326, in _difftrap s = sum(function(points),0) File "/usr/lib64/python2.4/site-packages/scipy/integrate/quadrature.py", line 54, in vfunc output = empty((n,), dtype=y0.dtype) AttributeError: 'float' object has no attribute 'dtype' From peridot.faceted at gmail.com Thu Oct 12 17:14:28 2006 From: peridot.faceted at gmail.com (A. M. 
Archibald) Date: Thu, 12 Oct 2006 17:14:28 -0400 Subject: [SciPy-user] Prob with romberg integration? In-Reply-To: References: Message-ID: On 12/10/06, Neal Becker wrote: > AttributeError: 'float' object has no attribute 'dtype' Looks like things are not getting converted to Numeric data types. A bug, but easy to work around; just wrap the arguments in array() calls. A. M. Archibald From pecora at anvil.nrl.navy.mil Thu Oct 12 17:17:04 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Thu, 12 Oct 2006 17:17:04 -0400 Subject: [SciPy-user] Missing library will not allow SciPy to load. In-Reply-To: <7D612544-6319-457E-B27F-90F734F0BA69@mac.com> References: <7D612544-6319-457E-B27F-90F734F0BA69@mac.com> Message-ID: <452EB0D0.6020807@anvil.nrl.navy.mil> Tom Bridgman wrote: >> When I do >> >> from scipy import * >> >> I get the following error: >> >> ImportError: Failure linking new module: >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ >> site-packages/scipy/sparse/sparsetools.so: >> Library not loaded: /usr/local/lib/libg2c.0.dylib >> Referenced from: >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ >> site-packages/scipy/sparse/sparsetools.so >> Reason: image not found >> >> >> It seems I'm missing libg2c.0.dylib. I just installed the SciPy >> SuperPack. I guess it wasn't in there or was supposed to be installed >> from some other package. Does anyone know about this? >> >> > I believe this is installed as part of the g77 package (g77v3.4- > bin.tar.gz). > > Tom > Yes, this seemed to do it. But I did have to use Terminal to manually transfer those files into /usr/local and sub directories. Unlike the packages, this one is back up to the user. Thanks. 
-- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From ryanlists at gmail.com Thu Oct 12 17:22:49 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 12 Oct 2006 16:22:49 -0500 Subject: [SciPy-user] SciPy Ubuntu Dapper installation using Andrew Straw's packages In-Reply-To: <452DBE9F.3020902@ar.media.kyoto-u.ac.jp> References: <20061011150208.GH19992@clipper.ens.fr> <452DBE9F.3020902@ar.media.kyoto-u.ac.jp> Message-ID: ryan at ubuntu:~$ ldd /usr/lib/python2.4/site-packages/numpy/core/_dotblas.so | grep blas libblas.so.3 => /usr/lib/atlas/sse2/libblas.so.3 (0xb7387000) I think I am happy and I am done thinking about this. Thanks, Ryan On 10/11/06, David Cournapeau wrote: > Ryan Krauss wrote: > > So, Andrew's packages worked beautifully. The information I was > > looking for is actually on > > http://scipy.org/Installing_SciPy/Linux . I kept searching the list > > for the instructions but it is on the website (in case anyone else is > > ever looking here and finds this message). > > > > Are the packages using optimized Atlas and Lapack if I have them > > installed specific to my processor (sse2)? > An 'easy' way to check if you are using sse2 optimized atlas is to check > which libraries the loader loads when loading numpy and scipy libraries. > For example, for numpy: > > ldd /usr/lib/python2.4/site-packages/numpy/core/_dotblas.so | grep blas > returns the atlas enabled libblas (in the /usr/lib/sse2 directory, but > this is distribution dependent, I think) > > > Would I see any > > performance increases if I built from source? > > > Only if your atlas is faster than the shipped one (stating the obvious:) > ). Concretely, this means that you are able to compile a faster atlas, > with the correct extended lapack, and this is not trivial, specially if > your cpu is not in the default. Recent (after 3.7.14, I think) atlas > have a much easier to use configuration. 
> > If you have a new intel core 2 duo, last atlas give outstanding results > according to the main developer of Atlas: > > http://sourceforge.net/mailarchive/forum.php?thread_id=30384800&forum_id=426 > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From R.Springuel at umit.maine.edu Thu Oct 12 18:53:06 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Thu, 12 Oct 2006 18:53:06 -0400 Subject: [SciPy-user] Installation Problems on Mac OS X Message-ID: <452EC752.7030000@umit.maine.edu> I have gfortran installed under /usr/local/bin. I do not have f95 installed. Now the output says that it does look for gfortran so I shouldn't need f95, right? If gfortran is sufficient, what do I need to do so that scipy can find gfortran? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From ndbecker2 at gmail.com Thu Oct 12 20:25:46 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 12 Oct 2006 20:25:46 -0400 Subject: [SciPy-user] Prob with romberg integration? References: Message-ID: A. M. Archibald wrote: > On 12/10/06, Neal Becker wrote: > >> AttributeError: 'float' object has no attribute 'dtype' > > Looks like things are not getting converted to Numeric data types. A > bug, but easy to work around; just wrap the arguments in array() > calls. > If I print out the value of 'x' in quadrature.py/vectorize1 (the arg to vfunc), it seems that the first 2 times 'x' is a simple scalar, but the 3rd time it is a list with 1 element, and that triggers the error. Something's wrong here.
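A minimal sketch of the workaround discussed in this thread (my own illustration, not Neal's actual code): have the integrand return numpy scalars by using numpy's `exp`/`log` instead of the math module's, so the object passed around inside scipy's integration helpers carries the `.dtype` attribute they expect. `scipy.integrate.quad` (QUADPACK-based) stands in for `romberg` here; for these limits the integrand reduces to `exp(2v)`, so the resulting gain should land very close to 2.

```python
import numpy as np
from scipy.integrate import quad


def F(x):
    return x


def G(v):
    # np.exp returns numpy scalars/arrays (which carry a .dtype),
    # avoiding the AttributeError raised for plain Python floats
    return np.exp(v) * F(np.exp(v))


# The integrand is exp(2v); the exact integral over
# [log(1e-7), log(0.5)] is (0.5**2 - (1e-7)**2) / 2
val, abserr = quad(G, np.log(1e-7), np.log(0.5))
gain = 1 / np.sqrt(2 * val)  # close to 2.0 for these limits
```

The same `G` should also feed cleanly into `romberg` on a scipy of this era, since its return values are numpy objects rather than bare floats.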
From samnemo at gmail.com Thu Oct 12 23:41:22 2006 From: samnemo at gmail.com (sam n) Date: Thu, 12 Oct 2006 23:41:22 -0400 Subject: [SciPy-user] warnings+errors(check_integer) with scipy.test() on scipy-0.5.1 Message-ID: Hi All, I just built and installed numpy-1.0rc2 and scipy-0.5.1 and encountered the warnings & errors below (only get errors from scipy.test() , no problems with numpy.test() ). If anyone knows how to fix these problems please let me know. I already had LAPACK and BLAS installed. I also included diagnostic information after the warnings & errors. Sorry if this email is too long. Thanks for any help/suggestions. Sam >>> import scipy >>> scipy.test() Warning: FAILURE importing tests for /usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?) Found 42 tests for scipy.lib.lapack Warning: FAILURE importing tests for /usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?) ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/arch/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/arch/lib/python2.4/site-packages/scipy/stats/stats.py", line 191, in ? import scipy.special as special File "/usr/arch/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/arch/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 67, in ? from scipy.linalg import eig File "/usr/arch/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? 
from basic import * File "/usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py", line 23, in ? from scipy.linalg import calc_lwork ImportError: cannot import name calc_lwork ====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case self._check_level(k_label, expected, matdict[k]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level assert len(expected) == len(actual), "Different list lengths at %s" % label TypeError: len() of unsized object ---------------------------------------------------------------------- Ran 630 tests in 7.745s FAILED (errors=2) diagnostic information from : python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='linux2' ------ sys.version: 2.4.1 (#3, May 9 2005, 14:40:40) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] ------ sys.prefix: /usr/arch ------ sys.path=':/usr/arch/lib/python24.zip:/usr/arch/lib/python2.4:/usr/arch/lib/python2.4/plat-linux2:/usr/arch/lib/python2.4/lib-tk:/usr/arch/lib/python2.4/lib-dynload:/usr/arch/lib/python2.4/site-packages ' ------ Failed to import Numeric: No module named Numeric Failed to import numarray: No module named numarray 
Found new numpy version '1.0rc2' in /usr/arch/lib/python2.4/site-packages/numpy/__init__.pyc Found f2py2e version '2_3296' in /usr/arch/lib/python2.4/site-packages/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/usr/arch/lib/python2.4/site-packages/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... ok ------ Checking availability of supported Fortran compilers: customize CompaqFCompiler customize NoneFCompiler customize AbsoftFCompiler Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelFCompiler Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize SunFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize IbmFCompiler customize Gnu95FCompiler customize IntelVisualFCompiler customize G95FCompiler customize IntelItaniumFCompiler customize PGroupFCompiler customize LaheyFCompiler customize CompaqVisualFCompiler customize MipsFCompiler customize HPUXFCompiler customize IntelItaniumVisualFCompiler customize IntelEM64TFCompiler List of available Fortran compilers: --fcompiler=gnu GNU Fortran Compiler (3.2.3) List of unavailable Fortran compilers: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=compaqv DIGITAL|Compaq Visual Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu95 GNU 95 Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for EM64T-based apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel 
Visual Fortran Compiler for 32-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=none Fake Fortran compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=sun Sun|Forte Fortran 95 Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler List of unimplemented Fortran compilers: --fcompiler=f Fortran Company/NAG F Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: getNCPUs has_mmx has_sse is_32bit is_Intel is_Pentium is_PentiumII is_PentiumIII is_i686 ------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Fri Oct 13 12:12:07 2006 From: fredmfp at gmail.com (fred) Date: Fri, 13 Oct 2006 18:12:07 +0200 Subject: [SciPy-user] editing wiki on localhost.. In-Reply-To: <17690.9056.272622.186684@prpc.aero.iitb.ac.in> References: <45183A9A.5060408@gmail.com> <17688.50113.296358.856192@prpc.aero.iitb.ac.in> <45195BCB.4070403@gmail.com> <17690.9056.272622.186684@prpc.aero.iitb.ac.in> Message-ID: <452FBAD7.4090007@gmail.com> Hi, I have found some stuff (ViewSourceWith on sourceforge) to edit wiki pages on localhost. But I don't understand how scipy.org wiki pages work. For example, I have to give some information about the files I want to edit locally. Uh..., what files ? FredericPetit is a directory or a file ?? If it's a directory, which files are in it ? I see neither a directory nor a file. What am I doing wrong ? Cheers, -- http://scipy.org/FredericPetit From pecora at anvil.nrl.navy.mil Fri Oct 13 14:26:11 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Fri, 13 Oct 2006 14:26:11 -0400 Subject: [SciPy-user] Can't run IPython from SuperPack. ?? Message-ID: <452FDA43.9040409@anvil.nrl.navy.mil> I installed the full SuperPack from SciPy.
Things seem to work well for pylab and scipy, but I want to use pylab/matplotlib in interactive mode. The current wisdom is to do that in IPython. The superpack installed IPython down in /Lib/Framework/python...blah, blah/site-packages/. But typing ipython, IPython or any variations in the Terminal does nothing. I don't see ipython in /bin, /usr/local/bin, or other places I find python or pythonw executables. Any idea what I have to do to get up and running with IPython? MacOS X 10.4 on PPC laptop, python2.4, latest SciPy, Pylab and Matplotlib. Thanks for any help. By the way, my congratulations and endorsement to all who put together the SuperPack and its contents. That's the way to do it for us python users. Almost one-stop shopping. Very nicely done. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From davidlinke at tiscali.de Fri Oct 13 15:53:04 2006 From: davidlinke at tiscali.de (David Linke) Date: Fri, 13 Oct 2006 21:53:04 +0200 Subject: [SciPy-user] editing wiki on localhost.. In-Reply-To: <452FBAD7.4090007@gmail.com> References: <45183A9A.5060408@gmail.com> <17688.50113.296358.856192@prpc.aero.iitb.ac.in> <45195BCB.4070403@gmail.com> <17690.9056.272622.186684@prpc.aero.iitb.ac.in> <452FBAD7.4090007@gmail.com> Message-ID: <452FEEA0.3040605@tiscali.de> fred wrote: > Hi, > > I have found some stuff (ViewSourceWith on sourceforge) to edit wiki > pages on localhost. > > But I don't understand how scipy.org wiki pages work. > > For example, I have to give some information about the files I want to > edit in local. > > Uh..., what files ? FredericPetit is a directory or a file ?? > > If it's a directory, which files are in it ? > > I don't see neither directory nor file. > > What am I doing wrong ? > I don't know about ViewSourceWith but it may be of interest for you to test http://labix.org/editmoin which is made specifically for MoinMoin-Wikis.
Regards, David From peridot.faceted at gmail.com Fri Oct 13 16:13:01 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Fri, 13 Oct 2006 16:13:01 -0400 Subject: [SciPy-user] Prob with romberg integration? In-Reply-To: References: Message-ID: On 12/10/06, Neal Becker wrote: > A. M. Archibald wrote: > > > On 12/10/06, Neal Becker wrote: > > > >> AttributeError: 'float' object has no attribute 'dtype' > > > > Looks like things are not getting converted to Numeric data types. A > > bug, but easy to work around; just wrap the arguments in array() > > calls. > > > > If I print out the value of 'x' in quadrature.py/vectorize1 (the arg to > vfunc), it seems that the first 2 times 'x' is a simple scalar, but the 3rd > time it is a list with 1 element, and that triggers the error. > > Something's wrong here. The bug appears to be in the function vectorize1 (which is internal to scipy.integrate); it assumes that your function returns a numpy object (which yours does not). Switching to "from numpy import exp, ..." would probably cure the problem, as would wrapping the return value in numpy.array(). You might find that scipy.integrate.quad, which is based on solid FORTRAN routines from QUADPACK, is faster and more reliable. Nevertheless, that bug in scipy is annoying. A. M. Archibald From fredmfp at gmail.com Fri Oct 13 18:35:22 2006 From: fredmfp at gmail.com (fred) Date: Sat, 14 Oct 2006 00:35:22 +0200 Subject: [SciPy-user] editing wiki on localhost.. In-Reply-To: <452FEEA0.3040605@tiscali.de> References: <45183A9A.5060408@gmail.com> <17688.50113.296358.856192@prpc.aero.iitb.ac.in> <45195BCB.4070403@gmail.com> <17690.9056.272622.186684@prpc.aero.iitb.ac.in> <452FBAD7.4090007@gmail.com> <452FEEA0.3040605@tiscali.de> Message-ID: <453014AA.1000409@gmail.com> David Linke a écrit : >I don't know about ViewSourceWith but it may be of interest for you to >test http://labix.org/editmoin which is made specifically for >MoinMoin-Wikis.
> > This is exactly what I was lookin for. Thanks a lot (again ;-) -- http://scipy.org/FredericPetit From cygnusx1 at mac.com Sun Oct 15 09:31:49 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Sun, 15 Oct 2006 09:31:49 -0400 Subject: [SciPy-user] Missing library will not allow SciPy to load. Message-ID: > Tom Bridgman wrote: > >> When I do > >> > >> from scipy import * > >> > >> I get the following error: > >> > >> ImportError: Failure linking new module: > >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ > >> site-packages/scipy/sparse/sparsetools.so: > >> Library not loaded: /usr/local/lib/libg2c.0.dylib > >> Referenced from: > >> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/ > >> site-packages/scipy/sparse/sparsetools.so > >> Reason: image not found > >> > >> > >> It seems I'm missing libg2c.0.dylib. I just installed the SciPy > >> SuperPack. I guess it wasn't in there or was supposed to be > installed > >> from some other package. Does anyone know about this? > >> > >> > > I believe this is installed as part of the g77 package (g77v3.4- > > bin.tar.gz). > > > > Tom > > > > Yes, this seemed to do it. But I did have to use Terminal to manually > transfer those files into /usr/local and sub directories. Unlike the > packages, this one is back up to the user. > I believe the -C option in the instructions does that for you: % sudo tar -xvf g77v3.4-bin.tar -C / Tom From pecora at anvil.nrl.navy.mil Sun Oct 15 10:46:15 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Sun, 15 Oct 2006 10:46:15 -0400 Subject: [SciPy-user] Missing library will not allow SciPy to load. In-Reply-To: References: Message-ID: <453249B7.2050009@anvil.nrl.navy.mil> Tom Bridgman wrote: >> >> Yes, this seemed to do it. But I did have to use Terminal to manually >> transfer those files into /usr/local and sub directories. Unlike the >> packages, this one is back up to the user. 
>> >> > > I believe the -C option in the instructions does that for you: > > % sudo tar -xvf g77v3.4-bin.tar -C / > > Tom > _____ > Well, learn something every day. I didn't know about that so I did it the hard way. Thanks. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From lroubeyrie at limair.asso.fr Mon Oct 16 03:46:19 2006 From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie) Date: Mon, 16 Oct 2006 09:46:19 +0200 Subject: [SciPy-user] Dataframe Message-ID: <200610160946.19215.lroubeyrie@limair.asso.fr> Hi all, Where can I find examples for using dataframe? Cookbook is pretty short (http://www.scipy.org/Cookbook/DataFrame) and there is no documentation in the file :-/ thanks -- Lionel Roubeyrie - lroubeyrie at limair.asso.fr LIMAIR http://www.limair.asso.fr From gkoczyk at echostar.pl Mon Oct 16 05:57:32 2006 From: gkoczyk at echostar.pl (Grzegorz Koczyk) Date: Mon, 16 Oct 2006 11:57:32 +0200 Subject: [SciPy-user] Problem with scipy on FC5 (PentiumM/PentiumIV) Message-ID: <1160992652.28112.22.camel@localhost.localdomain> I am not sure whether this is an FC5-specific problem, so I am submitting this here and not as a ticket. For Fedora Core 5 running on Pentium4-based servers, the following scipy tests fail and I am at a complete loss as to what can be done (and, more importantly, to what extent other modules depend on the tested code): [ Note: I did try compiling ATLAS (3.6) from source, however then both numpy and scipy complain about gfortran missing symbols in lapack_lite.so - so I am unable to verify whether the problem stems from some peculiarity of the vanilla FC5 RPMs.
] ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (3.3613915289673437e-35-1.0143136978149414j) DESIRED: (-9+2j) ---------------------------------------------------------------------- Scipy configuration: >>> numpy.__version__ '1.0.dev3341' >>> scipy.__version__ '0.5.2.dev2288' >>> scipy.show_config() umfpack_info: NOT AVAILABLE atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c include_dirs = ['/usr/include/atlas'] atlas_blas_threads_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c include_dirs = ['/usr/include/atlas'] fftw3_info: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c include_dirs = ['/usr/include/atlas'] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c include_dirs = ['/usr/include/atlas'] mkl_info: NOT AVAILABLE ---------------------------------------------------------------------- Relevant RPMs installed: atlas-sse2-3.6.0-10.fc5 
atlas-sse2-devel-3.6.0-10.fc5 blas-3.0-37.fc5 blas-devel-3.0-37.fc5 fftw-3.1.1-1.fc5 fftw-devel-3.1.1-1.fc5 lapack-3.0-37.fc5 lapack-devel-3.0-37.fc5 python-2.4.3-8.FC5 ---------------------------------------------------------------------- /proc/cpuinfo: processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Pentium(R) 4 CPU 3.00GHz stepping : 3 cpu MHz : 2992.931 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl est cid cx16 xtpr bogomips : 5997.83 processor : 1 vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Pentium(R) 4 CPU 3.00GHz stepping : 3 cpu MHz : 2992.931 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl est cid cx16 xtpr bogomips : 5985.40 ---------------------------------------------------------------------- Any help or information concerning the matter would be highly appreciated. Best regards, Grzegorz Koczyk From meesters at uni-mainz.de Mon Oct 16 07:44:29 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 16 Oct 2006 13:44:29 +0200 Subject: [SciPy-user] polynomal regression Message-ID: <200610161344.30811.meesters@uni-mainz.de> Hi, Well, scipy allows to fit a polynomal of arbitrary order. But is there any method implemented which does this and returns uncertainties in the coefficients as well? 
For instance: if the data represent a polynomial p0 + p1*x + p2*x^2, is there a way to retrieve delta-p0, -p1, and -p2? Sorry if this question has already been asked - or even answered. I could imagine that somebody came up with it already, but I can't find the thread. Any hints / links appreciated ... TIA Christian From gerard.vermeulen at grenoble.cnrs.fr Mon Oct 16 07:55:31 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Mon, 16 Oct 2006 13:55:31 +0200 Subject: [SciPy-user] Problem with scipy on FC5 (PentiumM/PentiumIV) In-Reply-To: <1160992652.28112.22.camel@localhost.localdomain> References: <1160992652.28112.22.camel@localhost.localdomain> Message-ID: <20061016135531.76512ffb.gerard.vermeulen@grenoble.cnrs.fr> I have the same unit test failure on Mac OSX, where I failed to build ATLAS from source (it looks like SciPy falls back on a different BLAS expecting a different memory layout). You can try to use something like the following site.cfg when rebuilding NumPy (put it in the NumPy source tree before building): cat >site.cfg < wrote: > I am not sure whether this an FC5-specific problem, so I am submitting > this here and not as a ticket. > > For Fedora Core 5 running on Pentium4-based servers following scipy > tests fail and I am at a complete loss, at what can be done (and more > importantly to what extent other modules depends on the tested code): > > [ Note: I did try compiling ATLAS (3.6) from source, however then both > numpy and scipy complain about gfortran missing symbols in > lapack_lite.so - so I am unable to verify whether problem stems from > some peculiarity of vanilla FC5 RPMs. 
] > > ====================================================================== > FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line > 75, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line > 156, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: (3.3613915289673437e-35-1.0143136978149414j) > DESIRED: (-9+2j) > ---------------------------------------------------------------------- > Scipy configuration: > > >>> numpy.__version__ > '1.0.dev3341' > >>> scipy.__version__ > '0.5.2.dev2288' > >>> scipy.show_config() > umfpack_info: > NOT AVAILABLE > > atlas_threads_info: > NOT AVAILABLE > > blas_opt_info: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > language = c > include_dirs = ['/usr/include/atlas'] > > atlas_blas_threads_info: > NOT AVAILABLE > > djbfft_info: > NOT AVAILABLE > > lapack_opt_info: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > language = c > include_dirs = ['/usr/include/atlas'] > > fftw3_info: > libraries = ['fftw3'] > library_dirs = ['/usr/lib'] > define_macros = [('SCIPY_FFTW3_H', None)] > include_dirs = ['/usr/include'] > > atlas_info: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > language = c > include_dirs = ['/usr/include/atlas'] > > lapack_mkl_info: > NOT AVAILABLE > > blas_mkl_info: > NOT AVAILABLE > > atlas_blas_info: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > language = c > include_dirs = ['/usr/include/atlas'] > > 
mkl_info: > NOT AVAILABLE > ---------------------------------------------------------------------- > Relevant RPMs installed: > > atlas-sse2-3.6.0-10.fc5 > atlas-sse2-devel-3.6.0-10.fc5 > blas-3.0-37.fc5 > blas-devel-3.0-37.fc5 > fftw-3.1.1-1.fc5 > fftw-devel-3.1.1-1.fc5 > lapack-3.0-37.fc5 > lapack-devel-3.0-37.fc5 > python-2.4.3-8.FC5 > ---------------------------------------------------------------------- > /proc/cpuinfo: > > processor : 0 > vendor_id : GenuineIntel > cpu family : 15 > model : 4 > model name : Intel(R) Pentium(R) 4 CPU 3.00GHz > stepping : 3 > cpu MHz : 2992.931 > cache size : 2048 KB > physical id : 0 > siblings : 2 > core id : 0 > cpu cores : 1 > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 5 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca > cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm > constant_tsc pni monitor ds_cpl est cid cx16 xtpr > bogomips : 5997.83 > > processor : 1 > vendor_id : GenuineIntel > cpu family : 15 > model : 4 > model name : Intel(R) Pentium(R) 4 CPU 3.00GHz > stepping : 3 > cpu MHz : 2992.931 > cache size : 2048 KB > physical id : 0 > siblings : 2 > core id : 0 > cpu cores : 1 > fdiv_bug : no > hlt_bug : no > f00f_bug : no > coma_bug : no > fpu : yes > fpu_exception : yes > cpuid level : 5 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca > cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm > constant_tsc pni monitor ds_cpl est cid cx16 xtpr > bogomips : 5985.40 > ---------------------------------------------------------------------- > > Any help or information concerning the matter would be highly > appreciated. 
> > Best regards, > Grzegorz Koczyk > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Mon Oct 16 09:27:03 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Mon, 16 Oct 2006 09:27:03 -0400 Subject: [SciPy-user] polynomal regression In-Reply-To: <200610161344.30811.meesters@uni-mainz.de> References: <200610161344.30811.meesters@uni-mainz.de> Message-ID: On 16/10/06, Christian Meesters wrote: > Hi, > > Well, scipy allows to fit a polynomal of arbitrary order. But is there any > method implemented which does this and returns uncertainties in the > coefficients as well? > For instance: If the data represent a polynomal p0 + p1*x + p1*x^2, is there a > way to retreive delta-p0, -p1, and p2? > > Sorry, if this question has already been asked - or even answered. I could > image that somebody came up with it already, but I can't find the thread. Any > hints / links appreciated ... numpy's least-squares fitting procedure will do just what you're asking for. I think it's called numpy.lstsqr (but it may have a different number of ss and ts). What you really want is probably the full covariance matrix, and I think it can give that to you. A. M. Archibald From ryanlists at gmail.com Mon Oct 16 12:24:50 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 16 Oct 2006 11:24:50 -0500 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy Message-ID: Anybody out there running Scipy/NumPy/Matplotlib under Ubuntu Edgy Eft? I am considering building a new computer based on AMD socket AM2 stuff, and from what I learned from a guy on the Ubuntuforums, the hardware works very well under Edgy. But my main use will be Scipy related and I don't want a very fast computer that is good for games and not for work. 
Thanks, Ryan From robert.kern at gmail.com Mon Oct 16 12:28:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Oct 2006 11:28:32 -0500 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: References: Message-ID: <4533B330.2020609@gmail.com> Ryan Krauss wrote: > Anybody out there running Scipy/NumPy/Matplotlib under Ubuntu Edgy > Eft? I'm still on Dapper, but I see no reason why any of those packages would not work or perform badly on Edgy Eft. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Mon Oct 16 12:34:38 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 16 Oct 2006 18:34:38 +0200 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: <4533B330.2020609@gmail.com> References: <4533B330.2020609@gmail.com> Message-ID: <20061016163438.GB4393@mentat.za.net> On Mon, Oct 16, 2006 at 11:28:32AM -0500, Robert Kern wrote: > Ryan Krauss wrote: > > Anybody out there running Scipy/NumPy/Matplotlib under Ubuntu Edgy > > Eft? > > I'm still on Dapper, but I see no reason why any of those packages would not > work or perform badly on Edgy Eft. Indeed, it runs fine on Edgy. Cheers Stéfan From meesters at uni-mainz.de Mon Oct 16 13:16:22 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 16 Oct 2006 19:16:22 +0200 Subject: [SciPy-user] polynomal regression In-Reply-To: References: <200610161344.30811.meesters@uni-mainz.de> Message-ID: <200610161916.23510.meesters@uni-mainz.de> On Monday 16 October 2006 15:27, A. M. Archibald wrote: > > numpy's least-squares fitting procedure will do just what you're > asking for. I think it's called numpy.lstsqr (but it may have a > different number of ss and ts). What you really want is probably the > full covariance matrix, and I think it can give that to you. 
Thanks, but I'm not sure what you mean: In my numpy there is no lstsqr in the namespace, if I do 'from numpy import *' (fresh download from svn - I needed an upgrade anyway). Perhaps my English prevents me from being understood here ... Another attempt: Currently what I'm using is scipy.linalg.lstsq for linear regressions (mostly) and scipy.polyfit in other cases. For calculating 'errors' / 'deviations' / 'uncertainties' of the calculated coefficients the function needs the input with errors in x & y, right? Is there any such function in scipy? Christian From peelle at gmail.com Mon Oct 16 13:27:06 2006 From: peelle at gmail.com (Jonathan Peelle) Date: Mon, 16 Oct 2006 13:27:06 -0400 Subject: [SciPy-user] installation issue on OS X Message-ID: <958c3f860610161027l480241bv9bc07878f773df1c@mail.gmail.com> I attempted to install SciPy on OS X (10.4.7, Intel processor), Python2.4, using these instructions: http://www.scipy.org/Installing_SciPy/Mac_OS_X The fortran compiler and fftw libraries seemed to install just fine. (I already had numpy 1.0b5 installed, and that works fine, too.) As near as I can tell the scipy installation is able to find all of these installations. After installing scipy, two tests fail: check_integer and check_simple_todense. The important info seems to be: ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so: Symbol not found: ___dso_handle Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup Output of build command is here: http://docs.google.com/View?docid=df5p92sv_6fr7292 and output of the install command is here: http://docs.google.com/View?docid=df5p92sv_7cscvwk Any suggestions would be much appreciated. Thanks. 
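[Editorial note: Christian's question above — how to get uncertainties on the fitted polynomial coefficients — is answered further down this thread by pointing at scipy.optimize.leastsq() with full_output. The following is a minimal, self-contained sketch of that recipe on synthetic data; the coefficients, noise level, and variable names are illustrative, not taken from the thread.]

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic data from a quadratic p0 + p1*x + p2*x**2 plus noise
# (true coefficients and noise level are made up for illustration).
rng = np.random.RandomState(0)
x = np.linspace(-3.0, 3.0, 50)
true = np.array([1.0, -2.0, 0.5])
y = true[0] + true[1] * x + true[2] * x**2 + 0.1 * rng.randn(x.size)

def residuals(p, x, y):
    return y - (p[0] + p[1] * x + p[2] * x**2)

p, cov_x, info, mesg, ier = leastsq(residuals, [0.0, 0.0, 0.0],
                                    args=(x, y), full_output=True)

# leastsq returns an *unscaled* covariance; multiply by the residual
# variance to get the parameter covariance matrix, then take the square
# roots of its diagonal as one-sigma uncertainties on p0, p1, p2.
s_sq = (residuals(p, x, y) ** 2).sum() / (x.size - len(p))
perr = np.sqrt(np.diag(cov_x * s_sq))
print(p)     # fitted coefficients, close to [1.0, -2.0, 0.5]
print(perr)  # their estimated one-sigma uncertainties
```

For a model that is linear in the parameters, as here, this reduces to ordinary least squares, so the same numbers can be cross-checked against the delta-parameters reported by other fitting packages.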
From nwagner at iam.uni-stuttgart.de Mon Oct 16 13:35:31 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Oct 2006 19:35:31 +0200 Subject: [SciPy-user] polynomal regression In-Reply-To: <200610161916.23510.meesters@uni-mainz.de> References: <200610161344.30811.meesters@uni-mainz.de> <200610161916.23510.meesters@uni-mainz.de> Message-ID: On Mon, 16 Oct 2006 19:16:22 +0200 Christian Meesters wrote: > On Monday 16 October 2006 15:27, A. M. Archibald wrote: >> >> numpy's least-squares fitting procedure will do just >>what you're >> asking for. I think it's called numpy.lstsqr (but it may >>have a >> different number of ss and ts). What you really want is >>probably the >> full covariance matrix, and I think it can give that to >>you. > Thanks, but I'm not sure what you mean: In my numpy >there is no lstsqr in the > namespace, if I do 'from numpy import *' (fresh download >from svn - I needed > an upgrade anyway). > > Perhaps my English prevents me from being understood >here ... Another attempt: > Currently what I'm using is scipy.linalg.lstsq for >linear regressions (mostly) > and scipy.polyfit in other cases. For calculating >'errors' / 'deviations' / > 'uncertainties' of the calculated coefficients the >function needs the input > with errors in x & y, right? Is there any such function >in scipy? > > Christian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user It's linalg.lstsq. Are you looking for Total Least Squares ? AFAIK, it's not implemented yet. Nils From meesters at uni-mainz.de Mon Oct 16 13:53:20 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 16 Oct 2006 19:53:20 +0200 Subject: [SciPy-user] polynomal regression In-Reply-To: References: <200610161344.30811.meesters@uni-mainz.de> <200610161916.23510.meesters@uni-mainz.de> Message-ID: <200610161953.20309.meesters@uni-mainz.de> > > It's linalg.lstsq. 
> Are you looking for Total Least Squares ? > AFAIK, it's not implemented yet. > Nils Thanks, Nils. Too bad. Well, back to Origin for me in this case ... Christian From oliphant.travis at ieee.org Mon Oct 16 14:10:11 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Oct 2006 12:10:11 -0600 Subject: [SciPy-user] polynomal regression In-Reply-To: <200610161953.20309.meesters@uni-mainz.de> References: <200610161344.30811.meesters@uni-mainz.de> <200610161916.23510.meesters@uni-mainz.de> <200610161953.20309.meesters@uni-mainz.de> Message-ID: <4533CB03.1090304@ieee.org> Christian Meesters wrote: >> It's linalg.lstsq. >> Are you looking for Total Least Squares ? >> AFAIK, it's not implemented yet. >> Nils >> > Thanks, Nils. > Too bad. Well, back to Origin for me in this case ... > Total Least Squares is pretty easy to implement using the SVD. Try your hand. -Travis From robert.kern at gmail.com Mon Oct 16 14:27:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Oct 2006 13:27:51 -0500 Subject: [SciPy-user] polynomal regression In-Reply-To: <4533CB03.1090304@ieee.org> References: <200610161344.30811.meesters@uni-mainz.de> <200610161916.23510.meesters@uni-mainz.de> <200610161953.20309.meesters@uni-mainz.de> <4533CB03.1090304@ieee.org> Message-ID: <4533CF27.6050208@gmail.com> Travis Oliphant wrote: > Christian Meesters wrote: >>> It's linalg.lstsq. >>> Are you looking for Total Least Squares ? >>> AFAIK, it's not implemented yet. >>> Nils >>> >> Thanks, Nils. >> Too bad. Well, back to Origin for me in this case ... > > Total Least Squares is pretty easy to implement using the SVD. Try > your hand. Total Least Squares can also be implemented as Orthogonal Distance Regression (ODR). However, this is neither here nor there for Christian's original question, which is how to find estimates of the uncertainties in the estimated parameters. This has nothing to do with Total Least Squares. A. M. 
Archibald was closest, but got the name of the function wrong. In short, the only functions currently implemented that give uncertainties on estimated parameters are the non-linear least squares implementations. They aren't too bad for linear problems, either. The function A. M. was thinking of was scipy.optimize.leastsq() with full_output=True. An alternative is the scipy.sandbox.odr package, which can do ordinary least squares or ODR (of which Total Least Squares is the linear special case) and provide uncertainty estimates for both algorithms. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ckkart at hoc.net Mon Oct 16 20:24:39 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 17 Oct 2006 09:24:39 +0900 Subject: [SciPy-user] polynomal regression In-Reply-To: <200610161953.20309.meesters@uni-mainz.de> References: <200610161344.30811.meesters@uni-mainz.de> <200610161916.23510.meesters@uni-mainz.de> <200610161953.20309.meesters@uni-mainz.de> Message-ID: <453422C7.2070604@hoc.net> Christian Meesters wrote: >> It's linalg.lstsq. >> Are you looking for Total Least Squares ? >> AFAIK, it's not implemented yet. >> Nils > Thanks, Nils. > Too bad. Well, back to Origin for me in this case ... In that case you might want to try peak-o-mat (lorentz.sf.net), which does curve fitting based on the scipy.sandbox.odr module. The odr module is included in the software, so you can use a standard scipy build. As fitting model you may enter anything that python understands, e.g.: a*x**2+b*x+c*x or you can use predefined tokens, e.g. CB GA LO LO for a constant background, a Gaussian and two Lorentzian peak shapes. 
Christian From strawman at astraw.com Tue Oct 17 00:00:21 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 16 Oct 2006 21:00:21 -0700 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: <20061016163438.GB4393@mentat.za.net> References: <4533B330.2020609@gmail.com> <20061016163438.GB4393@mentat.za.net> Message-ID: <45345555.3060600@astraw.com> I think William Grant has ported very recent versions of numpy/scipy/matplotlib to Edgy. Thanks William - this is great! I'll probably keep my .deb repository up-to-date for Dapper but not bother with Edgy (unless problems crop up with Ubuntu's mainline repository). I probably won't be upgrading our work machines to Edgy given that Dapper is working so well and is a stable target for the next 4.5 years. For a home machine, I'd be tempted to install Edgy, especially given that xen appears to work without recompiling the kernel: https://wiki.ubuntu.com/XenOnEdgy Ciao, Andrew Stefan van der Walt wrote: > On Mon, Oct 16, 2006 at 11:28:32AM -0500, Robert Kern wrote: > >> Ryan Krauss wrote: >> >>> Anybody out there running Scipy/NumPy/Matplotlib under Ubuntu Edgy >>> Eft? >>> >> I'm still on Dapper, but I see no reason why any of those packages would not >> work or perform badly on Edgy Eft. >> > > Indeed, it runs fine on Edgy. 
> > Cheers > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Tue Oct 17 01:05:25 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 17 Oct 2006 14:05:25 +0900 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: <45345555.3060600@astraw.com> References: <4533B330.2020609@gmail.com> <20061016163438.GB4393@mentat.za.net> <45345555.3060600@astraw.com> Message-ID: <45346495.8060707@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > I think William Grant has ported very recent versions of > numpy/scipy/matplotlib to Edgy. Thanks William - this is great! > > Those are the official numpy packages, right ? scipy + numpy + matplotlib are officially included in edgy, which should really help numpy spreading, I think. cheers, David From strawman at astraw.com Tue Oct 17 02:06:34 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 16 Oct 2006 23:06:34 -0700 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: <45346495.8060707@ar.media.kyoto-u.ac.jp> References: <4533B330.2020609@gmail.com> <20061016163438.GB4393@mentat.za.net> <45345555.3060600@astraw.com> <45346495.8060707@ar.media.kyoto-u.ac.jp> Message-ID: <453472EA.5090205@astraw.com> David Cournapeau wrote: > Andrew Straw wrote: > >> I think William Grant has ported very recent versions of >> numpy/scipy/matplotlib to Edgy. Thanks William - this is great! >> >> >> > Those are the official numpy packages, right ? scipy + numpy + > matplotlib are officially included in edgy, which should really help > numpy spreading, I think. 
As of today, these are the packages you get if you install Edgy, add the "universe" repository to your /etc/apt/sources.list, and type "apt-get install python-numpy python-scipy python-matplotlib" (or use the nice Synaptic Package Manager GUI): numpy 1.0rc1 matplotlib 0.87.5 scipy 0.5.1 I guess this is what you mean by "the official numpy packages". We have at least William Alexander Grant, Jordan Mantha, Marco Presi, José Fonseca, Alexandre Fayolle, Vittorio Palmisano, and Matthias Klose to thank for creating and maintaining the Debian and Ubuntu ports of these packages. From david at ar.media.kyoto-u.ac.jp Tue Oct 17 02:18:41 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 17 Oct 2006 15:18:41 +0900 Subject: [SciPy-user] ubuntu Edgy Eft and Scipy In-Reply-To: <453472EA.5090205@astraw.com> References: <4533B330.2020609@gmail.com> <20061016163438.GB4393@mentat.za.net> <45345555.3060600@astraw.com> <45346495.8060707@ar.media.kyoto-u.ac.jp> <453472EA.5090205@astraw.com> Message-ID: <453475C1.9030003@ar.media.kyoto-u.ac.jp> > As of today, these are the packages you get if you install Edgy, add the > "universe" repository to your /etc/apt/sources.list, and type "apt-get > install python-numpy python-scipy python-matplotlib" (or use the nice > Synaptic Package Manager GUI): > > numpy 1.0rc1 > matplotlib 0.87.5 > scipy 0.5.1 > > I guess this is what you mean by "the official numpy packages". > > Yes, exactly. I discovered that the debian packages were included in ubuntu as well recently, which would make them easier for me to test (I used a debian chroot jail to test them inside my ubuntu machines before). 
But it looks like they are still buggy in the dependencies with lapack and such, David From ctmedra at unizar.es Tue Oct 17 12:54:46 2006 From: ctmedra at unizar.es (Carlos Medrano) Date: Tue, 17 Oct 2006 16:54:46 +0000 (UTC) Subject: [SciPy-user] loadmat behaviour Message-ID: Hi, I'm using scipy and numpy on Kubuntu 6.06 >>> numpy.__version__ '1.0b5' >>> scipy.__version__ '0.5.2.dev2197' >>> I have got a problem when using loadmat. In Matlab I had a matrix L1 that I saved to a file MATLAB CODE -------------- >> L1 L1 = 10 -352 97346 >> save datafile L1 ----------------- I used v6 or v7 of MatLab, but this did not alter the problem I report below. When I try to use io.loadmat it reads numbers as unsigned integers >>> ret=io.loadmat('datafile.mat') >>> ret['L1'] array([ 10, 4294966944, 97346], dtype=uint32) How can I read the original data type? I can try to do some dirty things afterwards, but it would be nice if I could just read the data in its original format. Thanks Carlos Medrano From matthew.brett at gmail.com Tue Oct 17 13:10:47 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 17 Oct 2006 18:10:47 +0100 Subject: [SciPy-user] loadmat behaviour In-Reply-To: References: Message-ID: <1e2af89e0610171010w4f47d82eu22956604c4b11924@mail.gmail.com> Hi, > When I try to use io.loadmat it reads numbers as unsigned integers > > >>> ret=io.loadmat('datafile.mat') > >>> ret['L1'] > array([ 10, 4294966944, 97346], dtype=uint32) > > > How can I do to read the original data type? This is because matlab saves the data in that format. 
You could try this hardly documented feature: import scipy.io as sio MR = sio.MatFile5Reader(open('datafile.mat'), matlab_compatible=True) ret = MR.get_variables() Best, Matthew From cygnusx1 at mac.com Tue Oct 17 17:40:12 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Tue, 17 Oct 2006 17:40:12 -0400 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X 10.4.7 (PPC) Message-ID: <4F867CBF-BBC0-4633-8674-EA5AD7B10FBF@mac.com> Selected gcc 3.3 g77v3.4 installed from tarball fftw libraries installed as described in http://www.scipy.org/Installing_SciPy/Mac_OS_X numpy build still fails. I suspect the important error is gcc: cannot specify -o with -c or -S and multiple compilations but I have no idea of where to start with this. I suspect is something to do with PPC vs Intel build options? Perhaps I need to turn off the '-c' which appears in the compile options but where is that located? Suggestions? Tom Log below: Planck:/Users/Shared/ToBeRestored/Python/numpy-1.0rc2 bridgman$ sudo python setup.py install Password: Running from numpy source directory. 
F2PY Version 2_3296 blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources Generating build/src.macosx-10.4-fat-2.4/numpy/core/config.h customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-I/Library/Frameworks/Python.framework/Versions/2.4/ include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc: cannot specify -o with -c or -S and multiple compilations gcc: cannot specify -o with -c or -S and multiple compilations failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 89, in ? 
setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/command/install.py", line 16, in run r = old_install.run(self) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/command/install.py", line 506, in run self.run_command('build') File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/command/build_src.py", line 87, in run self.build_sources() File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/command/build_src.py", line 106, in build_sources self.build_extension_sources(ext) File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/command/build_src.py", line 
212, in build_extension_sources sources = self.generate_sources(sources, ext) File "/Users/Shared/ToBeRestored/Python/numpy-1.0rc2/numpy/ distutils/command/build_src.py", line 270, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 50, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration Planck:/Users/Shared/ToBeRestored/Python/numpy-1.0rc2 bridgman$ From oliphant at ee.byu.edu Tue Oct 17 17:47:05 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 17 Oct 2006 15:47:05 -0600 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X 10.4.7 (PPC) In-Reply-To: <4F867CBF-BBC0-4633-8674-EA5AD7B10FBF@mac.com> References: <4F867CBF-BBC0-4633-8674-EA5AD7B10FBF@mac.com> Message-ID: <45354F59.90304@ee.byu.edu> Tom Bridgman wrote: >Selected gcc 3.3 >g77v3.4 installed from tarball >fftw libraries installed as described in >http://www.scipy.org/Installing_SciPy/Mac_OS_X > >numpy build still fails. I suspect the important error is > >gcc: cannot specify -o with -c or -S and multiple compilations > >but I have no idea of where to start with this. > >I suspect is something to do with PPC vs Intel build options? >Perhaps I need to turn off the '-c' which appears in the compile > > >options but where is that located? > > I think this is a Python build issue. As far as I know the build flags are gathered from Python. Look for a config directory where Python is installed and the Makefile in that directory. On my system it is in /usr/lib/python2.4/config/Makefile Changing the options there seems to affect distutils which is where numpy gets it's information about how to build. 
-Travis From robert.kern at gmail.com Tue Oct 17 17:58:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Oct 2006 16:58:32 -0500 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X 10.4.7 (PPC) In-Reply-To: <4F867CBF-BBC0-4633-8674-EA5AD7B10FBF@mac.com> References: <4F867CBF-BBC0-4633-8674-EA5AD7B10FBF@mac.com> Message-ID: <45355208.5030305@gmail.com> Tom Bridgman wrote: > Selected gcc 3.3 > g77v3.4 installed from tarball > fftw libraries installed as described in > http://www.scipy.org/Installing_SciPy/Mac_OS_X > > numpy build still fails. I suspect the important error is > > gcc: cannot specify -o with -c or -S and multiple compilations > > but I have no idea of where to start with this. > > I suspect is something to do with PPC vs Intel build options? > Perhaps I need to turn off the '-c' which appears in the compile > options but where is that located? With a Universal build of Python, you will need to use gcc 4.0. Unfortunately, that may mean you will have to use gfortran instead of g77. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bpederse at gmail.com Tue Oct 17 19:02:45 2006 From: bpederse at gmail.com (Brent Pedersen) Date: Tue, 17 Oct 2006 16:02:45 -0700 Subject: [SciPy-user] object based image classification Message-ID: hi, object based classification (http://en.wikipedia.org/wiki/Object-oriented_image_classification ) for digital image processing seems to be getting popular in remote-sensing/GIS. i'm told that the only software available to do that is e-cognition ( http://www.definiens.com/ http://www.infoserve.co.jp/Tech_spec/eCognition/Definiens_paper_Elsevier.pdf ) has anyone done anything similar with python and/or open source tools? are there any libs out there for medical imaging or astronomy that i could look into? any pointers to related info? 
thanks, -brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahoover at eecs.berkeley.edu Tue Oct 17 20:53:05 2006 From: ahoover at eecs.berkeley.edu (Aaron Hoover) Date: Tue, 17 Oct 2006 17:53:05 -0700 Subject: [SciPy-user] object based image classification In-Reply-To: References: Message-ID: <45357AF1.5000701@eecs.berkeley.edu> Hi Brent, I'm using Intel's open source computer vision library OpenCV with Python wrappers (SciPy, NumPy, etc) for vision tasks that are similar to what you're describing. There's quite a bit of computer vision research going on these days, and object recognition/shape context/feature matching are all active problems. The OpenCV wiki is online at http://opencvlibrary.sourceforge.net/. The project is fairly mature (there's a 1.0 beta) and the community is active (even if the signal to noise ratio on the newsgroup is a little low). As far as medical imaging vs. astronomy vs. machine vision the subproblems in those domains are simply specific cases of more general problems like shape recognition, feature matching, image registration, correspondence, etc. for which OpenCV provides a host of tools. If you're interested in knowing more, send me an email off-list. Cheers, Aaron Brent Pedersen wrote: > hi, object based classification > (http://en.wikipedia.org/wiki/Object-oriented_image_classification ) > for digital image processing seems to be getting popular in > remote-sensing/GIS. i'm told that the only software available to do > that is e-cognition ( > http://www.definiens.com/ > http://www.infoserve.co.jp/Tech_spec/eCognition/Definiens_paper_Elsevier.pdf > > ) > > > has anyone done anything similar with python and/or open source tools? > are there any libs out there for medical imaging or astronomy that i > could look into? > any pointers to related info? 
> > thanks, > -brent > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cygnusx1 at mac.com Tue Oct 17 22:24:47 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Tue, 17 Oct 2006 22:24:47 -0400 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X 10.4.7 (PPC) Message-ID: <3F072FD0-CC2F-458C-A919-23B42B237B26@mac.com> > I think this is a Python build issue. As far as I know the build > flags > are gathered from Python. > > Look for a config directory where Python is installed and the Makefile > in that directory. > > On my system it is in > > /usr/lib/python2.4/config/Makefile > > Changing the options there seems to affect distutils which is where > numpy gets it's information about how to build. > > -Travis > I've found the OS X version of this file in /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config/Makefile. There appear to be -c compiler flags in a section under the heading: "Special rules for object files". Don't have time to try it this evening but it is a place to start for tomorrow. My 'plan B' will be R. Kern's suggestion of using gFortran, by which I assume he means use gfortran-bin.tar.gz located here: http://hpc.sourceforge.net/. I did use the Universal Python build. Seems like there are more possible branching options in this installation than indicated on the installation page. I'll try these out over the next day or so and get back with my results. Thanks, Tom -- W.T. Bridgman, Ph.D.
Physics & Astronomy From robert.kern at gmail.com Tue Oct 17 23:22:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Oct 2006 22:22:27 -0500 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X 10.4.7 (PPC) In-Reply-To: <3F072FD0-CC2F-458C-A919-23B42B237B26@mac.com> References: <3F072FD0-CC2F-458C-A919-23B42B237B26@mac.com> Message-ID: <45359DF3.5010502@gmail.com> Tom Bridgman wrote: > My 'plan B' will be R. Kern's suggestion of using gFortran, Actually, my suggestion is to use gcc 4.0. This should be your "Plan A" since gcc 3.3 *cannot* be used to build extensions for the Universal build of Python. Don't bother futzing with the Makefile. A side effect of this is that you must use gfortran instead of g77 since g77 was never upgraded to work with gcc 4.0. Of course, your FORTRAN compiler does not matter when building numpy, just scipy. > by which > I assume he means use gfortran-bin.tar.gz located here: http:// > hpc.sourceforge.net/. Yes, if you like. I use the one from MacPorts, but whatever works for you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ahoover at eecs.berkeley.edu Tue Oct 17 23:52:39 2006 From: ahoover at eecs.berkeley.edu (Aaron Hoover) Date: Tue, 17 Oct 2006 20:52:39 -0700 Subject: [SciPy-user] one-liner to retrieve indices of n smallest values from an array? Message-ID: <4627044B-0FE0-42CE-99EA-279675F470BF@eecs.berkeley.edu> Hi all, Perhaps this is better suited for the NumPy list, but is there a simple (one line, perhaps) way to get the indices of the n smallest elements of an array? Something along the lines of argmin() that could take a parameter indicating how many minimal values to return? 
Thanks, Aaron From robert.kern at gmail.com Tue Oct 17 23:52:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Oct 2006 22:52:51 -0500 Subject: [SciPy-user] one-liner to retrieve indices of n smallest values from an array? In-Reply-To: <4627044B-0FE0-42CE-99EA-279675F470BF@eecs.berkeley.edu> References: <4627044B-0FE0-42CE-99EA-279675F470BF@eecs.berkeley.edu> Message-ID: <4535A513.70406@gmail.com> Aaron Hoover wrote: > Hi all, > > Perhaps this is better suited for the NumPy list, but is there a > simple (one line, perhaps) way to get the indices of the n smallest > elements of an array? Something along the lines of argmin() that > could take a parameter indicating > how many minimal values to return? a.argsort()[:n] -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ahoover at eecs.berkeley.edu Wed Oct 18 00:09:14 2006 From: ahoover at eecs.berkeley.edu (Aaron Hoover) Date: Tue, 17 Oct 2006 21:09:14 -0700 Subject: [SciPy-user] one-liner to retrieve indices of n smallest values from an array? In-Reply-To: <4535A513.70406@gmail.com> References: <4627044B-0FE0-42CE-99EA-279675F470BF@eecs.berkeley.edu> <4535A513.70406@gmail.com> Message-ID: Hi Robert, Thanks, is there a simple way to extend this to 2D arrays, or does that require the use of ravel()? -Aaron On Oct 17, 2006, at 8:52 PM, Robert Kern wrote: > Aaron Hoover wrote: >> Hi all, >> >> Perhaps this is better suited for the NumPy list, but is there a >> simple (one line, perhaps) way to get the indices of the n smallest >> elements of an array? Something along the lines of argmin() that >> could take a parameter indicating >> how many minimal values to return? 
> > a.argsort()[:n] > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Wed Oct 18 01:47:47 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 17 Oct 2006 23:47:47 -0600 Subject: [SciPy-user] Can't run IPython from SuperPack. ?? In-Reply-To: <452FDA43.9040409@anvil.nrl.navy.mil> References: <452FDA43.9040409@anvil.nrl.navy.mil> Message-ID: Hi Louis, On 10/13/06, Louis Pecora wrote: > I installed the full SuperPack from SciPy. Things seem to work well for > pylab and scipy, but I want to use pylab/matplotlib in interactive > mode. The current wisdom is to do that in IPython. The superpack > intalled IPython down in /Lib/Framework/python...blah, blah/site-packages/. > > But typing ipython, IPython or any variations in the Terminal does > nothing. I don't see ipython in /bin /usr/local/bin or other places I > find python or pythonw executables. > > Any idea what I have to do to get up and running with IPython? > > MacOS X 10.4 on PPC laptop, python2.4, latest SciPy, Pylab and Matplotlib. > > Thanks for any help. > > By the way. My congratulations and endorsement to all who put together > the SuperPack and its contents. That's the way to do it for us python > users. Almost one-stop shopping. Very nicely done. sorry for the delayed reply... I'm not a mac user so I'm not 100% sure why the superpack isn't giving you the ipython startup script.
This startup script hasn't changed in years and is deliberately trivial, so you should be fine just copying it by hand. I hope this helps, f From ctmedra at unizar.es Wed Oct 18 02:37:09 2006 From: ctmedra at unizar.es (Carlos Medrano) Date: Wed, 18 Oct 2006 06:37:09 +0000 (UTC) Subject: [SciPy-user] loadmat behaviour References: <1e2af89e0610171010w4f47d82eu22956604c4b11924@mail.gmail.com> Message-ID: Matthew Brett gmail.com> writes: > > Hi, > > > When I try to use io.loadmat it reads numbers as unsigned integers > > > > >>> ret=io.loadmat('datafile.mat') > > >>> ret['L1'] > > array([ 10, 4294966944, 97346], dtype=uint32) > > > > > > How can I do to read the original data type? > > This is because matlab saves the data in that format. You could try > this hardly documented feature: > > import scipy.io as sio > MR = sio.MatFile5Reader(open('datafile.mat'), matlab_compatible=True) > ret = MR.get_variables() > > Best, > > Matthew > Hi If I try your code I get the following error: >>> import scipy.io as sio >>> MR = sio.MatFile5Reader(open('datafile_v7.mat'), matlab_compatible=True) Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'MatFile5Reader' Anyway, you gave me valuable information. If this is caused by the way matlab saves the data, this means I was not doing anything wrong. So perhaps the best way to solve this kind of problem is to use some other way of saving data in matlab (fprintf and friends).
Thank you Carlos Medrano From nwagner at iam.uni-stuttgart.de Wed Oct 18 02:42:08 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Oct 2006 08:42:08 +0200 Subject: [SciPy-user] loadmat behaviour In-Reply-To: References: <1e2af89e0610171010w4f47d82eu22956604c4b11924@mail.gmail.com> Message-ID: <4535CCC0.5060405@iam.uni-stuttgart.de> Carlos Medrano wrote: > Matthew Brett gmail.com> writes: > > >> Hi, >> >> >>> When I try to use io.loadmat it reads numbers as unsigned integers >>> >>> >>>>>> ret=io.loadmat('datafile.mat') >>>>>> ret['L1'] >>>>>> >>> array([ 10, 4294966944, 97346], dtype=uint32) >>> >>> >>> How can I do to read the original data type? >>> >> This is because matlab saves the data in that format. You could try >> this hardly documented feature: >> >> import scipy.io as sio >> MR = sio.MatFile5Reader(open('datafile.mat'), matlab_compatible=True) >> ret = MR.get_variables() >> >> Best, >> >> Matthew >> >> > > Hi > > If I try your code I get the following error: > >>>> import scipy.io as sio >>>> MR = sio.MatFile5Reader(open('datafile_v7.mat'), matlab_compatible=True) >>>> > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'module' object has no attribute 'MatFile5Reader' > > Anyway, you give me a valuable information. If this is caused by the way matlab > saves the data, this means I was not doing anything wrong. So perhaps the best > way to solve this kind of problems is to use some other way of saving data in > matlab (fprintf and friends). > > Thank you > > Carlos Medrano > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > >>> io.MatFile5Reader >>> scipy.__version__ '0.5.2.dev2289' I guess you need a recent version of scipy. 
Nils From excellent.frizou at gmail.com Wed Oct 18 05:11:39 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Wed, 18 Oct 2006 11:11:39 +0200 Subject: [SciPy-user] Shape Problem Message-ID: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> Hi, I use a matrix M of shape (20,25). If I do N=M[18:,18:] I correctly obtain the submatrix of M of shape (2,7). Now, if I do N=M[19:,19:], I expect to have the submatrix of M of shape (1,6) but the shape of N is (6,1). Is there any way to obtain the correct shape in any case ? Thank you very much for your help. Sam From david at ar.media.kyoto-u.ac.jp Wed Oct 18 05:12:44 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 18 Oct 2006 18:12:44 +0900 Subject: [SciPy-user] loadmat behaviour In-Reply-To: References: <1e2af89e0610171010w4f47d82eu22956604c4b11924@mail.gmail.com> Message-ID: <4535F00C.3010904@ar.media.kyoto-u.ac.jp> Carlos Medrano wrote: > > Anyway, you give me a valuable information. If this is caused by the way matlab > saves the data, this means I was not doing anything wrong. So perhaps the best > way to solve this kind of problems is to use some other way of saving data in > matlab (fprintf and friends). > > A better and easier solution would be to use the hdf5 support available in recent matlabs: files exported this way can be imported by pytables, and exported back. This is what I am using to plumb matlab and python code, when I don't have (yet) a scipy replacement for some matlab codes. hdf5 file format is an established standard, with a C library available under a liberal license, which means many different applications can use it. cheers, David From andy at insectnation.org Wed Oct 18 08:04:33 2006 From: andy at insectnation.org (Andy Buckley) Date: Wed, 18 Oct 2006 13:04:33 +0100 Subject: [SciPy-user] Studentized range distribution? Message-ID: <45361851.9030908@insectnation.org> Hi, I'm afraid this may be only tenuously a SciPy question, so apologies.
That said, it is a stats method that I'm trying to apply using SciPy, so I'd be very grateful if someone could help :) Here goes: I need to compute some statistics from a Studentized range distribution in SciPy, but it doesn't seem to be one of the implemented distributions according to http://www.scipy.org/doc/api_docs/scipy.stats.distributions.html. I've tried searching the Web for a closed form definition of the PDF, CDF etc., but all I find is tables like this: http://cse.niaes.affrc.go.jp/miwa/probcalc/s-range/srng_tbl.html (It's a generalised t distribution for k sets of means and df degrees of freedom, so equal to stats.distributions.t for k = 2) So, does anyone here happen to a) know a closed form representation of the SRD b) know how to implement it as a distribution in SciPy? :) Specifically, I'll need the percent point function: I don't know if that's calculated by numerically inverting the CDF or if there's a closed form expression for it. Thanks! Andy From pecora at anvil.nrl.navy.mil Wed Oct 18 08:17:49 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Wed, 18 Oct 2006 08:17:49 -0400 Subject: [SciPy-user] Can't run IPython from SuperPack. ?? In-Reply-To: References: <452FDA43.9040409@anvil.nrl.navy.mil> Message-ID: <45361B6D.4080008@anvil.nrl.navy.mil> Fernando Perez wrote: > Hi Louis, > > On 10/13/06, Louis Pecora wrote: > >> I installed the full SuperPack from SciPy. Things seem to work well for >> pylab and scipy, but I want to use pylab/matplotlib in interactive >> mode. The current wisdom is to do that in IPython. The superpack >> intalled IPython down in /Lib/Framework/python...blah, blah/site-packages/. >> >> But typing ipython, IPython or any variations in the Terminal does >> nothing. I don't see ipython in /bin /usr/local/bin or other places I >> find python or pythonw executables. >> > sorry for the delayed reply... > > I'm not a mac user so I'm not 100% sure why the superpack isn't giving > you the ipython startup script. 
This sounds like a build problem with > the SP, but fortunately one you can easily remedy manually. > > Just grab > > http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/scripts/ipython?format=raw > > and drop it anywwhere in your $PATH (chmod +x). > > This startup script hasn't changed in years and is deliberately > trivial, so you should be fine just copying it by hand. > > I hope this helps, > > f > Thanks, Fernando. You are essentially right. I was told in a NG by someone in the know that the superpack installation of IPython is broken. So I will do it from the command line. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From pecora at anvil.nrl.navy.mil Wed Oct 18 08:22:01 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Wed, 18 Oct 2006 08:22:01 -0400 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> Message-ID: <45361C69.7090009@anvil.nrl.navy.mil> Sam frizou wrote: > Hi, > > I use a matrix M of shape (20,25). > > If I do N=M[18:,18:] I correctly obtain the submatrix of M of shape (2,7). > > Now, if I do N=M[19:,19:], I expect to have the submatrix of M of > shape (1,6) but the shape of N is (6,1). > Is there any way to obtain the correct shape in any case ? > > Thank you very much for your help. > > Sam > I just tried this (MacOS X 10.4, Python 2.4, NumPy) and got the right answer (1,6). What modules are you using? Might need an upgrade there.
-- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From gonzalezmancera+scipy at gmail.com Wed Oct 18 10:45:49 2006 From: gonzalezmancera+scipy at gmail.com (Andres GM) Date: Wed, 18 Oct 2006 10:45:49 -0400 Subject: [SciPy-user] numpy-1.0rc2 still does not build on MacOS X Message-ID: Hi Tom, I had the same problem when I upgraded to a Universal Binary version of Python (ie. Python 2.5). My solution was to install Numpy using gcc 4.0 and gfortran as explained in the scipy documentation for Macintels. I was successful with this installation and everything seems to work fine. I used the gfortran from hpc.sourceforge.net and the latest version of the Developer tools from Apple. I hope this works. Andres From excellent.frizou at gmail.com Wed Oct 18 10:53:44 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Wed, 18 Oct 2006 16:53:44 +0200 Subject: [SciPy-user] Shape Problem In-Reply-To: <45361C69.7090009@anvil.nrl.navy.mil> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> Message-ID: <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> Actually, my matrix M comes from a ndarray N: M=asmatrix(N) So, M is not a numpy.matrix object (for which I do not have this shape problem), but a numpy.core.defmatrix.matrix object (for which I have the shape problem). Maybe there exists a way to really cast a ndarray into a numpy.matrix object ? Thanks On 10/18/06, Louis Pecora wrote: > Sam frizou wrote: > > Hi, > > > > I use a matrix M of shape (20,25). > > > > If I do N=M[18:,18:] I correctly obtain the submatrix of M of shape (2,7). > > > > Now, if I do N=M[19:,19:], I expect to have the submatrix of M of > > shape (1,6) but the shape of N is (6,1). > > Is there any way to obtain the correct shape in any case ? > > > > Thank you very much for your help.
> > > > Sam > > > > I just tried this (MacOS X 10.4, Python 2.4, NumPy) and got the right > answer (1,6). What modules are you using? Might need an upgrade there. > > -- > Cheers, > > Lou Pecora > > Code 6362 > Naval Research Lab > Washington, DC 20375 > USA > Ph: +202-767-6002 > email: pecora at anvil.nrl.navy.mil > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". From hchase at ball.com Wed Oct 18 11:32:07 2006 From: hchase at ball.com (Chase, Holden) Date: Wed, 18 Oct 2006 09:32:07 -0600 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? Message-ID: Hi, I would like to get the array indices of the max value in an array. Is there a more elegant way than what I have implemented below using argmax()? This routine returns the indices and the max value. Thanks, Holden ----------------------------- import numpy #Could be numarray as well def arrayMaxInd(data): xMax = data.argmax(1) maxVal = 0 xMaxInd = 0 yMaxInd = 0 for yInd in range(data.shape[0]): if data[yInd][xMax[yInd]] > maxVal: maxVal = data[yInd][xMax[yInd]] yMaxInd = yInd xMaxInd = xMax[yInd] return [xMaxInd, yMaxInd, maxVal] # test the function testdata = numpy.random.rand(4,6) print arrayMaxInd(testdata) print testdata ----------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Oct 18 11:47:30 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 18 Oct 2006 11:47:30 -0400 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? In-Reply-To: References: Message-ID: On Wed, 18 Oct 2006, Holden Chase apparently wrote: > I would like to get the array indices of the max value in an array.
Is > there a more elegant way than what I have implemented below using > argmax()? This routine returns the indices and the max value. Untested: def arrayMaxInd(data): a = asarray(data) argmax1d = a.argmax() rs,cs = a.shape r,c = argmax1d//cs, argmax1d%cs return (r,c,argmax1d) hth, Alan Isaac From stefan at sun.ac.za Wed Oct 18 12:13:50 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 18 Oct 2006 18:13:50 +0200 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? In-Reply-To: References: Message-ID: <20061018161350.GX17998@mentat.za.net> On Wed, Oct 18, 2006 at 11:47:30AM -0400, Alan G Isaac wrote: > On Wed, 18 Oct 2006, Holden Chase apparently wrote: > > I would like to get the array indices of the max value in an array. Is > > there a more elegant way than what I have implemented below using > > argmax()? This routine returns the indices and the max value. > > > Untested: > def arrayMaxInd(data): > a = asarray(data) > argmax1d = a.argmax() > rs,cs = a.shape > r,c = argmax1d//cs, argmax1d%cs > return (r,c,argmax1d) How about numpy.unravel_index? Cheers Stéfan From lists.steve at arachnedesign.net Wed Oct 18 12:24:28 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 18 Oct 2006 12:24:28 -0400 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? In-Reply-To: <20061018161350.GX17998@mentat.za.net> References: <20061018161350.GX17998@mentat.za.net> Message-ID: > On Wed, Oct 18, 2006 at 11:47:30AM -0400, Alan G Isaac wrote: >> On Wed, 18 Oct 2006, Holden Chase apparently wrote: >>> I would like to get the array indices of the max value in an >>> array. Is >>> there a more elegant way than what I have implemented below using >>> argmax()? This routine returns the indices and the max value.
>> >> >> Untested: >> def arrayMaxInd(data): >> a = asarray(data) >> argmax1d = a.argmax() >> rs,cs = a.shape >> r,c = argmax1d//cs, argmax1d%cs >> return (r,c,argmax1d) > > How about numpy.unravel_index? If you're just looking for the indices of the max value in your array, why not just do something like: --- import numpy as np # ... # let some_array be your array that you want to find max val for maxval_indices = np.where(some_array == some_array.max()) --- That would do the trick, too .. no? -steve From aisaac at american.edu Wed Oct 18 12:43:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 18 Oct 2006 12:43:36 -0400 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? In-Reply-To: <20061018161350.GX17998@mentat.za.net> References: <20061018161350.GX17998@mentat.za.net> Message-ID: On Wed, 18 Oct 2006, alan wrote > return (r,c,argmax1d) That should have been a[r][c] or a[r,c] of course. But better ideas have been floated. Cheers, Alan From pgmdevlist at gmail.com Wed Oct 18 12:44:37 2006 From: pgmdevlist at gmail.com (P GM) Date: Wed, 18 Oct 2006 12:44:37 -0400 Subject: [SciPy-user] More elegant way to get indices of the max value in an array? In-Reply-To: References: <20061018161350.GX17998@mentat.za.net> Message-ID: <777651ce0610180944g4dbd1b69u4a7ddb57c1e02ef2@mail.gmail.com> > How about numpy.unravel_index? Sweet ! I didn't know this function existed ! Thx a lot. So that gives us: def argmax_coords(data): data = numpy.asarray(data) imax = data.argmax() return list(numpy.unravel_index(imax,data.shape)) + [data.max(),] On 10/18/06, Alan G Isaac wrote: > On Wed, 18 Oct 2006, alan wrote > > return (r,c,argmax1d) > > That should have been a[r][c] or a[r,c] of course. > But better ideas have been floated.
> > Cheers, > Alan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hchase at ball.com Wed Oct 18 13:11:06 2006 From: hchase at ball.com (Chase, Holden) Date: Wed, 18 Oct 2006 11:11:06 -0600 Subject: [SciPy-user] More elegant way to get indices of the max value value in an array? Message-ID: Thanks to all for the great suggestions. Holden -------------- next part -------------- An HTML attachment was scrubbed... URL: From pecora at anvil.nrl.navy.mil Wed Oct 18 13:25:07 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Wed, 18 Oct 2006 13:25:07 -0400 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> Message-ID: <45366373.5000300@anvil.nrl.navy.mil> Sam frizou wrote: > Actually, my matrix M comes from a ndarray N: > > M=asmatrix(N) > > So, M is not a numpy.matrix object (for which I do not have this shape > problem), but a numpy.core.defmatrix.matrix object (for wich I have > the shape problem). > Oh. I don't know about ndarrays. > Maybe there exists a way to really cast a ndarray into a numpy.matrix object ? > > Only thing I can think of is the usual array(M), but not sure that would work and it creates a numpy copy of M. 
-- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From oliphant.travis at ieee.org Wed Oct 18 13:41:19 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 18 Oct 2006 11:41:19 -0600 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> Message-ID: <4536673F.2070500@ieee.org> Sam frizou wrote: > Actually, my matrix M comes from a ndarray N: > > M=asmatrix(N) > > So, M is not a numpy.matrix object (for which I do not have this shape > problem), but a numpy.core.defmatrix.matrix object (for wich I have > the shape problem). > These are the same thing. There is only one matrix object. Please show the code that is giving you the problem as I cannot reproduce it. -Travis From excellent.frizou at gmail.com Wed Oct 18 14:57:23 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Wed, 18 Oct 2006 20:57:23 +0200 Subject: [SciPy-user] Shape Problem In-Reply-To: <4536673F.2070500@ieee.org> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> Message-ID: <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> from scipy import * N=zeros((20,25)) print N[19:,19:].shape # >>> (1,6) M=matrix(N) print M[19:,19:].shape # >>> (6,1) On 10/18/06, Travis Oliphant wrote: > Sam frizou wrote: > > Actually, my matrix M comes from a ndarray N: > > > > M=asmatrix(N) > > > > So, M is not a numpy.matrix object (for which I do not have this shape > > problem), but a numpy.core.defmatrix.matrix object (for wich I have > > the shape problem). > > > These are the same thing. There is only one matrix object. 
Please > show the code that is giving you the problem as I cannot reproduce it. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". From jimmy.touma.ctr at eglin.af.mil Wed Oct 18 17:26:40 2006 From: jimmy.touma.ctr at eglin.af.mil (Touma Jimmy E CTR USAF AFRL/MNGG) Date: Wed, 18 Oct 2006 16:26:40 -0500 Subject: [SciPy-user] interp1d question Message-ID: Hi all, I am trying to interpolate in 1d along a columns as opposed to rows and it doesn't seem to work. The code below emulates reading data where interpolation should be done column wise. The error when running this code follows. If you could please tell me what I am doing wrong and how to interpolate along columns. Thanks, Jimmy Here is my code: ################################################################# import scipy import numpy x = numpy.arange(0,10) xnew = numpy.arange(0, 9, 0.5) x = x.reshape(len(x),1) xnew = xnew.reshape(len(xnew),1) y = scipy.exp(-x/3.0) z = scipy.exp(-x/2.0) v = scipy.exp(-x) y = numpy.asanyarray(y) z = numpy.asanyarray(z) v = numpy.asanyarray(v) y=y.reshape(len(y),1) z=z.reshape(len(z),1) v=v.reshape(len(v),1) u = numpy.hstack((y,z,v)) f=scipy.interpolate.interp1d(x,u,kind='linear', axis=0) ############################################################## I get the following error when I run it: --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/touma/d1.py 24 25 ---> 26 f=scipy.interpolate.interp1d(x,u,kind='linear', axis=0) 27 28 /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py in __init__(self, x, y, kind, axis, copy, bounds_error, fill_value) 143 len_x,len_y = shape(oriented_x)[interp_axis], shape(oriented_y)[interp_axis] 144 if len_x != len_y: --> 145 
raise ValueError, "x and y arrays must be equal in length along "\ 146 "interpolation axis." 147 if len_x < 2 or len_y < 2: ValueError: x and y arrays must be equal in length along interpolation axis. WARNING: Failure executing file: In [3]: numpy.__version__ Out[3]: '1.0b5' In [4]: scipy.__version__ Out[4]: '0.3.2' -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Oct 18 17:34:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 18 Oct 2006 16:34:56 -0500 Subject: [SciPy-user] interp1d question In-Reply-To: References: Message-ID: <45369E00.4010703@gmail.com> Touma Jimmy E CTR USAF AFRL/MNGG wrote: > Hi all, > > > I am trying to interpolate in 1d along a columns as opposed to rows > and it doesn't seem to work. The code below emulates reading data > where interpolation should be done column wise. The error when > running this code follows. If you could please tell me what I am doing > wrong and how to interpolate along columns. > /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py in > __init__(self, x, y, kind, axis, copy, bounds_error, fill_value) > > 143 len_x,len_y = shape(oriented_x)[interp_axis], > shape(oriented_y)[interp_axis] > 144 if len_x != len_y: > --> 145 raise ValueError, "x and y arrays must be equal in > length along "\ > 146 "interpolation axis." > 147 if len_x < 2 or len_y < 2: > > ValueError: x and y arrays must be equal in length along interpolation > axis. > WARNING: Failure executing file: > > In [3]: numpy.__version__ > Out[3]: '1.0b5' > > In [4]: scipy.__version__ > Out[4]: '0.3.2' You have a *very* old version of scipy. It uses Numeric, not numpy. Please upgrade. In recent versions, interp1d does not accept x arrays with more than one dimension. I'm not entirely sure why the old version did; it makes no sense. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josegomez at gmx.net Wed Oct 18 17:52:59 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 18 Oct 2006 23:52:59 +0200 Subject: [SciPy-user] Dataframe In-Reply-To: <200610160946.19215.lroubeyrie@limair.asso.fr> References: <200610160946.19215.lroubeyrie@limair.asso.fr> Message-ID: <20061018215259.310820@gmx.net> Salut Lionel! -------- Original-Nachricht -------- > Where can I find examples for using dataframe? Cookbook is pretty short > (http://www.scipy.org/Cookbook/DataFrame) and there no documentation in > the > file :-/ I started the wiki page, but I realised that I could throw in some documentation and examples as well. However, I got diverted by work, and I'm not around in the next couple of weeks. Hopefully, after that I'll be able to document it properly with examples. I learned by trial and error, and apart from optimisations, it is an excellent module. Sorry, it's me being slack in updating things! Jose -- GMX DSL-Flatrate 0,- Euro* - ?berall, wo DSL verf?gbar ist! NEU: Jetzt bis zu 16.000 kBit/s! 
http://www.gmx.net/de/go/dsl From oliphant at ee.byu.edu Wed Oct 18 17:55:28 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 18 Oct 2006 15:55:28 -0600 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> Message-ID: <4536A2D0.30900@ee.byu.edu> Sam frizou wrote: >from scipy import * > > >N=zeros((20,25)) >print N[19:,19:].shape ># >>> (1,6) > > >M=matrix(N) >print M[19:,19:].shape ># >>> (6,1) > > Please run the following from scipy import * print repr(matrix) import scipy print scipy.__version__ import numpy print numpy.__version__ Thanks, -Travis From excellent.frizou at gmail.com Wed Oct 18 18:18:35 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Thu, 19 Oct 2006 00:18:35 +0200 Subject: [SciPy-user] Shape Problem In-Reply-To: <4536A2D0.30900@ee.byu.edu> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> <4536A2D0.30900@ee.byu.edu> Message-ID: <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> 0.5.0.2116 1.1.2880 On 10/18/06, Travis Oliphant wrote: > Sam frizou wrote: > > >from scipy import * > > > > > >N=zeros((20,25)) > >print N[19:,19:].shape > ># >>> (1,6) > > > > > >M=matrix(N) > >print M[19:,19:].shape > ># >>> (6,1) > > > > > > Please run the following > > from scipy import * > > print repr(matrix) > > import scipy > print scipy.__version__ > > import numpy > print numpy.__version__ > > > Thanks, > > -Travis > > _______________________________________________ > SciPy-user mailing list > 
SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". From oliphant at ee.byu.edu Wed Oct 18 18:26:06 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 18 Oct 2006 16:26:06 -0600 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> <4536A2D0.30900@ee.byu.edu> <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> Message-ID: <4536A9FE.6090508@ee.byu.edu> Sam frizou wrote: > >0.5.0.2116 >1.1.2880 > > Your version of NumPy is old. There were some issues with Matrix indexing that were fixed a while ago. The current revision is 3352 (you are using revision 2880). The 1.1 was a mis-nomer and was changed in August I belived. 
The name for the most recent development version is 1.0.dev3359 -Travis From excellent.frizou at gmail.com Wed Oct 18 18:33:24 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Thu, 19 Oct 2006 00:33:24 +0200 Subject: [SciPy-user] Shape Problem In-Reply-To: <4536A9FE.6090508@ee.byu.edu> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> <4536A2D0.30900@ee.byu.edu> <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> <4536A9FE.6090508@ee.byu.edu> Message-ID: <1d2f6e8c0610181533t60b912c7j7b64ca5f9b3cdcf0@mail.gmail.com> In fact I installed numpy from "SciPy Superpack for Python 2.4 (Intel)" Thank you very much On 10/19/06, Travis Oliphant wrote: > Sam frizou wrote: > > > > >0.5.0.2116 > >1.1.2880 > > > > > > Your version of NumPy is old. There were some issues with Matrix > indexing that were fixed a while ago. The current revision is 3352 (you > are using revision 2880). The 1.1 was a mis-nomer and was changed in > August I belived. > > The name for the most recent development version is > > 1.0.dev3359 > > > > -Travis > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". 
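For reference, the behavior Travis describes as fixed can be checked with a short snippet (using current NumPy names, where `np.matrix` still exists but plain arrays are preferred): the array slice and the matrix slice should report the same shape, unlike the old build Sam was running.

```python
import numpy as np

N = np.zeros((20, 25))
print(N[19:, 19:].shape)   # (1, 6): one row (19), six columns (19..24)

M = np.matrix(N)
print(M[19:, 19:].shape)   # (1, 6) in fixed versions; the old build reported (6, 1)
```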
From oliphant at ee.byu.edu Wed Oct 18 18:39:54 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 18 Oct 2006 16:39:54 -0600 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610181533t60b912c7j7b64ca5f9b3cdcf0@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> <4536A2D0.30900@ee.byu.edu> <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> <4536A9FE.6090508@ee.byu.edu> <1d2f6e8c0610181533t60b912c7j7b64ca5f9b3cdcf0@mail.gmail.com> Message-ID: <4536AD3A.3080800@ee.byu.edu> Sam frizou wrote: >In fact I installed numpy from "SciPy Superpack for Python 2.4 (Intel)" > > It's too bad that this version of NumPy got placed in the Super-pack. I don't remember exactly how long the version number had a 1.1 on it but I think it was only a few days. When 1.0b1 was released I bumped up the version number of the tree and was brought to my senses a bit later and the whole dev# scheme for the SVN tree was put in-place. You have to look at the revision number to see where 1.1 fits in. It sounds like something pre 1.0b2 which would mean that it won't work with SciPy 0.5.1. 
-Travis From pecora at anvil.nrl.navy.mil Wed Oct 18 19:59:52 2006 From: pecora at anvil.nrl.navy.mil (Louis Pecora) Date: Wed, 18 Oct 2006 19:59:52 -0400 Subject: [SciPy-user] Shape Problem In-Reply-To: <1d2f6e8c0610181533t60b912c7j7b64ca5f9b3cdcf0@mail.gmail.com> References: <1d2f6e8c0610180211w302dcc58l82abbbee6ed798d5@mail.gmail.com> <45361C69.7090009@anvil.nrl.navy.mil> <1d2f6e8c0610180753k7c414655k3e721efb98f1cc60@mail.gmail.com> <4536673F.2070500@ieee.org> <1d2f6e8c0610181157y69f84025j6fbe40fd649fa7c8@mail.gmail.com> <4536A2D0.30900@ee.byu.edu> <1d2f6e8c0610181518g463365aau748f58c5fef7a7b7@mail.gmail.com> <4536A9FE.6090508@ee.byu.edu> <1d2f6e8c0610181533t60b912c7j7b64ca5f9b3cdcf0@mail.gmail.com> Message-ID: <4536BFF8.4010203@anvil.nrl.navy.mil> Sam frizou wrote: > In fact I installed numpy from "SciPy Superpack for Python 2.4 (Intel)" > > Thank you very much > > On 10/19/06, Travis Oliphant wrote: > >> Sam frizou wrote: >> >> >>> >>> 0.5.0.2116 >>> 1.1.2880 >>> >>> >>> >> Your version of NumPy is old. There were some issues with Matrix >> indexing that were fixed a while ago. The current revision is 3352 (you >> are using revision 2880). The 1.1 was a mis-nomer and was changed in >> August I belived. >> >> The name for the most recent development version is >> >> 1.0.dev3359 >> >> >> >> -Travis >> I have the same version, but on a PPC Python2.4 version and I do not have the same problem. Strange. -- Cheers, Lou Pecora Code 6362 Naval Research Lab Washington, DC 20375 USA Ph: +202-767-6002 email: pecora at anvil.nrl.navy.mil From cygnusx1 at mac.com Wed Oct 18 20:58:29 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Wed, 18 Oct 2006 20:58:29 -0400 Subject: [SciPy-user] Partial success (was: numpy-1.0rc2 still does not build on MacOS X) Message-ID: <0CDC2899-9497-45CC-A584-3B67EA6C8F51@mac.com> Okay, I tried using gfortran instead of g77 for the Tiger PPC build (make sure /usr/bin/g77 is gone because scipy will try to use it if there). 
A number of scipy tests fail, some apparently in some more important modules: 'cblas module is empty' FAIL: check_rvs (scipy.stats.tests.test_distributions.test_randint) FAIL: check loadmat case vec FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) I'm concerned about some of these failures. Suggestions? Thanks, Tom Here's my general build instructions and scipy test output: > gFortran > % gunzip gfortran-bin.tar.gz > % sudo tar -xvf gfortran-bin.tar -C / > > FFTW Libraries > % sudo gcc_select 4.0 > % gnutar xzf fftw-3.1.2.tar.gz > % cd fftw-3.1.2 > % ./configure > % make > % sudo make install > % sudo ln -s /usr/local/lib/libfftw3.a /usr/local/lib/libfftw.a > % sudo ln -s /usr/local/lib/libfftw3.la /usr/local/lib/libfftw.la > % sudo ln -s /usr/local/include/fftw3.h /usr/local/include/fftw.h > > numpy > % gnutar xzf numpy-1.0rc2.tar.gz > % cd numpy-1.0rc2 > % sudo python setup.py install > % python > >>> import numpy > >>> numpy.test(1,1) > (ALL PASSED) > > % gnutar xzf scipy-0.5.1.tar.gz > % cd scipy-0.5.1 > % sudo ln -s /usr/lib/gcc/powerpc-apple-darwin8/4.0.1/libgcc.a /usr/ > local/lib/libcc_dynamic.a > % python setup.py build > % sudo python setup.py install > % python > >>> import scipy > >>> scipy.test(level=2) Here the result of the test run >>> scipy.test(level=2) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 97 tests for scipy.stats.stats Found 47 tests for scipy.linalg.decomp Found 2 tests for scipy.integrate.quadrature Found 95 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 5 tests for scipy.interpolate.fitpack Found 1 tests for scipy.interpolate Found 70 tests for scipy.stats.distributions Found 12 tests for scipy.io.mmio Found 10 tests for scipy.stats.morestats Found 4 tests for 
scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.linsolve.umfpack Found 4 tests for scipy.optimize.zeros Found 17 tests for scipy.io.mio Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 1 tests for scipy.integrate Found 14 tests for scipy.linalg.blas Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .......caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ 
........................................................................ .................................................Took 13 points. ..........Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ........................Use minimum degree ordering on A'+A. ...................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. .................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ....................................../Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ interpolate/fitpack2.py:410: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ..................................................................F..... ..................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ........ **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ....................data-ftype: z compared to data D Calling _superlu.zgssv Use minimum degree ordering on A'+A. .data-ftype: c compared to data F Calling _superlu.cgssv Use minimum degree ordering on A'+A. 
.data-ftype: d compared to data d Calling _superlu.dgssv Use minimum degree ordering on A'+A. .data-ftype: s compared to data f Calling _superlu.sgssv Use minimum degree ordering on A'+A. .......E.............Warning: 184549376 bytes requested, 73 bytes read. F....................................................................... ........................................................................ ........................................................................ ........................................................................ ........................................................................ .............................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 1.6582899877e-08 ...Result may be inaccurate, approximate err = 4.54747350886e-13 ......................................................F.......Residual: 1.05006926991e-07 . **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** .......F.............. 
====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case self._check_level(k_label, expected, matdict[k]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level assert len(expected) == len(actual), "Different list lengths at % s" % label TypeError: len() of unsized object ====================================================================== FAIL: check_rvs (scipy.stats.tests.test_distributions.test_randint) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/stats/tests/test_distributions.py", line 84, in check_rvs assert isinstance(val, numpy.ScalarType),`type(val)` AssertionError: ====================================================================== FAIL: check loadmat case vec 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/io/tests/test_mio.py", line 74, in _check_case assert k in matdict, "Missing key at %s" % k_label AssertionError: Missing key at Test 'vec', file:/Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/ tests/./data/testvec_4_GLNX86.mat, variable fit_params ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988694190979004+6.3421880632634224e-37j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - 
actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9988694190979004+6.3421880632634224e-37j) DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1569 tests in 10.511s FAILED (failures=4, errors=1) >>> From oliphant.travis at ieee.org Thu Oct 19 01:37:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 18 Oct 2006 23:37:49 -0600 Subject: [SciPy-user] Installing SciPy on Intel Macs Message-ID: <45370F2D.7060602@ieee.org> I've successfully installed SciPy on Intel Macs using gfortran and cctools 590.36 The last one is necessary to get rid of the __dso_handle error that I was getting with an older version of cctools. I'm not sure if the new version of cctools is in the latest Xcode release of Mac OS X, but I suspect it is. I just wanted to document this somewhere before I forget. -Travis From excellent.frizou at gmail.com Thu Oct 19 05:25:19 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Thu, 19 Oct 2006 11:25:19 +0200 Subject: [SciPy-user] Installing SciPy on Intel Macs In-Reply-To: <45370F2D.7060602@ieee.org> References: <45370F2D.7060602@ieee.org> Message-ID: <1d2f6e8c0610190225n55ad6379o7f8e703d6cda1fb9@mail.gmail.com> Hi, I installed succesfully the latest version of numpy and scipy on my macintel (as described on the scipy wesite: -Apple's developer tools -gcc-intel-bin.tar.gz -fftw version 3.1.2 -numpy 1.0.rc2 -scipy-0.5.1 ). But, when I do from scipy import * I have an error which I think is one you describe: ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Symbol not found: ___dso_handle Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so Expected in: dynamic lookup So, I downloaded and installed cctools 590.36. 
(the dmg) Then reinstalled scipy doing python setup.py build sudo python serup.py install But I still have the same error when I do from scipy import * I read on previous mails that this can be avoid by commentting something concerning sparse matrices, but I need them. Can you please detail how you succeed in your installation please ? Thanks On 10/19/06, Travis Oliphant wrote: > > I've successfully installed SciPy on Intel Macs using gfortran and > cctools 590.36 > > The last one is necessary to get rid of the __dso_handle error that I > was getting with an older version of cctools. > > I'm not sure if the new version of cctools is in the latest Xcode > release of Mac OS X, but I suspect it is. I just wanted to document > this somewhere before I forget. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". From marcos at burke.ath.cx Thu Oct 19 08:01:23 2006 From: marcos at burke.ath.cx (=?ISO-8859-1?Q?Marcos_S=E1nchez_Provencio_Bke?=) Date: Thu, 19 Oct 2006 14:01:23 +0200 Subject: [SciPy-user] python-scipy in amd64 ubuntu dapper Message-ID: <45376913.208@burke.ath.cx> To whom it may concern: https://launchpad.net/distros/ubuntu/+source/python-scipy/+bug/66904 Binary package hint: python-scipy In an amd64 dapper install python -c 'import scipy.xplt as s; s.plmesh' TypeError: Array can not be safely cast to required type Can probably be corrected (does not throw exception, but I have not checked correct results) with in /usr/lib/python2.4/site-packages/scipy/xplt/slice3:1578 mask = find_mask (below, _node_edges3 [itype].astype(Int32)) ///////////// Sorry if this has been reported or if it does not belong here. 
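The "Array can not be safely cast" error in the xplt bug report above comes from Numeric refusing to downcast 64-bit integers (the default integer width on amd64) to 32 bits implicitly; the explicit `.astype` in the proposed patch sidesteps that check. A minimal illustration of the mechanism, using current NumPy names in place of Numeric's `Int32`:

```python
import numpy as np

# NumPy will not "safely" narrow 64-bit integers to 32 bits on its own:
assert not np.can_cast(np.int64, np.int32)

# so code feeding a 32-bit-expecting routine casts explicitly,
# as the suggested fix for slice3 does:
edges = np.array([0, 1, 2, 3], dtype=np.int64)
edges32 = edges.astype(np.int32)
assert edges32.dtype == np.int32
```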
From nwagner at iam.uni-stuttgart.de Thu Oct 19 08:16:22 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 19 Oct 2006 14:16:22 +0200 Subject: [SciPy-user] python-scipy in amd64 ubuntu dapper In-Reply-To: <45376913.208@burke.ath.cx> References: <45376913.208@burke.ath.cx> Message-ID: <45376C96.3060606@iam.uni-stuttgart.de> Marcos S?nchez Provencio Bke wrote: > To whom it may concern: > https://launchpad.net/distros/ubuntu/+source/python-scipy/+bug/66904 > > Binary package hint: python-scipy > > In an amd64 dapper install > > python -c 'import scipy.xplt as s; s.plmesh' > > TypeError: Array can not be safely cast to required type > > Can probably be corrected (does not throw exception, but I have not > checked correct results) with > > in /usr/lib/python2.4/site-packages/scipy/xplt/slice3:1578 > mask = find_mask (below, _node_edges3 [itype].astype(Int32)) > > ///////////// > > Sorry if this has been reported or if it does not belong here. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > AFAIK the sandbox package xplt is not ready for 64-bit. You may switch over to matplotlib. Nils From jimmy.touma.ctr at eglin.af.mil Thu Oct 19 09:29:50 2006 From: jimmy.touma.ctr at eglin.af.mil (Touma Jimmy E CTR USAF AFRL/MNGG) Date: Thu, 19 Oct 2006 08:29:50 -0500 Subject: [SciPy-user] interp1d question Message-ID: Thanks Robert. I will update my version. However, x is an array with one dimension(Mx1). 'u' on the other hand is an array with MxN dimensions and I was wanting to interpolate column wise as opposed to row wise (that'll save me a couple of operation after reading the data in from a file) Thanks, Jimmy PS. It works row wise... 
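A sketch of the column-wise interpolation Jimmy is after, following Robert Kern's point that `x` must stay one-dimensional: leave `x` as a flat length-M array and stack the samples into an (M, N) array, then `axis=0` interpolates down each column. This uses current `scipy.interpolate` names; the sample data mirrors the original script.

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(0, 10)                       # 1-D, length 10 -- do not reshape to (10, 1)
y = np.exp(-x / 3.0)
z = np.exp(-x / 2.0)
v = np.exp(-x)
u = np.column_stack((y, z, v))             # shape (10, 3): one row per sample point

f = interp1d(x, u, kind='linear', axis=0)  # interpolate down each column
xnew = np.arange(0, 9, 0.5)
unew = f(xnew)                             # shape (18, 3)
```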
-----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Robert Kern Sent: Wednesday, October 18, 2006 4:35 PM To: SciPy Users List Subject: Re: [SciPy-user] interp1d question Touma Jimmy E CTR USAF AFRL/MNGG wrote: > Hi all, > > > I am trying to interpolate in 1d along a columns as opposed to rows > and it doesn't seem to work. The code below emulates reading data > where interpolation should be done column wise. The error when > running this code follows. If you could please tell me what I am doing > wrong and how to interpolate along columns. > /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py in > __init__(self, x, y, kind, axis, copy, bounds_error, fill_value) > > 143 len_x,len_y = shape(oriented_x)[interp_axis], > shape(oriented_y)[interp_axis] > 144 if len_x != len_y: > --> 145 raise ValueError, "x and y arrays must be equal in > length along "\ > 146 "interpolation axis." > 147 if len_x < 2 or len_y < 2: > > ValueError: x and y arrays must be equal in length along interpolation > axis. > WARNING: Failure executing file: > > In [3]: numpy.__version__ > Out[3]: '1.0b5' > > In [4]: scipy.__version__ > Out[4]: '0.3.2' You have a *very* old version of scipy. It uses Numeric, not numpy. Please upgrade. In recent versions, interp1d does not accept x arrays with more than one dimension. I'm not entirely sure why the old version did; it makes no sense. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Thu Oct 19 09:48:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 19 Oct 2006 07:48:08 -0600 Subject: [SciPy-user] Installing SciPy on Intel Macs In-Reply-To: <1d2f6e8c0610190225n55ad6379o7f8e703d6cda1fb9@mail.gmail.com> References: <45370F2D.7060602@ieee.org> <1d2f6e8c0610190225n55ad6379o7f8e703d6cda1fb9@mail.gmail.com> Message-ID: <45378218.1010904@ieee.org> Sam frizou wrote: > Hi, > > I installed succesfully the latest version of numpy and scipy on my > macintel (as described on the scipy wesite: > -Apple's developer tools > -gcc-intel-bin.tar.gz > -fftw version 3.1.2 > -numpy 1.0.rc2 > -scipy-0.5.1 > ). > > But, when I do > from scipy import * > > I have an error which I think is one you describe: > > ImportError: Failure linking new module: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: > Symbol not found: ___dso_handle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so > Expected in: dynamic lookup > > > So, I downloaded and installed cctools 590.36. (the dmg) > > Then reinstalled scipy doing > > python setup.py build > sudo python serup.py install > You need to remove the build directory and redo the install. Otherwise the codes are not re-linked using the upgraded cctools. That's all I did and it seemed to work. > I read on previous mails that this can be avoid by commentting > something concerning sparse matrices, but I need them. > I think that's a separate issue. -Travis From excellent.frizou at gmail.com Thu Oct 19 10:51:47 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Thu, 19 Oct 2006 16:51:47 +0200 Subject: [SciPy-user] Invert of a sparse matrix Message-ID: <1d2f6e8c0610190751p6fab62fev5dc5b5bfcef5d5c3@mail.gmail.com> Hi, Is there a trick for computing the invert of a square SPARSE matrix ? 
Thanks From nwagner at iam.uni-stuttgart.de Thu Oct 19 10:58:53 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 19 Oct 2006 16:58:53 +0200 Subject: [SciPy-user] Invert of a sparse matrix In-Reply-To: <1d2f6e8c0610190751p6fab62fev5dc5b5bfcef5d5c3@mail.gmail.com> References: <1d2f6e8c0610190751p6fab62fev5dc5b5bfcef5d5c3@mail.gmail.com> Message-ID: <453792AD.1030204@iam.uni-stuttgart.de> Sam frizou wrote: > Hi, > > Is there a trick for computing the invert of a square SPARSE matrix ? > > Thanks > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > You can solve n equations where the i-th rhs is a unit-vector e_i in the corresponding direction. But why do you need the inverse of a sparse matrix ? Nils From excellent.frizou at gmail.com Thu Oct 19 11:12:53 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Thu, 19 Oct 2006 17:12:53 +0200 Subject: [SciPy-user] Invert of a sparse matrix In-Reply-To: <453792AD.1030204@iam.uni-stuttgart.de> References: <1d2f6e8c0610190751p6fab62fev5dc5b5bfcef5d5c3@mail.gmail.com> <453792AD.1030204@iam.uni-stuttgart.de> Message-ID: <1d2f6e8c0610190812r263d1cc3ocbedae46ed3eccca@mail.gmail.com> > But why do you need the inverse of a sparse matrix ? I compute generators of homology groups of combinatorial objects. For that I use HUGE and SPARSE matrices. thanks, From robert.kern at gmail.com Thu Oct 19 11:14:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Oct 2006 10:14:14 -0500 Subject: [SciPy-user] interp1d question In-Reply-To: References: Message-ID: <45379646.8060003@gmail.com> Touma Jimmy E CTR USAF AFRL/MNGG wrote: > Thanks Robert. I will update my version. However, x is an array with one > dimension(Mx1). 
'u' on the other hand is an array with MxN dimensions > and I was wanting to interpolate column wise as opposed to row wise > (that'll save me a couple of operation after reading the data in from a > file) No, x has two dimensions, (M, 1). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Thu Oct 19 11:46:34 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 19 Oct 2006 08:46:34 -0700 Subject: [SciPy-user] python-scipy in amd64 ubuntu dapper In-Reply-To: <45376C96.3060606@iam.uni-stuttgart.de> References: <45376913.208@burke.ath.cx> <45376C96.3060606@iam.uni-stuttgart.de> Message-ID: <45379DDA.1010808@astraw.com> Nils Wagner wrote: > Marcos S?nchez Provencio Bke wrote: > >> To whom it may concern: >> https://launchpad.net/distros/ubuntu/+source/python-scipy/+bug/66904 >> > AFAIK the sandbox package xplt is not ready for 64-bit. > You may switch over to matplotlib. > Furthermore, Ubuntu's official Dapper packages for scipy are 0.3.2, which are not going to receive much care from the developers, who are now focused on 0.5.x development. You're welcome to use my Dapper packages of the newer release, as described here: http://scipy.org/Installing_SciPy/Linux From cimrman3 at ntc.zcu.cz Thu Oct 19 11:53:38 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 19 Oct 2006 17:53:38 +0200 Subject: [SciPy-user] Invert of a sparse matrix In-Reply-To: <453792AD.1030204@iam.uni-stuttgart.de> References: <1d2f6e8c0610190751p6fab62fev5dc5b5bfcef5d5c3@mail.gmail.com> <453792AD.1030204@iam.uni-stuttgart.de> Message-ID: <45379F82.5010701@ntc.zcu.cz> Nils Wagner wrote: > Sam frizou wrote: >> Hi, >> >> Is there a trick for computing the invert of a square SPARSE matrix ? 
>>
>> Thanks
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>>
> You can solve n equations where the i-th rhs is a unit-vector e_i in the
> corresponding direction.

and for this you should first perform LU factorization and then use the
pre-factorized matrix for the multiple-rhs problem, see scipy.linsolve
docs.

r.

From excellent.frizou at gmail.com Thu Oct 19 12:49:11 2006
From: excellent.frizou at gmail.com (Sam frizou)
Date: Thu, 19 Oct 2006 18:49:11 +0200
Subject: [SciPy-user] Sparse matrices operations
Message-ID: <1d2f6e8c0610190949s3491b4e2qd4d5af42832ef111@mail.gmail.com>

Hi,

I use sparse matrices and I have to do row and column operations.

1) Is it possible to assign an entire row? An entire column? For example,
if R is a lil_matrix of shape (1,10) and M is a lil_matrix of shape (3,10)
and I want to assign R to the second row of M, I am looking for something
like: M[2,:]=R or M.setrow(2,R)

2) Is it possible to have the list of the empty coordinates? ("where" does
not seem to be implemented yet)

Thanks

From jkgruet at sandia.gov Thu Oct 19 15:14:43 2006
From: jkgruet at sandia.gov (James K. Gruetzner)
Date: Thu, 19 Oct 2006 13:14:43 -0600
Subject: [SciPy-user] numpy - scipy version hell
Message-ID: <200610191314.49095.jkgruet@sandia.gov>

I'm using Fedora Core 5, that insect-laden wonder, which includes python
2.4.3. I'm trying to install numpy (scipy-core) and scipy (the rest). I
can't seem to get around package conflicts. I've tried two methods, one in
which numpy fails and one in which scipy fails. This is the "scipy fails"
one (I sent the numpy fails one to the numpy-discussions list).

METHOD 2 (scipy fails)
OK, remove the previously installed numpy packages
from /usr/lib/python2.4/site-packages. Now use yum to install numpy
0.9.8-1.fc5 from the extras repository. Hooray, numpy imports fine!
So, now time to download scipy, v.0.5.1.
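The recipe from the sparse-inverse thread above — factorize once, then solve against each unit vector e_i — can be sketched as follows. In the scipy 0.5.x discussed here the factorization lived in scipy.linsolve; this sketch uses the later scipy.sparse.linalg location and a toy 3x3 matrix, so treat paths and sizes as illustrative only:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu  # lived in scipy.linsolve in the 0.5.x era

# Toy non-singular sparse matrix; a real homology boundary matrix would be huge.
A = sparse.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 3.0, 0.0],
                                [0.0, 0.0, 2.0]]))

lu = splu(A)          # LU factorization, done once
n = A.shape[0]

# The i-th column of the inverse is the solution of A x = e_i.
A_inv = np.column_stack([lu.solve(np.eye(n)[:, i]) for i in range(n)])

# Sanity check: A times its computed inverse is the identity. Note the inverse
# of a sparse matrix is generally dense, which is Nils's point about avoiding it.
ok = np.allclose(A.toarray().dot(A_inv), np.eye(n))
```

Solving against the factorization column by column is far cheaper than refactorizing per right-hand side, but the dense result is why this is usually a last resort for HUGE matrices.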
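As for Sam's row-assignment question above: later scipy releases did grow exactly the `M[2,:] = R` style of slice assignment on lil_matrix (whether 0.5.1 supports it I can't confirm), and there is no direct "empty coordinates" query — the complement of `nonzero()` is the usual workaround. A sketch against a recent scipy:

```python
import numpy as np
from scipy.sparse import lil_matrix

M = lil_matrix((3, 10))
R = lil_matrix((1, 10))
R[0, 2] = 5.0
R[0, 7] = -1.0

M[2, :] = R                  # assign an entire row from a (1, 10) sparse matrix
M[:, 3] = np.ones((3, 1))    # assign an entire column from a dense one

# No built-in "where are the empty entries": take the complement of the
# occupied coordinates instead.
occupied = set(zip(*M.nonzero()))
```

lil_matrix is the format designed for this kind of incremental construction; convert to csc/csr before heavy arithmetic.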
Untarball it and cd. It fails to install, with the following (long)
error message:

non-existing path in 'Lib/maxentropy': 'doc'
Traceback (most recent call last):
  File "setup.py", line 55, in ?
    setup_package()
  File "setup.py", line 47, in setup_package
    configuration=configuration )
  File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 140, in setup
    config = configuration()
  File "setup.py", line 19, in configuration
    config.add_subpackage('Lib')
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 740, in add_subpackage
    caller_level = 2)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 723, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "./Lib/setup.py", line 22, in configuration
    config.add_subpackage('ndimage')
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 740, in add_subpackage
    caller_level = 2)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 723, in get_subpackage
    caller_level = caller_level + 1)
  File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 655, in _get_configuration_from_setup_py
    ('.py', 'U', 1))
  File "Lib/ndimage/setup.py", line 3, in ?
    from numpy.numarray import get_numarray_include_dirs
ImportError: No module named numarray

I suspect that the packages are a bit out of sync, but am not sure.

Bottom line question: how do I get both installed at once?

Thanks!

James
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From oliphant at ee.byu.edu Thu Oct 19 15:17:29 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 19 Oct 2006 13:17:29 -0600 Subject: [SciPy-user] numpy - scipy version hell In-Reply-To: <200610191314.49095.jkgruet@sandia.gov> References: <200610191314.49095.jkgruet@sandia.gov> Message-ID: <4537CF49.8000008@ee.byu.edu> James K. Gruetzner wrote: >I'm using Fedora core 5, that insect-laden wonder, which includes python >2.4.3. I'm trying to install numpy (scipy-core) and scipy (the rest). I >can't seem to get around package conflicts. I've tried two methods, one in >which numpy fails and one in which scipy fails. This is the "scipy fails" >one (I sent the numpy fails one to the numpy-discussions list). > > > Very few packages will have the correct versions of numpy and scipy that work together. The latest that work together is Scipy 0.5.1 and NumPy 1.0rc2 Look for an updated SciPy release to work with NumPy 1.0 when it comes out next week. -Travis From robert.kern at gmail.com Thu Oct 19 15:19:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Oct 2006 14:19:01 -0500 Subject: [SciPy-user] numpy - scipy version hell In-Reply-To: <200610191314.49095.jkgruet@sandia.gov> References: <200610191314.49095.jkgruet@sandia.gov> Message-ID: <4537CFA5.1040002@gmail.com> James K. Gruetzner wrote: > I'm using Fedora core 5, that insect-laden wonder, which includes python > 2.4.3. I'm trying to install numpy (scipy-core) and scipy (the rest). I > can't seem to get around package conflicts. I've tried two methods, one in > which numpy fails and one in which scipy fails. This is the "scipy fails" > one (I sent the numpy fails one to the numpy-discussions list). > > METHOD 2 (scipy fails) > OK, remove the previously installed numpy packages > from /usr/lib/python2.4/site-packages. Now use yum to install numpy > 0.9.8-1.fc5 from the extras repository. 
Hooray, numpy imports fine! > So, now time to download scipy, v.0.5.1. Untarball it and cd. IT fails to > install, with the following (long) error message: numpy 0.9.8 is quite old compared to scipy 0.5.1. You will want the recent 1.0rc3 release. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jkgruet at sandia.gov Thu Oct 19 15:25:22 2006 From: jkgruet at sandia.gov (James K. Gruetzner) Date: Thu, 19 Oct 2006 13:25:22 -0600 Subject: [SciPy-user] numpy - scipy version hell In-Reply-To: <4537CFA5.1040002@gmail.com> References: <200610191314.49095.jkgruet@sandia.gov> <4537CFA5.1040002@gmail.com> Message-ID: <200610191325.24097.jkgruet@sandia.gov> Thanks much, Robert. Alas, that's where METHOD 1 (numpy fails) has its problem. I can't get numpy 1.0rc3 to install: it gives a different error (undefined symbol: s_cat). Thanks again. James On Thursday 19 October 2006 13:19, you wrote: > James K. Gruetzner wrote: > > I'm using Fedora core 5, that insect-laden wonder, which includes python > > 2.4.3. I'm trying to install numpy (scipy-core) and scipy (the rest). > > I can't seem to get around package conflicts. I've tried two methods, > > one in which numpy fails and one in which scipy fails. This is the > > "scipy fails" one (I sent the numpy fails one to the numpy-discussions > > list). > > > > METHOD 2 (scipy fails) > > OK, remove the previously installed numpy packages > > from /usr/lib/python2.4/site-packages. Now use yum to install numpy > > 0.9.8-1.fc5 from the extras repository. Hooray, numpy imports fine! > > So, now time to download scipy, v.0.5.1. Untarball it and cd. IT fails > > to install, with the following (long) error message: > > numpy 0.9.8 is quite old compared to scipy 0.5.1. You will want the recent > 1.0rc3 release. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From cygnusx1 at mac.com Thu Oct 19 20:02:56 2006 From: cygnusx1 at mac.com (Tom Bridgman) Date: Thu, 19 Oct 2006 20:02:56 -0400 Subject: [SciPy-user] numpy-1.0rc3 on MacOS X 10.4 (PPC). Builds with errors, tests fail Message-ID: Still trying to get scipy working. Saw the new numpy and tried it under gFortran & fftw-3.1.2. Seems to build okay but tests generate many warnings, then fail. A number of errors appear in _configuretest.c:4 stage of the build. I had to truncate the listing as it exceeding the posting limit for this list. Here's the test output for numpy. Suggestions welcome. > Planck:/Users/Shared/ToBeRestored/Python bridgman$ python > Python 2.4.3 (#1, Apr 7 2006, 10:54:33) > [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> numpy.test(1,1) > Found 5 tests for numpy.distutils.misc_util > Found 3 tests for numpy.lib.getlimits > Found 31 tests for numpy.core.numerictypes > Found 32 tests for numpy.linalg > Found 13 tests for numpy.core.umath > Found 4 tests for numpy.core.scalarmath > Found 9 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 183 tests for numpy.core.multiarray > Found 3 tests for numpy.fft.helper > Found 36 tests for numpy.core.ma > Found 12 tests for numpy.lib.twodim_base > Found 10 tests for numpy.core.defmatrix > Found 1 tests for numpy.lib.ufunclike > Found 4 tests for numpy.ctypeslib > Found 41 tests for numpy.lib.function_base > Found 2 tests for numpy.lib.polynomial > Found 8 tests for numpy.core.records > Found 26 tests for numpy.core.numeric > Found 4 tests for numpy.lib.index_tricks > Found 47 tests for numpy.lib.shape_base > Found 0 tests for __main__ > ...................................................................... 
> .................................Warning: invalid value encountered > in divide > ..Warning: invalid value encountered in divide > ..Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > ..Warning: invalid value encountered in divide > .Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > ..Warning: invalid value encountered in divide > ..Warning: invalid value encountered in divide > ..Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > .Warning: divide by zero encountered in divide > ........Warning: invalid value encountered in divide > .Warning: invalid value encountered in divide > ..Warning: divide by zero encountered in divide > ...................................................................... > ................................................F..................... 
> ............................................................Warning: d > ivide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > Warning: divide by zero encountered in divide > .....................Warning: invalid value encountered in sqrt > Warning: invalid value encountered in log > Warning: invalid value encountered in log10 > ..Warning: invalid value encountered in sqrt > Warning: invalid value encountered in sqrt > Warning: divide by zero encountered in log > Warning: divide by zero encountered in log > Warning: divide by zero encountered in log10 > Warning: divide by zero encountered in log10 > Warning: invalid value encountered in arcsin > Warning: invalid value encountered in arcsin > Warning: invalid value encountered in arccos > Warning: invalid value encountered in arccos > Warning: invalid value encountered in arccosh > Warning: invalid value encountered in arccosh > Warning: divide by zero encountered in arctanh > Warning: divide by zero encountered in arctanh > Warning: invalid value encountered in divide > Warning: invalid value encountered in true_divide > Warning: invalid value encountered in floor_divide > Warning: invalid value encountered in remainder > Warning: invalid value encountered in fmod > ...................................................................... > ...................................................................... > ................... 
> ====================================================================== > FAIL: Ticket #112 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/numpy/core/tests/test_regression.py", line > 219, in check_longfloat_repr > assert(str(a)[1:9] == str(a[0])[:8]) > AssertionError > > ---------------------------------------------------------------------- > Ran 516 tests in 1.782s > > FAILED (failures=1) > > >>> > Here's the build output: Planck:/Users/Shared/ToBeRestored/Python/numpy-1.0rc3 bridgman$ python setup.py build Running from numpy source directory. F2PY Version 2_3364 blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] running build running config_fc running build_src building py_modules sources creating build creating build/src.macosx-10.4-fat-2.4 creating build/src.macosx-10.4-fat-2.4/numpy creating build/src.macosx-10.4-fat-2.4/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.macosx-10.4-fat-2.4/numpy/core Generating build/src.macosx-10.4-fat-2.4/numpy/core/config.h customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler Could not locate executable g77 Could not locate executable f77 Could not locate executable f95 customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using config C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: 
'-I/Library/Frameworks/Python.framework/Versions/2.4/ include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -L/Library/Frameworks/Python.framework/Versions/2.4/ lib -L/usr/local/lib -L/usr/lib -o _configtest _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! 
removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c gcc _configtest.o -o _configtest success! removing: _configtest.c _configtest.o _configtest C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: error: 'isnan' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) _configtest.c: In function 'main': _configtest.c:4: error: 'isnan' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) lipo: can't figure out the architecture type of: /var/tmp//ccpygvET.out _configtest.c: In function 'main': _configtest.c:4: error: 'isnan' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) _configtest.c: In function 'main': _configtest.c:4: error: 'isnan' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) lipo: can't figure out the architecture type of: /var/tmp//ccpygvET.out failure. 
removing: _configtest.c _configtest.o C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: _configtest.c _configtest.c: In function 'main': _configtest.c:4: error: 'isinf' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) _configtest.c: In function 'main': _configtest.c:4: error: 'isinf' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) lipo: can't figure out the architecture type of: /var/tmp//ccEsTb4T.out _configtest.c: In function 'main': _configtest.c:4: error: 'isinf' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) _configtest.c: In function 'main': _configtest.c:4: error: 'isinf' undeclared (first use in this function) _configtest.c:4: error: (Each undeclared identifier is reported only once _configtest.c:4: error: for each function it appears in.) lipo: can't figure out the architecture type of: /var/tmp//ccEsTb4T.out failure. 
removing: _configtest.c _configtest.o C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 [truncated] From alan at ajackson.org Thu Oct 19 23:03:25 2006 From: alan at ajackson.org (Alan Jackson) Date: Thu, 19 Oct 2006 22:03:25 -0500 Subject: [SciPy-user] fft giving error Message-ID: <20061019220325.ed3fc35b.alan@ajackson.org> I'm trying to run the example from http://linuxgazette.net/115/andreasen.html to plot the frequency spectrum of sunspot numbers from ipython... from scipy import * from scipy import fftpack from scipy.io import read_array sunspot = read_array('sunspots.dat') year=sunspot[:,0] wolfer=sunspot[:,1] Y=fft(wolfer) and I get the following error.... In [68]: Y=fft(wolfer) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/ajackson/projects/cyclicity/ TypeError: 'module' object is not callable Running gentoo Linux, 2.6.16-gentoo-r3, AMD Athlon X2 64 bit, blas-atlas-3.7.17 lapack-atlas-3.7.17 scipy-0.5.1 numpy-1.0_rc1 python-2.4.3 Parenthetically, it all builds quite nicely with the gentoo emerge. Kudos to the gentoo package maintainers. -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. 
- Blake | ----------------------------------------------------------------------- From pgmdevlist at gmail.com Thu Oct 19 23:14:49 2006 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 19 Oct 2006 23:14:49 -0400 Subject: [SciPy-user] fft giving error In-Reply-To: <20061019220325.ed3fc35b.alan@ajackson.org> References: <20061019220325.ed3fc35b.alan@ajackson.org> Message-ID: <200610192314.49503.pgmdevlist@gmail.com> On Thursday 19 October 2006 23:03, Alan Jackson wrote: > I'm trying to run the example from > http://linuxgazette.net/115/andreasen.html to plot the frequency spectrum > of sunspot numbers from ipython... > > from scipy import * > from scipy import fftpack When you type from scipy import * I think you also import the fft module from numpy (as there's a from numpy import * in scipy.__init__). So 'fft' is the package numpy.fft When you type from scipy import fftpack the fft routine is available as fftpack.fft So, if you wanna overwrite fft, you should do from scipy.fftpack import * From ckkart at hoc.net Fri Oct 20 03:06:29 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 20 Oct 2006 16:06:29 +0900 Subject: [SciPy-user] fft giving error In-Reply-To: <20061019220325.ed3fc35b.alan@ajackson.org> References: <20061019220325.ed3fc35b.alan@ajackson.org> Message-ID: <45387575.5070003@hoc.net> Alan Jackson wrote: > I'm trying to run the example from http://linuxgazette.net/115/andreasen.html > to plot the frequency spectrum of sunspot numbers from ipython... > > from scipy import * > from scipy import fftpack > > from scipy.io import read_array > sunspot = read_array('sunspots.dat') > > year=sunspot[:,0] > wolfer=sunspot[:,1] > Y=fft(wolfer) > > and I get the following error.... 
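Pierre's diagnosis of the error above — `from scipy import *` pulls in numpy's `fft` *module*, which shadows any `fft` *function*, so calling it raises "TypeError: 'module' object is not callable" — is easy to sidestep with an explicit import. This sketch substitutes a synthetic 16-cycle sine for the sunspots.dat file, which isn't available here:

```python
import numpy as np
from numpy.fft import fft  # explicit import; scipy.fftpack.fft behaves the same

# Synthetic stand-in for the sunspot series: exactly 16 cycles over 256 samples.
n = 256
wolfer = np.sin(2 * np.pi * 16 * np.arange(n) / n)

Y = fft(wolfer)                     # works: fft is now the function, not a module
power = np.abs(Y[1:n // 2]) ** 2    # one-sided power spectrum, DC term dropped
peak_bin = int(np.argmax(power)) + 1  # dominant frequency bin: 16, as injected
```

Importing the function by name (or calling `fftpack.fft` through the module) avoids the wildcard-import collision entirely.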
> > > In [68]: Y=fft(wolfer) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most recent call last) > > /home/ajackson/projects/cyclicity/ > > TypeError: 'module' object is not callable I think, some numpy/scipy modules have been rearranged since that article was written. In fact the line from scipy import fftpack is not needed since fftpack is not used in the script. With from scipy import * you'll now import a *module* called fft. The function you're looking for is then called fft.fft. Try help(fft). Christian From trond.danielsen at gmail.com Fri Oct 20 05:17:06 2006 From: trond.danielsen at gmail.com (Trond Danielsen) Date: Fri, 20 Oct 2006 11:17:06 +0200 Subject: [SciPy-user] Building scipy on FC5 x86_64 Message-ID: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> Hi! I am trying to build scipy on Fedora Core 5 (x86_64), but setup.py cannot find the required libraries. How can I add /usr/lib64 to the lib-path? I have followed the instructions on the website for installing on fc5, so the required libs are installed. scipy version is 0.4.8 and numpy is the latest from fedora extra. Best regards, -- Trond Danielsen From nwagner at iam.uni-stuttgart.de Fri Oct 20 05:24:32 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Oct 2006 11:24:32 +0200 Subject: [SciPy-user] Building scipy on FC5 x86_64 In-Reply-To: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> References: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> Message-ID: <453895D0.5050908@iam.uni-stuttgart.de> Trond Danielsen wrote: > Hi! > > I am trying to build scipy on Fedora Core 5 (x86_64), but setup.py > cannot find the required libraries. How can I add /usr/lib64 to the > lib-path? > > I have followed the instructions on the website for installing on fc5, > so the required libs are installed. scipy version is 0.4.8 and numpy > is the latest from fedora extra. 
> > Best regards, > AFAIK you can add an entry in the file system_info.py in line 140 (numpy/numpy/distutils). Nils From ndbecker2 at gmail.com Fri Oct 20 09:54:37 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 20 Oct 2006 09:54:37 -0400 Subject: [SciPy-user] Building scipy on FC5 x86_64 References: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> <453895D0.5050908@iam.uni-stuttgart.de> Message-ID: Nils Wagner wrote: > Trond Danielsen wrote: >> Hi! >> >> I am trying to build scipy on Fedora Core 5 (x86_64), but setup.py >> cannot find the required libraries. How can I add /usr/lib64 to the >> lib-path? >> >> I have followed the instructions on the website for installing on fc5, >> so the required libs are installed. scipy version is 0.4.8 and numpy >> is the latest from fedora extra. >> >> Best regards, >> > AFAIK you can add an entry in the file system_info.py in line 140 > (numpy/numpy/distutils). > > Nils env CFLAGS="$RPM_OPT_FLAGS" ATLAS=%{_libdir} FFTW=%{_libdir} BLAS=%{_libdir} LAPACK=%{_libdir} python setup.py config_fc --fcompiler=gnu95 build From icy.flame.gm at gmail.com Fri Oct 20 10:20:21 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Fri, 20 Oct 2006 15:20:21 +0100 Subject: [SciPy-user] scipy.test failures=22, errors=3; Looks like i am doing something horribly wrong again. Message-ID: Any thoughts are appreciated. 
Python 2.4.3 (#1, Jun 13 2006, 16:41:18) [GCC 4.0.2 20051125 (Red Hat 4.0.2-8)] on linux2 >>> import numpy.version >>> numpy.version.version '1.0rc3' >>> import scipy >>> scipy.version.version '0.5.1' >>> scipy.test(level=1) ====================================================================== ERROR: line-search Newton conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 104, in check_ncg full_output=False, disp=False, retall=False) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 971, in fmin_ncg alphak, fc, gc, old_fval = line_search_BFGS(f,xk,pk,gfk,old_fval) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 535, in line_search_BFGS phi_a2 = apply(f,(xk+alpha2*pk,)+args) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 94, in function_wrapper return function(x, *args) File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 31, in func raise RuntimeError, "too many iterations in optimization routine" RuntimeError: too many iterations in optimization routine ====================================================================== ERROR: Solve: single precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 25, in check_solve_complex_without_umfpack x = linsolve.spsolve(a, b) File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 76, in spsolve return gssv(N, lastel, data, index0, index1, b, csc, permc_spec)[0] TypeError: array cannot be safely cast to required type ====================================================================== ERROR: check loadmat case cellnest 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case self._check_level(k_label, expected, matdict[k]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level assert len(expected) == len(actual), "Different list lengths at %s" % label TypeError: len() of unsized object ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 85, in check_definition assert_array_almost_equal(diff(sin(x)),direct_diff(sin(x))) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 87.5%) x: array([ 1.00000000e+00, 9.23879533e-01, 7.07106781e-01, 3.82683432e-01, 8.80098148e-17, -3.82683432e-01, -7.07106781e-01, -9.23879533e-01, -1.00000000e-00,... 
y: array([ -0.00000000e+00, 5.00000000e-01, 7.76627939e-17, 1.29677356e-16, 1.24900090e-16, 2.16128927e-16, 1.49721655e-16, -9.71445147e-17, 0.00000000e+00,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 301, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 87.5%) x: array([ 1.00000000e+00, 9.23879533e-01, 7.07106781e-01, 3.82683432e-01, 8.23044233e-17, -3.82683432e-01, -7.07106781e-01, -9.23879533e-01, -1.00000000e+00,... y: array([ -0.00000000e+00 +0.00000000e+00j, 5.00000000e-01 -7.31057196e-17j, 3.88313970e-17 -2.42387134e-17j,... 
====================================================================== FAIL: check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 332, in check_random_even assert_array_almost_equal(direct_hilbert(direct_ihilbert(f)),f) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.00000000e+00+0.j , -3.39208538e-04+0.00036363j, -1.06020671e-03-0.00021316j, -5.23844841e-04+0.00028867j, 3.09111343e-04-0.0020875j , 7.67990574e-04+0.00133127j,... y: array([-0.11060715, -0.49769099, 0.47834226, -0.33737298, 0.24581014, 0.02109089, -0.4631084 , 0.19285299, 0.47922014, -0.39063467, 0.22212061, 0.031721 , 0.04744712, -0.10681613, -0.04652146,... 
====================================================================== FAIL: check_tilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 311, in check_tilbert_relation assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 62.5%) x: array([ 1.00000000e+00, 6.53281482e-01, 2.12809267e-16, -2.70598050e-01, -1.06260199e-16, 2.70598050e-01, 1.53930800e-16, -6.53281482e-01, -1.00000000e+00,... y: array([ -0.00000000e+00 +0.00000000e+00j, 2.50000000e-01 -3.52840045e-17j, -1.19052598e-18 +4.47468130e-17j,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 370, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 87.5%) x: array([ -1.00000000e+00, -9.23879533e-01, -7.07106781e-01, -3.82683432e-01, -8.23044233e-17, 3.82683432e-01, 7.07106781e-01, 9.23879533e-01, 1.00000000e+00,... y: array([ 0.00000000e+00 -0.00000000e+00j, -5.00000000e-01 +7.31057196e-17j, -3.88313970e-17 +2.42387134e-17j,... 
====================================================================== FAIL: check_itilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 380, in check_itilbert_relation assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 62.5%) x: array([ -1.00000000e+00, -6.53281482e-01, -2.12809267e-16, 2.70598050e-01, 1.06260199e-16, -2.70598050e-01, -1.53930800e-16, 6.53281482e-01, 1.00000000e+00,... y: array([ 0.00000000e+00 -0.00000000e+00j, -2.50000000e-01 +3.52840045e-17j, 1.19052598e-18 -4.47468130e-17j,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_itilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 288, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 87.5%) x: array([ -9.96679946e-02, -9.20812203e-02, -7.04759149e-02, -3.81412903e-02, -7.12619106e-18, 3.81412903e-02, 7.04759149e-02, 9.20812203e-02, 9.96679946e-02,... 
y: array([ -0.00000000e+00 +0.00000000e+00j, -4.98339973e-02 +7.28630047e-18j, -7.66435941e-18 +4.78412381e-18j,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_shift) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 390, in check_definition assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.09983342, 0.19688015, 0.29203083, 0.38436909, 0.47300566, 0.55708693, 0.63580315, 0.70839623, 0.77416708, 0.83248227, 0.8827802 , 0.92457648, 0.95746858, 0.98113973, 0.99536197,... y: array([ 1.23217883e-17, 4.99167083e-02, 2.46487097e-19, 8.84658105e-18, 5.94125166e-18, 2.09940707e-17, 1.47767496e-17, 1.22028436e-17, -4.63296513e-18,... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 227, in check_definition assert_array_almost_equal (y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 87.5%) x: array([ 1.00333111e+01, 9.26957080e+00, 7.09462234e+00, 3.83958194e+00, 1.10138648e-15, -3.83958194e+00, -7.09462234e+00, -9.26957080e+00, -1.00333111e+01,... y: array([ -0.00000000e+00 +0.00000000e+00j, 5.01665557e+00 -7.33492430e-16j, 1.96738867e-16 -1.22805188e-16j,... ====================================================================== FAIL: check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 240, in check_random_even assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ -0.00000000e+00 +0.00000000e+00j, 1.43984280e-03 +1.83732630e-03j, -8.76803558e-04 +9.96589953e-04j,... 
y: array([ 0.50751095, -0.23808924, -0.20386413, -0.30474938, -0.21144356, 0.45414021, -0.11937844, -0.0768124 , -0.15613727, 0.10529999, 0.03414752, -0.22981309, -0.38280197, 0.06423689, 0.06691425,... ====================================================================== FAIL: check_rvs (scipy.stats.tests.test_distributions.test_randint) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_distributions.py", line 84, in check_rvs assert isinstance(val, numpy.ScalarType),`type(val)` AssertionError: ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 99, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.+0.j, 2.+0.j, 3.+0.j, 4.+1.j, 1.+0.j, 2.+0.j, 3.+0.j, 4.+2.j]) y: array([ 20. +3.j , -0.70710678+0.70710678j, -7. +4.j , -0.70710678-0.70710678j, -4. -3.j , 0.70710678-0.70710678j, -1. 
-4.j , 0.70710678+0.70710678j]) ====================================================================== FAIL: check_djbfft (scipy.fftpack.tests.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 125, in check_djbfft assert_array_almost_equal(y,y2) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j]) y: array([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j]) ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 380, in check_definition assert_array_almost_equal(fftn(x),direct_dftn(x)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([[[[ 6.00785656e+02 +0.j , -5.43773337e+00 +3.46229661j, -1.23088437e+01 -2.42889835j, ..., -1.26123816e+01-16.10100662j, -1.23088437e+01 +2.42889835j,... y: array([[[[ 1.56754048e+02+0.j , -1.02378379e+00-3.98450277j, -5.04603298e+00+0.77949078j, ..., 6.41083848e-01-6.21599229j, -5.04603298e+00-0.77949078j, -1.02378379e+00+3.98450277j],... 
====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 171, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.125+0.j , 0.25 +0.j , 0.375+0.j , 0.5 +0.125j, 0.125+0.j , 0.25 +0.j , 0.375+0.j , 0.5 +0.25j ]) y: array([ 2.5 +0.375j , 0.08838835+0.08838835j, -0.125 -0.5j , 0.08838835-0.08838835j, -0.5 -0.375j , -0.08838835-0.08838835j, -0.875 +0.5j , -0.08838835+0.08838835j]) ====================================================================== FAIL: check_djbfft (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 187, in check_djbfft assert_array_almost_equal(y,y2) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0. 
+0.j, 0.25+0.j, 0.5 +0.j, 0.75+0.j]) y: array([ 1.5+0.j , -0.5-0.5j, -0.5+0.j , -0.5+0.5j]) ====================================================================== FAIL: check_random_complex (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 195, in check_random_complex assert_array_almost_equal (ifft(fft(x)),x) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 9.42894405e-03+0.00342265j, 1.14753943e-02+0.01060178j, 1.29591488e-03+0.00189321j, 5.93220438e-03+0.00153937j, 9.05895435e-03+0.00339772j, 1.54679173e-03+0.00557983j,... y: array([ 0.60345242+0.21904973j, 0.73442523+0.67851408j, 0.08293855+0.12116574j, 0.37966108+0.09851944j, 0.57977308+0.21745381j, 0.09899467+0.35710892j,... 
====================================================================== FAIL: check_random_real (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 201, in check_random_real assert_array_almost_equal (ifft(fft(x)),x) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.50529514+0.j , -0.00891716+0.01194018j, -0.05414143-0.0252995j , 0.03278662+0.02075578j, 0.04225036+0.04154863j, -0.04497517+0.00314367j,... y: array([ 0.65301309, 0.59886448, 0.00535917, 0.42038218, 0.33614078, 0.25537539, 0.02969843, 0.47543031, 0.75088581, 0.77640457, 0.60916846, 0.44936911, 0.16806369, 0.67254209, 0.54249481,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 541, in check_definition assert_array_almost_equal(ifftn(x),direct_idftn(x)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([[[[ 4.99182623e-01 +0.00000000e+00j, 9.92310707e-05 -4.45232854e-04j, 1.38377527e-03 -1.81726474e-03j, ...,... 
y: array([[[[ 1.32547471e-01 +0.00000000e+00j, -6.49846621e-04 +3.22985157e-03j, -1.78142582e-05 -1.17333233e-03j, ...,... ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 303, in check_definition assert_array_almost_equal(y,ifft(x1)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 2.625 , -1.68566017, -0.375 , -1.18566017, 0.625 , 0.43566017, -0.375 , 0.93566017]) y: array([ 0.125+0.j , 0.25 +0.375j, 0.5 +0.125j, 0.25 +0.375j, 0.5 +0.j , 0.25 -0.375j, 0.5 -0.125j, 0.25 -0.375j]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (2.125+0j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot 
assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (2.125+0j) DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1569 tests in 851.074s FAILED (failures=22, errors=3) -- iCy-fLaME ------------------------------------------------ The body maybe wounded, but it is the mind that hurts. From trond.danielsen at gmail.com Fri Oct 20 12:41:38 2006 From: trond.danielsen at gmail.com (Trond Danielsen) Date: Fri, 20 Oct 2006 18:41:38 +0200 Subject: [SciPy-user] Building scipy on FC5 x86_64 In-Reply-To: References: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> <453895D0.5050908@iam.uni-stuttgart.de> Message-ID: <409676c70610200941w22d3d33ah6d13d5f648946a08@mail.gmail.com> On 10/20/06, Neal Becker wrote: > env CFLAGS="$RPM_OPT_FLAGS" ATLAS=%{_libdir} FFTW=%{_libdir} BLAS=%{_libdir} > LAPACK=%{_libdir} python setup.py config_fc --fcompiler=gnu95 build Thank you for taking the time to reply! I tried that command, but it still failed. Could you give a little more detailed instruction? Thanks in advance, -- Trond Danielsen From ndbecker2 at gmail.com Fri Oct 20 13:13:30 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 20 Oct 2006 13:13:30 -0400 Subject: [SciPy-user] Building scipy on FC5 x86_64 References: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> <453895D0.5050908@iam.uni-stuttgart.de> Message-ID: Neal Becker wrote: > Nils Wagner wrote: > >> Trond Danielsen wrote: >>> Hi! >>> >>> I am trying to build scipy on Fedora Core 5 (x86_64), but setup.py >>> cannot find the required libraries. How can I add /usr/lib64 to the >>> lib-path? >>> >>> I have followed the instructions on the website for installing on fc5, >>> so the required libs are installed. 
scipy version is 0.4.8 and numpy >>> is the latest from fedora extra. >>> >>> Best regards, >>> >> AFAIK you can add an entry in the file system_info.py in line 140 >> (numpy/numpy/distutils). >> >> Nils > env CFLAGS="$RPM_OPT_FLAGS" ATLAS=%{_libdir} FFTW=%{_libdir} > BLAS=%{_libdir} LAPACK=%{_libdir} python setup.py config_fc > --fcompiler=gnu95 build

This was pasted directly from an rpm spec file. Try:

ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64 python setup.py config_fc --fcompiler=gnu95 build

Make sure you have numpy, fftw2-devel, blas-devel, lapack-devel, gcc-gfortran, gcc-c++, libstdc++ and python-devel.

From ryanlists at gmail.com Sat Oct 21 14:11:03 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Sat, 21 Oct 2006 13:11:03 -0500
Subject: [SciPy-user] signal.lti __mul__ and feedback
Message-ID:

As I continue to try to make scipy.signal a replacement for the Matlab controls toolbox, I need to be able to multiply signal.lti instances by other instances and by constants (to model a controller transfer function times a plant transfer function), and I need to create a feedback command (i.e. closed-loop tf = Gc*Gp/(1+Gc*Gp) if Gc is the controller transfer function and Gp is the plant transfer function, assuming unity feedback). I wanted to make sure these things haven't already been done and to ask for any input before I get started. I was basically planning to define __mul__ and __rmul__ for multiplication by integers, floats, or other lti instances. When multiplying by other instances, I was going to just multiply the numerators and denominators separately and let poly1d handle that. The only problem with that is what to do when there should be a cancellation (i.e. (s+1)/(s+2)*(s+3)/(s+1) = [(s+3)*(s+1)]/[(s+2)*(s+1)]). Once an __add__ method is defined as well, feedback is easy (I guess I would need a __div__ and probably an __rdiv__ as well). I had started a separate controls module before I knew about the signal.lti stuff. It is attached.
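[Editor's note: the numerator/denominator multiplication and unity-feedback formula described above can be sketched with numpy's polynomial helpers. This is only a hedged illustration, not the attached controls.py; tf_mul and tf_feedback are made-up names, and no pole/zero cancellation is attempted.]

```python
import numpy as np

def tf_mul(num1, den1, num2, den2):
    # Multiply the numerators and denominators separately;
    # any common factors are deliberately NOT cancelled.
    return np.polymul(num1, num2), np.polymul(den1, den2)

def tf_feedback(num_g, den_g):
    # Unity feedback: G/(1+G) = num / (den + num)
    return np.atleast_1d(num_g), np.polyadd(den_g, num_g)

# (s+1)/(s+2) * (s+3)/(s+1) -> (s^2+4s+3)/(s^2+3s+2), uncancelled
num, den = tf_mul([1, 1], [1, 2], [1, 3], [1, 1])
```

The example reproduces the cancellation problem from the message: the (s+1) factor survives in both the numerator and the denominator.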
It would have to be separated from some of my other modules to make it more portable, but most of what I am talking about could be done by making my TransferFunction class derive from signal.lti (or by incorporating all of its methods into signal.lti). Any thoughts on this?

Thanks,

Ryan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: controls.py
Type: text/x-python
Size: 9442 bytes
Desc: not available
URL:

From peridot.faceted at gmail.com Sat Oct 21 15:34:36 2006
From: peridot.faceted at gmail.com (A. M. Archibald)
Date: Sat, 21 Oct 2006 15:34:36 -0400
Subject: [SciPy-user] signal.lti __mul__ and feedback
In-Reply-To: References: Message-ID:

On 21/10/06, Ryan Krauss wrote:
> As I continue to try to make scipy.signal a replacement for the Matlab
> controls toolbox, I need to be able to multiply signal.lti instances
> by other instances and by constants (...) The only problem
> with that is what to do when there should be a cancellation (i.e.
> (s+1)/(s+2)*(s+3)/(s+1) = [(s+3)*(s+1)]/[(s+2)*(s+1)]). Once an
> __add__ method is defined as well, feedback is easy (I guess I would
> need a __div__ and probably an __rdiv__ as well).

One could deal with common factors by implementing a polynomial GCD (using Euclid's method), but unfortunately in the presence of roundoff error it becomes rather a sticky subject.
When you do polynomial long division and get a remainder, when do you consider that remainder zero? What effect does this rounding have? Of course, if your coefficients are all integers, this problem does not arise, but I doubt that integer-only arrangements are of much use. Can you simply tolerate common factors?

A. M. Archibald

From hurak at control.felk.cvut.cz Sun Oct 22 12:23:53 2006
From: hurak at control.felk.cvut.cz (Zdeněk Hurák)
Date: Sun, 22 Oct 2006 18:23:53 +0200
Subject: [SciPy-user] signal.lti __mul__ and feedback
References: Message-ID:

Ryan Krauss wrote:
> As I continue to try to make scipy.signal a replacement for the Matlab
> controls toolbox,

Honestly, I feel that the word "replacement" is too strong...

> I need to be able to multiply signal.lti instances
> by other instances ...
[...]
> The only problem
> with that is what to do when there should be a cancellation (i.e.
> (s+1)/(s+2)*(s+3)/(s+1) = [(s+3)*(s+1)]/[(s+2)*(s+1)]).

Well, the problem of cancelling common factors when manipulating transfer functions is one of the fundamental ones in control computation. In Matlab, factors are not cancelled automatically except for the special case when both the pole and the zero are at EXACTLY 0. In all other instances, you have to tell Matlab explicitly that you want a "minimal realization" by calling the minreal function.

[...]

> I had started a separate controls module before I knew about the
> signal.lti stuff. It is attached. It would have to be separated from
> some of my other modules to make it more portable, but most of what I
> am talking about could be done by making my TransferFunction class
> derive from signal.lti (or incorporate all of its methods into
> signal.lti). Any thoughts on this?

By the way, does your module support MIMO (multi-input-multi-output) systems? Without this, I am afraid, it will be useless.
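[Editor's note: the Euclid's-method polynomial GCD discussed in this thread, with the rounding decision made explicit as a tolerance, might be sketched as follows. poly_gcd and the tolerance value are hypothetical assumptions, and choosing that tolerance is exactly the sticky point raised above.]

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclid's algorithm on coefficient arrays (highest degree first).

    A remainder counts as zero once its largest coefficient drops
    below `tol` -- picking that threshold is the hard part in the
    presence of roundoff error."""
    a = np.atleast_1d(a).astype(float)
    b = np.atleast_1d(b).astype(float)
    while b.size and np.max(np.abs(b)) > tol:
        _, r = np.polydiv(a, b)   # remainder has lower degree than b
        a, b = b, r
    return a / a[0]               # normalize to a monic polynomial

# gcd of (s+1)(s+2) and (s+1)(s+3) should come out as (s+1)
g = poly_gcd(np.polymul([1, 1], [1, 2]), np.polymul([1, 1], [1, 3]))
```

With exact integer inputs, as here, the remainder really does vanish; with measured coefficients the tolerance decides which near-cancellations are treated as exact.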
But starting a SISO (single-input-single-output) version without a good plan for how to extend it later to the MIMO case will surely bring you to dead ends. And the fact is that computing with matrices of transfer functions is numerically badly inferior to state-space methods. The only alternative capable of competing is the so-called polynomial matrix fraction approach, in which the natural extension of a rational function to MIMO systems is a fraction of two polynomial matrices. You might perhaps take some inspiration from the commercial Polynomial Toolbox for Matlab at http://www.polyx.com (the manual is available) or the free Polmat package for MuPAD at http://polmat.wz.cz. Should you stick with state-space methods, be sure not to develop your own solvers for Riccati and Lyapunov equations and the like, but to somehow use the SLICOT Fortran library, http://www.slicot.de.

Best,
Zdenek

> > Thanks,
> > Ryan

From jeff at taupro.com Sun Oct 22 14:20:19 2006
From: jeff at taupro.com (Jeff Rush)
Date: Sun, 22 Oct 2006 13:20:19 -0500
Subject: [SciPy-user] A Call for SciPy Talks at PyCon 2007
Message-ID: <453BB663.6070803@taupro.com>

We're rapidly approaching the deadline (Oct 31) for submitting a talk proposal for PyCon 2007, being held Feb 23-25 in Addison (Dallas), Texas. As co-chair of this year's conference I'd like to encourage the SciPy/NumPy community to submit a few talks. While we're in the early stages of keynote selection, it appears to be shaping up as a "Python Empowers Education" conference theme, with participants from the One Laptop Per Child project. And with one focus of SciPy being education, from the SciPy Developer Zone: "Today Python is used as a tool for research and higher education, but we would also like to see secondary-school students and their teachers choosing Python as an educational tool over commercial offerings." I think SciPy can be a significant player at PyCon.
We also run half-day paid tutorials taught by community members on Feb 22, and the course submission deadline for those is Nov 15. It'd be really cool to have a half-day hands-on introduction to SciPy, and the teachers get paid up to $1500 per half-day, depending upon the number of students.

For talk ideas check out our wiki page at: http://us.pycon.org/TX2007/TalkIdeas

And to submit a talk proposal: http://us.pycon.org/TX2007/CallForProposals

or a tutorial: http://us.pycon.org/TX2007/CallForTutorials

Also please spread the word about the conference and encourage those in the scientific, engineering and education communities to attend.

Jeff Rush
PyCon 2007 Co-Chair

From simon at arrowtheory.com Sun Oct 22 17:38:21 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Sun, 22 Oct 2006 14:38:21 -0700
Subject: [SciPy-user] PyDX - announcement
Message-ID: <20061022143821.3b085ca4.simon@arrowtheory.com>

Apologies if this is too off-topic for these lists, but I hope some people here find this interesting!

PyDX - first public announcement
================================

Overview
--------
PyDX is a package for working with calculus (differential geometry), arbitrary-precision arithmetic (using gmpy), and interval arithmetic. It provides:

* multivariate automatic differentiation (to arbitrary order)
* Tensor objects for computations in differential geometry
* Interval scalars (based on libMPFI) for calculating rigorous bounds on arithmetic operations (validated numerics)
* an arbitrary-order validated ODE solver

PyDX uses lazy computation techniques to greatly enhance the performance of the resulting functions. This code grew out of a research project at The Australian National University, Department of Physics, which involved computing bounds on geodesics in relativistic space-times.
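[Editor's note: PyDX itself offers arbitrary-order, multivariate, validated differentiation, which a short example cannot reproduce; the core forward-mode idea behind automatic differentiation can, however, be sketched with first-order dual numbers. This is a toy illustration, not PyDX's API.]

```python
class Dual:
    """First-order forward-mode AD: carry (value, derivative) together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = float(val), float(der)

    def _lift(self, other):
        # promote plain numbers to constants with zero derivative
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)  # product rule for the derivative part
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)       # seed the derivative dx/dx = 1
y = x * x + 2 * x + 1    # f(x) = x^2 + 2x + 1
# y.val is f(3) = 16.0 and y.der is f'(3) = 8.0
```

Every arithmetic operation propagates the derivative alongside the value, so no symbolic manipulation or finite differencing is needed; PyDX generalizes this to higher orders, many variables, and interval-valued coefficients.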
Documentation
-------------
http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html
http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/api/index.html

Subversion Repository
---------------------
http://gr.anu.edu.au/svn/people/sdburton/pydx/

Simon Burton
October 22 2006
The Australian National University

From vinicius.lobosco at gmail.com Sun Oct 22 21:26:00 2006
From: vinicius.lobosco at gmail.com (Vinicius Lobosco)
Date: Mon, 23 Oct 2006 03:26:00 +0200
Subject: [SciPy-user] Wrong plot with matplotlib
In-Reply-To: References: Message-ID:

Hi folks! I have a very strange problem. After some calculations, I try to plot a graphic with pylab. But it doesn't plot the right data. If I print the data to the screen (with print), they are all right, but not if I plot with pylab 0.83 on Mandriva. The same script on an Ubuntu machine with pylab 0.82 works fine. The code also works fine if I vectorise the __call__ of a class function. Has anyone had similar problems with pylab?

Thanks!
Vinicius

--
---------------------------------
Vinicius Lobosco, PhD
www.paperplat.com
Svärdlångsvägen 39
SE-120 60 Stockholm
+46 73 925 8476

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Sun Oct 22 21:32:16 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 22 Oct 2006 20:32:16 -0500
Subject: [SciPy-user] Wrong plot with matplotlib
In-Reply-To: References: Message-ID: <453C1BA0.6060209@gmail.com>

Vinicius Lobosco wrote:
> Hi folks!
>
> I have a very strange problem. After some calculations, I try to plot a
> graphic with pylab. But it doesn't plot the right data. If I print the
> data to the screen (with print), they are all right, but not if I plot
> with pylab 0.83 on Mandriva. The same script on an Ubuntu machine with
> pylab 0.82 works fine.
> The code also works fine if I vectorise the
> __call__ of a class function. Has anyone had similar problems with pylab?

You will need to ask on the matplotlib-users list.

https://lists.sourceforge.net/lists/listinfo/matplotlib-users

However, you will also need to provide more information. Please try to come up with a small example that displays the problem. Post the code and copy-and-paste any error messages that are printed.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From vinicius.lobosco at gmail.com Sun Oct 22 21:46:06 2006
From: vinicius.lobosco at gmail.com (Vinicius Lobosco)
Date: Mon, 23 Oct 2006 03:46:06 +0200
Subject: [SciPy-user] Wrong plot with matplotlib
In-Reply-To: <453C1BA0.6060209@gmail.com> References: <453C1BA0.6060209@gmail.com> Message-ID:

Dear Robert! Thanks for your prompt response. You're right, I should have given a better example, but I think it turned out to be a problem on my machine. I tried with xplt and got the same results. But the problem was solved magically :-D There was a scipy package available from urpmi that was not yet installed, although I already had scipy on my machine :-S When I installed it, it worked fine. I guess it was some kind of old-version scipy conflict. Maybe I had installed it from subversion before.

Thanks anyway. Best regards,
Vinicius

2006/10/23, Robert Kern :
> > Vinicius Lobosco wrote:
> > Hi folks!
> >
> > I have a very strange problem. After some calculations, I try to plot a
> > graphic with pylab. But it doesn't plot the right data. If I print the
> > data to the screen (with print), they are all right, but not if I plot
> > with pylab 0.83 on Mandriva. The same script on an Ubuntu machine with
> > pylab 0.82 works fine.
> > You will need to ask on the matplotlib-users list. > > https://lists.sourceforge.net/lists/listinfo/matplotlib-users > > However, you will also need to provide more information. Please try to > come up > with a small example that displays the problem. Post the code and > copy-and-paste > any error messages that are printed. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- --------------------------------- Vinicius Lobosco, PhD www.paperplat.com Svärdlångsvägen 39 SE-120 60 Stockholm +46 73 925 8476 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sadrejeb at gmx.ch Mon Oct 23 02:26:29 2006 From: sadrejeb at gmx.ch (Sadruddin Rejeb) Date: Mon, 23 Oct 2006 08:26:29 +0200 Subject: [SciPy-user] Python-2.5 Win32 binaries Message-ID: <1161584789.4908.10.camel@localhost.localdomain> Hello, Is there any Python-2.5 Windows binary package of scipy-0.5.1 planned to be released? Or will Python-2.5 binaries be introduced only with version 0.5.2? Thanks, Sad From trond.danielsen at gmail.com Mon Oct 23 03:57:09 2006 From: trond.danielsen at gmail.com (Trond Danielsen) Date: Mon, 23 Oct 2006 09:57:09 +0200 Subject: [SciPy-user] Building scipy on FC5 x86_64 In-Reply-To: References: <409676c70610200217g56e264d5t9f26fe644039e82b@mail.gmail.com> <453895D0.5050908@iam.uni-stuttgart.de> Message-ID: <409676c70610230057j10ef3f96y279ec2e18ab21275@mail.gmail.com> 2006/10/20, Neal Becker : > > This was pasted directly from an rpm spec file.
Try: > ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64 python setup.py config_fc --fcompiler=gnu95 build > > make sure you have numpy fftw2-devel, blas-devel, lapack-devel, gcc-gfortran, gcc-c++, libstdc++, python-devel > Thank you! That worked great. Maybe the instructions on the wiki should be updated? -- Trond Danielsen From vmas at carabos.com Mon Oct 23 04:57:37 2006 From: vmas at carabos.com (Vicent Mas (V+)) Date: Mon, 23 Oct 2006 10:57:37 +0200 Subject: [SciPy-user] image processing Message-ID: <200610231057.37151.vmas@carabos.com> Hello, I'm a newcomer to image processing and scipy. I'm trying to process a binary image in order to get the biggest connected component. In Matlab one can use bwareaopen to achieve this goal, but I don't know how to do it with scipy. I've been looking at the scipy.ndimage module with no luck. Could you help me, please? Thanks in advance. -- :: \ / Vicent Mas http://www.carabos.com 0;0 / \ Cárabos Coop. Enjoy Data V V " " From R.Springuel at umit.maine.edu Sat Oct 21 12:48:28 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Sat, 21 Oct 2006 12:48:28 -0400 Subject: [SciPy-user] Installation Problems on Mac OS 10.4 Message-ID: <453A4F5C.8040800@umit.maine.edu> Okay, I finally figured out the problem with the Fortran compiler, and was able to get Scipy to find and use gfortran on my system.
However, I'm now getting the following error: gcc: unrecognized option '-no-cpp-precomp' cc1: error: unrecognized command line option "-mno-fused-madd" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-Wno-long-double" gcc: unrecognized option '-no-cpp-precomp' cc1: error: unrecognized command line option "-mno-fused-madd" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-arch" cc1: error: unrecognized command line option "-Wno-long-double" error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -DUSE_VENDOR_BLAS=1 -c Lib/linsolve/SuperLU/SRC/clangs.c -o build/temp.macosx-10.3-fat-2.5/Lib/linsolve/SuperLU/SRC/clangs.o" failed with exit status 1 I get this error with both gcc 4.0 and 3.3. Does anyone know how to fix it? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From w.richert at gmx.net Tue Oct 24 04:12:06 2006 From: w.richert at gmx.net (Willi Richert) Date: Tue, 24 Oct 2006 10:12:06 +0200 Subject: [SciPy-user] Error with 0.5.1 tests Message-ID: <200610241012.06718.w.richert@gmx.net> Hi, I installed numpy-1.0rc3 and scipy. 
At the python shell I did scipy.test() and got: ====================================================================== ERROR: Solve: single precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 25, in check_solve_complex_without_umfpack x = linsolve.spsolve(a, b) File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 76, in spsolve return gssv(N, lastel, data, index0, index1, b, csc, permc_spec)[0] TypeError: array cannot be safely cast to required type ====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case self._check_level(k_label, expected, matdict[k]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level assert len(expected) == len(actual), "Different list lengths at %s" % label TypeError: len() of unsized object ====================================================================== FAIL: check_rvs (scipy.stats.tests.test_distributions.test_randint) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.4/site-packages/scipy/stats/tests/test_distributions.py", line 84, in check_rvs assert isinstance(val, numpy.ScalarType),`type(val)` AssertionError: ---------------------------------------------------------------------- Ran 1569 tests in 4.551s FAILED (failures=1, errors=2) Any hints? wr PS: Here is the output head of "python setup.py install": fft_opt_info: fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/lib Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib/atlas'] language = c include_dirs = ['/usr/include'] Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest ATLAS version 3.6.0 built by camm on Tue Jun 15 03:19:44 UTC 2004: UNAME : Linux intech131 2.4.20 #1 SMP Thu Jan 23 11:24:05 EST 2003 i686 GNU/Linux INSTFLG : MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 196608 F77 : /usr/bin/g77, version 
GNU Fortran (GCC) 3.3.5 (Debian 1:3.3.5-1) F77FLAGS : -O CC : /usr/bin/gcc, version gcc (GCC) 3.3.5 (Debian 1:3.3.5-1) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc, version gcc (GCC) 3.3.5 (Debian 1:3.3.5-1) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] ATLAS version 3.6.0 lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack,blas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/lib/atlas numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib/atlas'] language = f77 include_dirs = ['/usr/include'] customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy_distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -llapack -lblas -o _configtest ATLAS version 3.6.0 built by camm on Tue Jun 15 03:19:44 UTC 2004: UNAME : Linux intech131 2.4.20 #1 SMP Thu Jan 23 11:24:05 EST 2003 i686 GNU/Linux INSTFLG : MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 196608 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.5 (Debian 1:3.3.5-1) F77FLAGS : -O CC : /usr/bin/gcc, version gcc (GCC) 3.3.5 (Debian 1:3.3.5-1) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops 
MCC : /usr/bin/gcc, version gcc (GCC) 3.3.5 (Debian 1:3.3.5-1) MCCFLAGS : -fomit-frame-pointer -O success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'lapack', 'blas'] library_dirs = ['/usr/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] ATLAS version 3.6.0 ATLAS version 3.6.0 non-existing path in 'Lib/linsolve': 'tests' umfpack_info: libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'Lib/maxentropy': 'doc' Warning: Subpackage 'Lib' configuration returned as 'scipy' running install running build running config_fc running build_src ... From ctmedra at unizar.es Tue Oct 24 04:25:41 2006 From: ctmedra at unizar.es (Carlos Medrano) Date: Tue, 24 Oct 2006 08:25:41 +0000 (UTC) Subject: [SciPy-user] image processing References: <200610231057.37151.vmas@carabos.com> Message-ID: Vicent Mas (V+ carabos.com> writes: > > Hello, > > I'm a newcomer to process image and scipy. I'm trying to process a > binary image in order to get the biggest connected component. In matlab > one can use the bwareaopen to achieve this goal but I don't know how to > do it with scipy. I've been looking at scipy.ndimage module with no > luck. Could you help me, please? > > Thanks in advance. 
I don't know if this can help you, but check: OpenCV library (supposed to have a Python interface) http://opencvlibrary.sourceforge.net/Welcome?highlight=%28%28Welcome%29%29 and PIL (Python Imaging Library) http://www.pythonware.com/products/index.htm Carlos Medrano From vmas at carabos.com Tue Oct 24 05:54:57 2006 From: vmas at carabos.com (Vicent Mas (V+)) Date: Tue, 24 Oct 2006 11:54:57 +0200 Subject: [SciPy-user] image processing In-Reply-To: References: <200610231057.37151.vmas@carabos.com> Message-ID: <200610241154.57228.vmas@carabos.com> On Tuesday, 24 October 2006 at 10:25, Carlos Medrano wrote: > Vicent Mas (V+ carabos.com> writes: > > Hello, > > > > I'm a newcomer to image processing and scipy. I'm trying to process a > > binary image in order to get the biggest connected component. In > > Matlab one can use bwareaopen to achieve this goal but I don't > > know how to do it with scipy. I've been looking at scipy.ndimage > > module with no luck. Could you help me, please? > > > > Thanks in advance. > > I don't know if this can help you, but check: > > OpenCV library (supposed to have a Python interface) > > http://opencvlibrary.sourceforge.net/Welcome?highlight=%28%28Welcome% >29%29 > > and PIL (Python Imaging Library) > > http://www.pythonware.com/products/index.htm > > > Carlos Medrano > That is exactly what I'm doing. Thanks for your advice. -- :: \ / Vicent Mas http://www.carabos.com 0;0 / \ Cárabos Coop. Enjoy Data V V " " From gruben at bigpond.net.au Tue Oct 24 07:41:13 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 24 Oct 2006 21:41:13 +1000 Subject: [SciPy-user] image processing In-Reply-To: References: <200610231057.37151.vmas@carabos.com> Message-ID: <453DFBD9.6060304@bigpond.net.au> I haven't actually tried this, but I think you should take a look at scipy.ndimage.measurements.label followed by scipy.ndimage.measurements.find_objects and scipy.ndimage.measurements.maximum. These sound like they will do what you want. Gary R.
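A minimal sketch of the label-then-measure approach Gary describes, written against the flat scipy.ndimage namespace (the same functions also live under scipy.ndimage.measurements in the version discussed in this thread); the toy image and the use of ndimage.sum for per-label areas are illustrative assumptions, not code from the thread:

```python
import numpy as np
from scipy import ndimage

# Toy binary image with two blobs; the goal is to keep only the largest,
# which is roughly what Matlab's bwareaopen is used for here.
img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True   # small 2x2 blob, area 4
img[4:8, 4:8] = True   # large 4x4 blob, area 16

labels, nlabels = ndimage.label(img)                      # label connected components
areas = ndimage.sum(img, labels, range(1, nlabels + 1))   # pixel count per label
largest = labels == (np.argmax(areas) + 1)                # mask of the biggest component
```

Thresholding `areas` against a minimum size instead of taking the argmax would drop all small components at once, which is closer to bwareaopen's behaviour.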
Carlos Medrano wrote: > Vicent Mas (V+ carabos.com> writes: > >> Hello, >> >> I'm a newcomer to image processing and scipy. I'm trying to process a >> binary image in order to get the biggest connected component. In Matlab >> one can use bwareaopen to achieve this goal but I don't know how to >> do it with scipy. I've been looking at scipy.ndimage module with no >> luck. Could you help me, please? >> >> Thanks in advance. > > I don't know if this can help you, but check: > > OpenCV library (supposed to have a Python interface) > > http://opencvlibrary.sourceforge.net/Welcome?highlight=%28%28Welcome%29%29 > > and PIL (Python Imaging Library) > > http://www.pythonware.com/products/index.htm > > > Carlos Medrano From fredmfp at gmail.com Tue Oct 24 14:23:24 2006 From: fredmfp at gmail.com (fred) Date: Tue, 24 Oct 2006 20:23:24 +0200 Subject: [SciPy-user] wiki Message-ID: <453E5A1C.6040902@gmail.com> Hi list, Just two short questions: 1) How can I add a page? I want to add a Cookbook/MayaVi/InstallPythonStuffFromSource page. 2) How can I delete some png image files previously uploaded? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From cookedm at physics.mcmaster.ca Tue Oct 24 14:46:16 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 24 Oct 2006 14:46:16 -0400 Subject: [SciPy-user] Installation Problems on Mac OS 10.4 In-Reply-To: <453A4F5C.8040800@umit.maine.edu> References: <453A4F5C.8040800@umit.maine.edu> Message-ID: <703B54E1-70AA-4BB4-8B2F-C2D399E39654@physics.mcmaster.ca> On Oct 21, 2006, at 12:48 , R. Padraic Springuel wrote: > Okay, I finally figured out the problem with the Fortran compiler, and > was able to get Scipy to find and use gfortran on my system.
However, > I'm now getting the following error: > > gcc: unrecognized option '-no-cpp-precomp' > cc1: error: unrecognized command line option "-mno-fused-madd" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-Wno-long-double" > gcc: unrecognized option '-no-cpp-precomp' > cc1: error: unrecognized command line option "-mno-fused-madd" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-arch" > cc1: error: unrecognized command line option "-Wno-long-double" > error: Command "gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double > -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > -DUSE_VENDOR_BLAS=1 -c Lib/linsolve/SuperLU/SRC/clangs.c -o > build/temp.macosx-10.3-fat-2.5/Lib/linsolve/SuperLU/SRC/clangs.o" > failed > with exit status 1 > > I get this error with both gcc 4.0 and 3.3. Does anyone know how > to fix it? Are you sure you're using Apple's gcc? These look like the errors that a non-Apple gcc would spew (especially the -arch and -no-cpp-precomp). Check what gcc --version gives you. You can also specify which one to use by setting the environment variable CC (as in, 'export CC=/usr/bin/gcc-4.0'). -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Tue Oct 24 14:54:08 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Tue, 24 Oct 2006 14:54:08 -0400 Subject: [SciPy-user] [Numpy-discussion] PyDX - announcement In-Reply-To: <20061022143821.3b085ca4.simon@arrowtheory.com> References: <20061022143821.3b085ca4.simon@arrowtheory.com> Message-ID: <0294D4F3-4E20-4BCB-A53E-5D2FDB9895C9@physics.mcmaster.ca> On Oct 22, 2006, at 17:38 , Simon Burton wrote: > > Apologies if this is too off-topic for these lists, but > I hope some people here find this interesting! It looks very nifty. Interval arithmetic for rigorous bounds is something that should be used more often, I think. You should add it to the topical software list at http://www.scipy.org/Topical_Software. [and it's on topic, btw; announcements of Python numerical software are always welcome here]. > PyDX - first public announcement > ================================ > > Overview > -------- > > PyDX is a package for working with calculus (differential > geometry), > arbitrary precision arithmetic (using gmpy), and interval arithmetic. > > It provides: > * multivariate automatic differentiation (to arbitrary order) > * Tensor objects for computations in differential geometry > * Interval scalars (based on libMPFI) for calculating rigorous > bounds > on arithmetic operations (validated numerics). > * Arbitrary order validated ODE solver. > > PyDX uses lazy computation techniques to greatly enhance > performance of > the resulting functions. > > This code grew out of a research project at The Australian National > University, > Department of Physics, which involved computing bounds on geodesics in > relativistic space-times. -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From mforbes at phys.washington.edu Tue Oct 24 20:43:36 2006 From: mforbes at phys.washington.edu (Michael McNeil Forbes) Date: Tue, 24 Oct 2006 17:43:36 -0700 Subject: [SciPy-user] Subclassing array Message-ID: I have a question about the best way to subclass array types. I would like to create a subclass of the array types that allows me to name the entries of the array. Here is a sample of my desired usage: >>> from scipy import * >>> a = array([1.0,2.0]) >>> >>> class MyArray(a.__class__): >>> _names = ['x','y'] >>> ... (redefine __get__ etc. as needed) >>> >>> c = MyArray(x=2,y=3) >>> c.x, c.y (2, 3) etc. I want c to be an array with two elements, but essentially want to be able to change c.__class__ to be a modified class that can access elements as specified by c._names. Ideally, I would like this to work with any type that supports simple indexing, such as arrays, lists etc. but my first attempts seem to require redefining __new__ which varies for different types making the solution much less general. It seems like I should be able to get the desired behaviour with something like the following: def __new__(cls,**kwargs): self = deepcopy(a) self[:] = map(kwargs.get,cls._names) self.__class__ = cls return cls but this is not allowed because __class__ can only be set for heap types. Is there a standard and safe way to achieve this behaviour? Thanks, Michael. From strawman at astraw.com Tue Oct 24 23:03:28 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 24 Oct 2006 20:03:28 -0700 Subject: [SciPy-user] [JOB] hiring software engineer Message-ID: <453ED400.5020607@astraw.com> The lab I work in at Caltech is hiring a software engineer. I'd love to get someone with an interest in scientific computing with Python. 
If you might be interested, please check out our job posting: http://www.recruitingcenter.net/clients/CalTech/publicjobs/controller.cfm?jbaction=JobProfile&Job_Id=13370&esid=az Please pursue any formal inquiries through the URL above. -Andrew From bryan.cole at teraview.com Wed Oct 25 05:23:07 2006 From: bryan.cole at teraview.com (Bryan Cole) Date: Wed, 25 Oct 2006 10:23:07 +0100 Subject: [SciPy-user] [JOB] UK Posting: senior software engineer (C++ & Python) Message-ID: <1161768187.26636.14.camel@bryan.teraview.local> Teraview Ltd., UK is looking for a senior software engineer to lead the development of instrumentation software for our new range of spectrometer and imaging products. The formal job spec can be found at http://www.teraview.com/about/vacsoftware.html . Although the job-spec doesn't mention Python, we make extensive use of Python/Numeric/Scipy in our development, so a candidate with these skills would have a compelling advantage. However, experience with C++ and Windows programming is essential to maintain and extend our existing codebase. Anyone interested in this position should either send their CV directly to the address given in the posting, or email me directly for more information. cheers, Bryan -- Group Leader, Technical Development Group - Teraview Ltd. web: www.teraview.com From tom.denniston at alum.dartmouth.org Wed Oct 25 14:53:37 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Wed, 25 Oct 2006 13:53:37 -0500 Subject: [SciPy-user] lexsort in numpy crashing on strings Message-ID: Hi, I've been using the lexsort and it is really nice. Much simpler and faster than the implementation I had before. The only problem is it seems to crash on strings. Do I just need to upgrade numpy or is it a current bug? 
In [2]: import numpy In [3]: numpy.array(['a', 'b', 'c']) Out[3]: array([a, b, c], dtype='|S1') In [4]: strArray = numpy.array(['a', 'b', 'c']) In [5]: intArray = numpy.array([1,2,3]) In [6]: numpy.__version__ Out[6]: '1.0b5' In [7]: numpy.lexsort((intArray,)) Out[7]: array([0, 1, 2]) In [8]: numpy.lexsort((intArray,intArray)) Out[8]: array([0, 1, 2]) In [9]: numpy.lexsort((strArray,intArray)) Segmentation fault (core dumped) -------------- next part -------------- An HTML attachment was scrubbed... URL: From R.Springuel at umit.maine.edu Wed Oct 25 15:17:42 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 25 Oct 2006 15:17:42 -0400 Subject: [SciPy-user] Installation problems on Mac OS X Message-ID: <453FB856.4060900@umit.maine.edu> gcc --version yields the following: i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5363) Copyright (C) 2005 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I tried setting the environment variable as suggested and still get the following very similar error: g++: unrecognized option '-no-cpp-precomp' cc1plus: error: unrecognized command line option "-mno-fused-madd" cc1plus: error: unrecognized command line option "-arch" cc1plus: error: unrecognized command line option "-arch" cc1plus: error: unrecognized command line option "-Wno-long-double" g++: unrecognized option '-no-cpp-precomp' cc1plus: error: unrecognized command line option "-mno-fused-madd" cc1plus: error: unrecognized command line option "-arch" cc1plus: error: unrecognized command line option "-arch" cc1plus: error: unrecognized command line option "-Wno-long-double" error: Command "g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c Lib/cluster/src/vq_wrap.cpp -o build/temp.macosx-10.3-fat-2.5/Lib/cluster/src/vq_wrap.o" failed with exit status 1 It's different at least, so that means I'm making progress, right? -- R. 
Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: Wednesday 2-3pm From coughlan at ski.org Wed Oct 25 15:39:40 2006 From: coughlan at ski.org (James Coughlan) Date: Wed, 25 Oct 2006 12:39:40 -0700 Subject: [SciPy-user] can't get weave to work Message-ID: <453FBD7C.2030609@ski.org> Hello, When I try to run a simple weave example (main.py): from scipy import weave a=1 weave.inline('printf("%d",a);',['a'],compiler='gcc') it gives me this error message: " CompileError: error: Command "g++ -mno-cygwin -mdll -O2 -w -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\james\locals~1\temp\James\python24_intermediate\compiler_ad9404e4505a343c45a7cd09360558c2\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 2 WARNING: Failure executing file: " If I instead specify compiler='msvc' I get a different error message: " DistutilsPlatformError: The .NET Framework SDK needs to be installed before building extensions for Python. WARNING: Failure executing file: " When I ran weave.test() I got this report: "Ran 132 tests in 3.445s", but it also produced some errors, e.g. "specified build_dir '_bad_path_' does not exist or is not writable". I am running Python Enthought Edition -- Python 2.4.3 for Windows Enhanced Python Distribution, which includes SciPy version 0.5.0.2033. Weave *used* to work for me when I was running Python 2.3. Can anybody give me any suggestions, or suggest someone who might be able to help me?
Thanks, James -- ------------------------------------------------------- James Coughlan, Ph.D., Associate Scientist Smith-Kettlewell Eye Research Institute Email: coughlan at ski.org URL: http://www.ski.org/Rehab/Coughlan_lab/ Phone: 415-345-2146 Fax: 415-345-8455 ------------------------------------------------------- From robert.kern at gmail.com Wed Oct 25 15:28:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Oct 2006 14:28:13 -0500 Subject: [SciPy-user] lexsort in numpy crashing on strings In-Reply-To: References: Message-ID: Tom Denniston wrote: > Hi, I've been using the lexsort and it is really nice. Much simpler and > faster than the implementation I had before. The only problem is it > seems to crash on strings. > > Do I just need to upgrade numpy or is it a current bug? Upgrade numpy. I believe this bug has been fixed. http://projects.scipy.org/scipy/numpy/ticket/298 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Wed Oct 25 16:36:26 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 25 Oct 2006 22:36:26 +0200 Subject: [SciPy-user] wiki In-Reply-To: <453E5A1C.6040902@gmail.com> References: <453E5A1C.6040902@gmail.com> Message-ID: <20061025203626.GG5588@clipper.ens.fr> On Tue, Oct 24, 2006 at 08:23:24PM +0200, fred wrote: > 1) How can I add a page ? > I want to add Cookbook/MayaVi/InstallPythonStuffFromSource page. Just create a link to this page from another one. Then you can click on this link, and edit the blank page. > 2) How can I delete some png image files previously uploaded ? You cannot. 
Gaël From fredmfp at gmail.com Wed Oct 25 17:27:10 2006 From: fredmfp at gmail.com (fred) Date: Wed, 25 Oct 2006 23:27:10 +0200 Subject: [SciPy-user] wiki In-Reply-To: <20061025203626.GG5588@clipper.ens.fr> References: <453E5A1C.6040902@gmail.com> <20061025203626.GG5588@clipper.ens.fr> Message-ID: <453FD6AE.2050801@gmail.com> Gael Varoquaux wrote: >On Tue, Oct 24, 2006 at 08:23:24PM +0200, fred wrote: > > >>1) How can I add a page ? >>I want to add Cookbook/MayaVi/InstallPythonStuffFromSource page. >> >> > >Just create a link to this page from another one. Then you can click on >this link, and edit the blank page. > Ok, fine. I was missing the second ":" in my link. Sorry. >>2) How can I delete some png image files previously uploaded ? >> >> > >You cannot. > > I only want to update a png file I have uploaded. Then why is there a "delete" item beside images on the info page? Cheers, -- http://scipy.org/FredericPetit From coughlan at ski.org Wed Oct 25 18:05:12 2006 From: coughlan at ski.org (James Coughlan) Date: Wed, 25 Oct 2006 15:05:12 -0700 Subject: [SciPy-user] can't get weave to work (apologies for resend) Message-ID: <453FDF98.2020708@ski.org> Hello, When I try to run a simple weave example (main.py): from scipy import weave a=1 weave.inline('printf("%d",a);',['a'],compiler='gcc') it gives me this error message: " CompileError: error: Command "g++ -mno-cygwin -mdll -O2 -w -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\james\locals~1\temp\James\python24_intermediate\compiler_ad9404e4505a343c45a7cd09360558c2\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 2 WARNING: Failure executing file: " If I instead specify compiler='msvc' I get a different error message: " DistutilsPlatformError: The .NET
Framework SDK needs to be installed before building extensions for Python.
WARNING: Failure executing file:
"
When I ran weave.test() I get this report: "Ran 132 tests in 3.445s", but it also produced some errors, e.g. "specified build_dir '_bad_path_' does not exist or is not writable". I am running Python Enthought Edition -- Python 2.4.3 for Windows Enhanced Python Distribution, which includes SciPy version 0.5.0.2033. Weave *used* to work for me when I was running Python 2.3. Can anybody give me any suggestions, or point me to someone who might be able to help me? Thanks, James -- ------------------------------------------------------- James Coughlan, Ph.D., Associate Scientist Smith-Kettlewell Eye Research Institute Email: coughlan at ski.org URL: http://www.ski.org/Rehab/Coughlan_lab/ Phone: 415-345-2146 Fax: 415-345-8455 ------------------------------------------------------- From scipy at mspacek.mm.st Thu Oct 26 02:43:24 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Wed, 25 Oct 2006 23:43:24 -0700 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup? Message-ID: <4540590C.5090905@mspacek.mm.st> Hello, This is mostly directed to Ed Schofield, but maybe someone else might be able to help as well. I'm trying to use the scipy.maxentropy model but I've run into a snag, and I suspect I simply haven't set it up correctly for my situation. I'd like to find the maximum entropy distribution of an Ising model of magnetic spins (in my case, the spins are actually activity states of neurons in a cortical network). Spins can either be down or up (-1 or 1 respectively). Spins can interact with each other in a pairwise way. From what I understand, the Ising model is this:

P(s1, s2, ..., sN) = 1/Z * exp( sum_i(hi*si) + 0.5*sum_{i!=j}(Jij*si*sj) )

The si are the spins. I want to model roughly N=10 of them. The hi and Jij are the Lagrange multipliers.
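[Editor's note: for N around 10, the distribution above is small enough to enumerate exhaustively. A toy sketch with made-up hi and Jij values (illustration of the formula only, not the poster's scipy.maxentropy setup):]

```python
import itertools
import numpy as np

N = 4                                  # toy size; the post targets N = 10
rng = np.random.RandomState(0)
h = rng.randn(N)                       # made-up local fields hi
J = rng.randn(N, N)
J = (J + J.T) / 2.0                    # symmetric made-up couplings Jij
np.fill_diagonal(J, 0.0)

weights = []
for s in itertools.product([-1, 1], repeat=N):   # all 2**N spin states
    s = np.array(s, dtype=float)
    weights.append(np.exp(np.dot(h, s) + 0.5 * np.dot(s, np.dot(J, s))))
weights = np.array(weights)
Z = weights.sum()                      # partition function
P = weights / Z                        # probability of every spin state
```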
The hi are the local magnetic fields acting on each spin, and the Jij are the degree to which each pair of spins interacts. Z is the normalizing partition function. I'd like to be able to provide the average value for each spin in my system (a list of 10 floats in the range (-1, 1)), as well as the averages of all possible pairwise products of spins (a list of 10 choose 2 == 45 floats, also in the range (-1, 1)). Then, after running (whichever) maxentropy algorithm and fitting it to the provided data, I'd like to be able to pull out the probabilities of each and every possible state of the population of spins (say [-1, -1, 1, ...] or [1, -1, -1, ...]). I used scipy/maxentropy/examples/bergerexample.py to get started. I've got the following code:

#-----------
import neuropy as np
from scipy import maxentropy

def f1(x):
    return x==1

class Ising(object):
    """Ising maximum entropy model"""
    def __init__(self, means, pairmeans, algorithm='CG'):
        nbits = len(means)
        samplespace = [-1, 1] # non spiking or spiking
        f1s = [ f1 ]*nbits

        # Return True if they're either both spiking or both non-spiking
        # (correlated), otherwise, return False.
        # NB: i and j are bound as default arguments so that each lambda
        # keeps its own pair, instead of all of them seeing the final
        # loop values at call time.
        f2s = [ lambda x, i=i, j=j: f1s[i](x) == f1s[j](x) \
                for i in range(0, nbits) for j in range(i+1, nbits) ]
        f = np.concatenate((f1s, f2s))
        self.model = maxentropy.model(f, samplespace)
        self.model.verbose = True

        # Now set the desired feature expectations
        means = np.asarray(means)
        pairmeans = np.asarray(pairmeans)
        pairmeans /= 2.0 # add the one half in front of each coefficient
        K = np.concatenate((means, pairmeans))

        # Fit the model
        self.model.fit(K, algorithm=algorithm)

        # Output the distribution
        print "\nFitted model parameters are:\n" \
              + str(self.model.params)
        print "\nFitted distribution is:"
        p = self.model.probdist()
        for j in range(len(self.model.samplespace)):
            x = self.model.samplespace[j]
            print '\tx:%s, p(x):%s' % (x, p[j])

i = Ising(means, pairmeans)
#--------------

means is a list of 10 floats that mostly hover around -0.8 to -0.5.
pairmeans is a list of 45 floats that are mostly around 0.4 to 0.9. This runs, but gives me a DivergenceError:

Grad eval #0
norm of gradient = 6.02170024163
Function eval # 0
dual is 0.69314718056
Function eval # 1
dual is -29.5692018412
Grad eval #1
norm of gradient = 5.03761439516
Function eval # 2
dual is -147.846016909
Grad eval #2
norm of gradient = 5.03761183138
Function eval # 3
dual is -620.953271018
Grad eval #3
norm of gradient = 5.03761183138
Function eval # 4
dual is -1478.46016909
Grad eval #4
norm of gradient = 5.03761183138
Iteration # 0
Function eval # 5
dual is -1478.46016909
Traceback (most recent call last):
  File "C:\bin\Python24\lib\site-packages\scipy\maxentropy\maxentropy.py", line 221, in fit
    callback=callback)
  File "C:\bin\Python24\Lib\site-packages\scipy\optimize\optimize.py", line 811, in fmin_cg
    callback(xk)
  File "C:\bin\Python24\lib\site-packages\scipy\maxentropy\maxentropy.py", line 397, in log
    raise DivergenceError, "dual is below the threshold 'mindual'" \
DivergenceError: "dual is below the threshold 'mindual' and may be diverging to -inf. Fix the constraints or lower the threshold!"

I tried changing the model's mindual attrib from -100 to something ridiculous like -100000, but that didn't change much, just took more iterations before bombing out. I've also tried some of the algorithms other than 'CG', without any luck. I don't know much about maximum entropy modelling, especially the actual algorithms. I only have a general idea of what it's supposed to do, so the above code is really just a shot in the dark. I suspect the problem is in properly defining both the means and the pairwise means for the model. Or it may have something to do with continuous vs. discrete values. Any suggestions? Thanks, Martin P.S.
For anyone interested, this is related to the recent Nature paper "Weak pairwise correlations imply strongly correlated network states in a neural population" by Schneidman et al: http://www.nature.com/nature/journal/v440/n7087/abs/nature04701.html From fullung at gmail.com Thu Oct 26 03:50:24 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 26 Oct 2006 09:50:24 +0200 Subject: [SciPy-user] Subclassing array In-Reply-To: Message-ID: Hey Michael Don't record arrays already solve your problem? See the following page on the wiki for some examples: http://www.scipy.org/RecordArrays Cheers, Albert
> -----Original Message-----
> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org]
> On Behalf Of Michael McNeil Forbes
> Sent: 25 October 2006 02:44
> To: scipy-user at scipy.org
> Subject: [SciPy-user] Subclassing array
>
> I have a question about the best way to subclass array types.
>
> I would like to create a subclass of the array types that allows me to
> name the entries of the array. Here is a sample of my desired usage:
>
> >>> from scipy import *
> >>> a = array([1.0,2.0])
> >>>
> >>> class MyArray(a.__class__):
> >>> _names = ['x','y']
> >>> ... (redefine __get__ etc. as needed)
> >>>
> >>> c = MyArray(x=2,y=3)
> >>> c.x, c.y
> (2, 3)
From davidlinke at tiscali.de Thu Oct 26 14:24:56 2006 From: davidlinke at tiscali.de (David Linke) Date: Thu, 26 Oct 2006 20:24:56 +0200 Subject: [SciPy-user] wiki In-Reply-To: <453FD6AE.2050801@gmail.com> References: <453E5A1C.6040902@gmail.com> <20061025203626.GG5588@clipper.ens.fr> <453FD6AE.2050801@gmail.com> Message-ID: <4540FD78.4000303@tiscali.de> fred wrote:
> Gael Varoquaux a écrit :
>>On Tue, Oct 24, 2006 at 08:23:24PM +0200, fred wrote:
>>>1) How can I add a page ?
>>>I want to add Cookbook/MayaVi/InstallPythonStuffFromSource page.
>>
>>Just create a link to this page from another one. Then you can click on
>>this link, and edit the blank page.
>
> Ok, fine.
>
> I was missing the second ":" in my link.
> Sorry.
This ":" syntax is an experimental syntax in MoinMoin. I am wondering why it is used on the scipy wiki so often. The standard link to a page with a non-wikiname is ["Wikipage"] or [Self:Wikipage Other name for page]. The : notation is a replacement for the latter. Yet another way to create a page is to go to http://scipy.org/FindPage and enter the name of the new page in the field with the "Go To Page" button (and click the button ;-)).
>>>2) How can I delete some png image files previously uploaded ?
>>
>>You cannot.
>
> I only want to upgrade some png file I have uploaded.
> Then, why is there a "delete" item beside images in info page ?
Ask someone listed on http://scipy.org/EditorsGroup to add your name to that page. Regards, David From fredmfp at gmail.com Thu Oct 26 16:47:27 2006 From: fredmfp at gmail.com (fred) Date: Thu, 26 Oct 2006 22:47:27 +0200 Subject: [SciPy-user] wiki In-Reply-To: <4540FD78.4000303@tiscali.de> References: <453E5A1C.6040902@gmail.com> <20061025203626.GG5588@clipper.ens.fr> <453FD6AE.2050801@gmail.com> <4540FD78.4000303@tiscali.de> Message-ID: <45411EDF.9080905@gmail.com> David Linke a écrit : [snip]
>Ask someone listed on http://scipy.org/EditorsGroup to add your name to
>that page.
Thanks again, David !
-- http://scipy.org/FredericPetit From coughlan at ski.org Thu Oct 26 18:08:31 2006 From: coughlan at ski.org (James Coughlan) Date: Thu, 26 Oct 2006 15:08:31 -0700 Subject: [SciPy-user] can't get weave to work (apologies for resend) Message-ID: <454131DF.7080405@ski.org> Hello, When I try to run a simple weave example (main.py):

from scipy import weave
a=1
weave.inline('printf("%d",a);',['a'],compiler='gcc')

it gives me this error message:
"
CompileError: error: Command "g++ -mno-cygwin -mdll -O2 -w -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\james\locals~1\temp\James\python24_intermediate\compiler_ad9404e4505a343c45a7cd09360558c2\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 2
WARNING: Failure executing file:
"
If I instead specify compiler='msvc' I get a different error message:
"
DistutilsPlatformError: The .NET Framework SDK needs to be installed before building extensions for Python.
WARNING: Failure executing file:
"
When I ran weave.test() I get this report: "Ran 132 tests in 3.445s", but it also produced some errors, e.g. "specified build_dir '_bad_path_' does not exist or is not writable". I am running Python Enthought Edition -- Python 2.4.3 for Windows Enhanced Python Distribution, which includes SciPy version 0.5.0.2033. Weave *used* to work for me when I was running Python 2.3. Can anybody give me any suggestions, or point me to someone who might be able to help me?
Thanks, James -- ------------------------------------------------------- James Coughlan, Ph.D., Associate Scientist Smith-Kettlewell Eye Research Institute Email: coughlan at ski.org URL: http://www.ski.org/Rehab/Coughlan_lab/ Phone: 415-345-2146 Fax: 415-345-8455 ------------------------------------------------------- From oliphant at ee.byu.edu Thu Oct 26 19:28:39 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 26 Oct 2006 17:28:39 -0600 Subject: [SciPy-user] can't get weave to work (apologies for resend) In-Reply-To: <454131DF.7080405@ski.org> References: <454131DF.7080405@ski.org> Message-ID: <454144A7.4040100@ee.byu.edu> James Coughlan wrote: >Hello, > >When I try to run a simple weave example (main.py): > >from scipy import weave >a=1 >weave.inline('printf("%d",a);',['a'],compiler='gcc') > >it gives me this error message: >" >CompileError: error: Command "g++ -mno-cygwin -mdll -O2 -w >-IC:\Python24\lib\site-packages\scipy\wea >ve -IC:\Python24\lib\site-packages\scipy\weave\scxx >-IC:\Python24\lib\site-packages\numpy\core\inclu >de -IC:\Python24\include -IC:\Python24\PC -c >C:\Python24\lib\site-packages\scipy\weave\scxx\weave_im >p.cpp -o >c:\docume~1\james\locals~1\temp\James\python24_intermediate\compiler_ad9404e4505a343c45a7cd > > >09360558c2\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" >failed with exit status >2 >WARNING: Failure executing file: >" > > It looks like your compiler is not working. Go to the temporary directory where weave has stored the source code it is compiling and try to compile it at the command line. Perhaps we are not seeing all the error messages that would identify what is wrong with your set-up. From tim.leslie at gmail.com Fri Oct 27 01:45:38 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Fri, 27 Oct 2006 15:45:38 +1000 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup? 
In-Reply-To: <4540590C.5090905@mspacek.mm.st> References: <4540590C.5090905@mspacek.mm.st> Message-ID: On 10/26/06, Martin Spacek wrote:
> Hello,
>
> This is mostly directed to Ed Schofield, but maybe someone else might be
> able to help as well.
>
> I'm trying to use the scipy.maxentropy model but I've run into a snag,
> and I suspect I simply haven't set it up correctly for my situation.
>
> I'd like to find the maximum entropy distribution of an Ising model of
> magnetic spins (in my case, the spins are actually activity states of
> neurons in a cortical network). Spins can either be down or up (-1 or 1
> respectively). Spins can interact with each other in a pairwise way.
>
> From what I understand, the Ising model is this:
>
> P(s1, s2, ..., sN) = 1/Z * exp( sum_i(hi*si) + 0.5*sum_{i!=j}(Jij*si*sj) )
>
> The si are the spins. I want to model roughly N=10 of them. The hi and
> Jij are the Lagrange multipliers. The hi are the local magnetic fields
> acting on each spin, and the Jij are the degree in which each pair of
> spins interacts. Z is the normalizing partition function.
>
I could be wrong here, but doesn't the Ising model only consider nearest neighbour interactions? Would changing your setup to only consider these interactions make things work? Tim
> I'd like to be able to provide the average value for each spin in my
> system (a list of 10 floats in the range (-1, 1)), as well as the
> averages of all possible pairwise products of spins (a list of 10 choose
> 2 == 45 floats, also in the range (-1, 1)). Then, after running
> (whichever) maxentropy algorithm and fitting it to the provided data,
> I'd like to be able to pull out the probabilities of each and every
> possible state of the population of spins (say [-1, -1, 1, ...] or [1,
> -1, -1, ...]).
>
> I used scipy/maxentropy/examples/bergerexample.py to get started.
I've > got the following code: > > #----------- > import neuropy as np > from scipy import maxentropy > > def f1(x): > return x==1 > > class Ising(object): > """Ising maximum entropy model""" > def __init__(self, means, pairmeans, algorithm='CG'): > nbits = len(means) > samplespace = [-1, 1] # non spiking or spiking > f1s = [ f1 ]*nbits > > # Return True if they're either both spiking or both non-spiking > # (correlated), otherwise, return False > f2s = [ lambda x: f1s[i](x) == f1s[j](x) \ > for i in range(0, nbits) for j in range(i+1, nbits) ] > f = np.concatenate((f1s, f2s)) > self.model = maxentropy.model(f, samplespace) > self.model.verbose = True > > # Now set the desired feature expectations > means = np.asarray(means) > pairmeans = np.asarray(pairmeans) > pairmeans /= 2.0 # add the one half in front of each coefficient > K = np.concatenate((means, pairmeans)) > > # Fit the model > self.model.fit(K, algorithm=algorithm) > > # Output the distribution > print "\nFitted model parameters are:\n" \ > + str(self.model.params) > print "\nFitted distribution is:" > p = self.model.probdist() > for j in range(len(self.model.samplespace)): > x = self.model.samplespace[j] > print '\tx:%s, p(x):%s' % (x, p[j]) > > i = Ising(means, pairmeans) > > #-------------- > means is a list of 10 floats that mostly hover around -0.8 to -0.5. > pairmeans is a list of 45 floats that are mostly around 0.4 to 0.9. 
This > runs, but gives me a DivergenceError: > > Grad eval #0 > norm of gradient = 6.02170024163 > Function eval # 0 > dual is 0.69314718056 > Function eval # 1 > dual is -29.5692018412 > Grad eval #1 > norm of gradient = 5.03761439516 > Function eval # 2 > dual is -147.846016909 > Grad eval #2 > norm of gradient = 5.03761183138 > Function eval # 3 > dual is -620.953271018 > Grad eval #3 > norm of gradient = 5.03761183138 > Function eval # 4 > dual is -1478.46016909 > Grad eval #4 > norm of gradient = 5.03761183138 > Iteration # 0 > Function eval # 5 > dual is -1478.46016909 > Traceback (most recent call last): > > File > "C:\bin\Python24\lib\site-packages\scipy\maxentropy\maxentropy.py", line > 221, in fit > callback=callback) > File "C:\bin\Python24\Lib\site-packages\scipy\optimize\optimize.py", > line 811, in fmin_cg > callback(xk) > File > "C:\bin\Python24\lib\site-packages\scipy\maxentropy\maxentropy.py", line > 397, in log > raise DivergenceError, "dual is below the threshold 'mindual'" \ > DivergenceError: "dual is below the threshold 'mindual' and may be > diverging to -inf. Fix the constraints or lower the threshold!" > > I tried changing the model's mindual attrib from -100 to something > ridiculous like -100000, but that didn't change much, just took more > iterations before bombing out. I've also tried some of the algorithms > other than 'CG', without any luck. > > I don't know much about maximum entropy modelling, especially the actual > algorithms. I only have a general idea of what it's supposed to do, so > the above code is really just a shot in the dark. I suspect the problem > is in properly defining both the means and the pairwise means for the > model. Or it may have something to do with continuous vs. discrete > values. Any suggestions? > > Thanks, > > Martin > > > P.S. 
For anyone interested, this is related to the recent Nature paper
> "Weak pairwise correlations imply strongly correlated network states in a
> neural population" by Schneidman et al:
>
> http://www.nature.com/nature/journal/v440/n7087/abs/nature04701.html
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
From scipy at mspacek.mm.st Fri Oct 27 04:51:36 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Fri, 27 Oct 2006 01:51:36 -0700 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup? In-Reply-To: References: <4540590C.5090905@mspacek.mm.st> Message-ID: <4541C898.3040109@mspacek.mm.st> Tim Leslie wrote:
> I could be wrong here, but doesn't the Ising model only consider
> nearest neighbour interactions? Would changing your setup to only
> consider these interactions make things work?
Yeah, that's what I've read as well, so I'm a bit confused over why the paper I mentioned calls this an Ising model. What we're doing is considering interactions among all pairs, although most of those are very weak. Anyway, what I need is to run maxentropy on this system:

P(s1, s2, ..., sN) = 1/Z * exp( sum_i(hi*si) + 0.5*sum_{i!=j}(Jij*si*sj) )

regardless of whether it's Ising or not. I think I can see how to do it in scipy.maxentropy if it were simply:

P(s1, s2, ..., sN) = 1/Z * exp( sum_i(hi*si) )

But getting it to work with the pairwise terms is just guesswork on my part... Martin From david at ar.media.kyoto-u.ac.jp Fri Oct 27 05:46:03 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Oct 2006 18:46:03 +0900 Subject: [SciPy-user] pyaudio, a module to make noise from numpy arrays Message-ID: <4541D55B.9090505@ar.media.kyoto-u.ac.jp> Hi, I announce the first release of pyaudio, a module to make noise from numpy arrays (read, write and play audio files with numpy arrays).
* WHAT FOR ?: The goal is to give to a numpy/scipy environment some basic audio IO facilities (a la sound, wavread, wavwrite of matlab). With pyaudio, you should be able to read and write most common audio files from and to numpy arrays. The underlying IO operations are done using libsndfile from Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/), so he is the one who did the hard work. As libsndfile has support for a vast number of audio files (including wav, aiff, but also htk, ircam, and flac, an open source lossless codec), pyaudio enables the import from and export to a fairly large number of file formats. There is also a really crude player, which uses tempfile to play audio, and which only works for linux-alsa anyway. I intend to add better support at least for linux, and for other platforms if this does not involve too much hassle. So basically, if you are lucky enough to use a recent linux system, pyaudio already gives you the equivalent of wavread, wavwrite and sound. * DOWNLOAD: http://www.ar.media.kyoto-u.ac.jp/members/david/pyaudio.tar.gz * INSTALLATION INSTRUCTIONS: Just untar the package and drop it into scipy/Lib/sandbox, and add the two following lines to scipy/Lib/sandbox/setup.py:

# Package to make some noise using numpy
config.add_subpackage('pyaudio')

(if libsndfile.so is not in /usr/lib, a fortiori if you are a windows user, you should also set the right location for libsndfile in pyaudio/pysndfile.py, at the line _snd.cdll.LoadLibrary('/usr/lib/libsndfile.so') ) * EXAMPLE USAGE == Reading example ==

# Reading from '/home/david/blop.flac'
from scipy.sandbox.pyaudio import sndfile
a = sndfile('/home/david/blop.flac')
print a
tmp = a.read_frames_float(1024)

--> Prints:
File : /home/david/blop.flac
Sample rate : 44100
Channels : 2
Frames : 9979776
Format : 0x00170002
Sections : 1
Seekable : True
Duration : 00:03:46.298

And put into tmp the 1024 first frames (a frame is the equivalent of a sample, but taking into account the number of
channels: so 1024 frames gives you 2048 samples here). == Writing example ==

# Writing to a wavfile:
from scipy.sandbox.pyaudio import sndfile
import numpy as N

noise = N.random.randn((44100))
a = sndfile('/home/david/blop.flac', sfm['SFM_WRITE'], sf_format['SF_FORMAT_WAV'] | sf_format['SF_FORMAT_PCM16'], 1, 44100)
a.write_frames(noise, 44100)
a.close()

-> should give you a lossless compressed white noise ! This is really a first release, not really tested, not much documentation, I can just say it works for me. I haven't found a good way to emulate enumerations, which libsndfile uses a lot, so I am using dictionaries generated from the library C header to get a relation enum label <=> value. If someone has a better idea, I am open to suggestions ! Cheers, David From coughlan at ski.org Fri Oct 27 16:11:39 2006 From: coughlan at ski.org (James Coughlan) Date: Fri, 27 Oct 2006 13:11:39 -0700 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup? In-Reply-To: <4540590C.5090905@mspacek.mm.st> References: <4540590C.5090905@mspacek.mm.st> Message-ID: <454267FB.1060806@ski.org> Hi, I've done some max. ent. modeling, although I'm unfamiliar with the scipy maxentropy package. A few points: 1.) If N=10 spins suffices, then there are only 2^10=1024 possible configurations of your model, so you can calculate Z and thus the average spin <si> (and spin product <si*sj>) values exactly by hand, and completely bypass the maxentropy package. On the other hand, maybe scipy maxentropy can save you a lot of work! 2.) If you are doing max. ent. modeling, then you are given *empirical* (measured) values of <si>_{emp} and <si*sj>_{emp} and are trying to find model parameters hi and Jij to match these values, i.e. <si> = <si>_{emp} and <si*sj> = <si*sj>_{emp}, where the LHS's are the averages w.r.t. the model. You can solve for the model parameters by iterating these gradient descent equations:

hi_{new} = hi_{old} + K * ( <si> - <si>_{emp} )
Jij_{new} = Jij_{old} + K' * ( <si*sj> - <si*sj>_{emp} )

where K and K' are pos. "step size" constants.
(On the RHS, <si> and <si*sj> are w.r.t. hi_{old} and Jij_{old}.) But this process can be slow. It can be sped up by choosing K, K' adaptively; maybe the maxentropy package does this more intelligently. 3.) In a typical Ising model, interactions are nearest neighbor, which means that Jij is sparse (i.e. it only =1 for neighboring spins i and j). But in principle your model can certainly handle arbitrary Jij. However, it might be easier (and less prone to overfitting) if you keep Jij sparse. Hope this helps. Best, James From ivan.nazarenko at gmail.com Fri Oct 27 17:12:59 2006 From: ivan.nazarenko at gmail.com (Ivan Nazarenko) Date: Fri, 27 Oct 2006 21:12:59 +0000 (UTC) Subject: [SciPy-user] numpy - scipy version hell References: <200610191314.49095.jkgruet@sandia.gov> <4537CF49.8000008@ee.byu.edu> Message-ID: Travis Oliphant <oliphant at ee.byu.edu> writes:
> Very few packages will have the correct versions of numpy and scipy that
> work together.
>
> The latest that work together is Scipy 0.5.1 and NumPy 1.0rc2
>
> Look for an updated SciPy release to work with NumPy 1.0 when it comes
> out next week.
>
> -Travis
I had a similar problem, and OK, I will wait until the next Scipy release. But it is a bit odd that we can't download the 1.0rc2 version easily; sourceforge is only hosting 1.0 (final), at least for linux. I downloaded the most recent versions of numpy and scipy and can't make them work. And can't even download a compatible numpy version!!! Go figure... Ivan From oliphant.travis at ieee.org Fri Oct 27 17:31:44 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 27 Oct 2006 15:31:44 -0600 Subject: [SciPy-user] numpy - scipy version hell In-Reply-To: References: <200610191314.49095.jkgruet@sandia.gov> <4537CF49.8000008@ee.byu.edu> Message-ID: <45427AC0.4000905@ieee.org> Ivan Nazarenko wrote:
> Travis Oliphant <oliphant at ee.byu.edu> writes:
>> Very few packages will have the correct versions of numpy and scipy that
>> work together.
>>
>> The latest that work together is Scipy 0.5.1 and NumPy 1.0rc2
>>
>> Look for an updated SciPy release to work with NumPy 1.0 when it comes
>> out next week.
>>
>> -Travis
>
> I had a similar problem, and OK, I will wait until next Scipy release. But it
> is a bit odd that we can't download the 1.0rc2 version easily; sourceforge is
> only hosting 1.0 (final), at least for linux.
If you are building from scratch, then you don't need the binaries. You can build 1.0 and 0.5.1 and they will work together. You will have a few unit tests fail, but they are pretty inconsequential.
> I downloaded the most recent versions of numpy and scipy and can't make them
> work. And can't even download a compatible numpy version!!! Go figure...
What is the error you are getting? They should work fine when you build them yourself. The only issue is binary compatibility which is the reason 1.0rc2 binaries are made available. From ckkart at hoc.net Fri Oct 27 21:47:01 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 28 Oct 2006 10:47:01 +0900 Subject: [SciPy-user] packaging with numpy.distutils Message-ID: <4542B695.8040806@hoc.net> Hi, I posted this to the numpy list but maybe it has been overlooked. Apologies for asking here. I am creating a binary package on windows (bdist_wininst) making use of numpy.distutils and I would like to have something installed into PREFIX. With standard distutils this can be achieved by using the data_files keyword with setup. With numpy.distutils however those data end up in site-packages. Any ideas? Christian From scipy at mspacek.mm.st Sat Oct 28 08:23:36 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Sat, 28 Oct 2006 05:23:36 -0700 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup?
In-Reply-To: <454267FB.1060806@ski.org> References: <4540590C.5090905@mspacek.mm.st> <454267FB.1060806@ski.org> Message-ID: <45434BC8.9000909@mspacek.mm.st> Hi James, Bear with me, I don't exactly have a complete understanding of this. James Coughlan wrote:
> Hi,
>
> I've done some max. ent. modeling, although I'm unfamiliar with the
> scipy maxentropy package. A few points:
>
> 1.) If N=10 spins suffices, then there are only 2^10=1024 possible
> configurations of your model, so you can calculate Z and thus the
> average spin <si> (and spin product <si*sj>) values exactly by hand, and
> completely bypass the maxentropy package. On the other hand, maybe scipy
> maxentropy can save you a lot of work!
Hmm. I don't really follow you. I already have the average spin values and average spin product values (I get those from experiment). I want to find a combination of the individual spins and spin products that maximizes the entropy = -sum( pi*log(pi) ) (where each pi is the model's predicted probability of one of 1024 states) of the probability distribution for all possible combinations of spin states (up, up, down.... , or up, down, up...).
> 2.) If you are doing max. ent. modeling, then you are given *empirical*
> (measured) values of <si>_{emp} and <si*sj>_{emp} and are trying to find
> model parameters hi and Jij to match these values, i.e. <si> = <si>_{emp}
> and <si*sj> = <si*sj>_{emp}, where the LHS's are the averages w.r.t. the
> model. You can solve for the model parameters by iterating these
> gradient descent equations:
>
> hi_{new} = hi_{old} + K * ( <si> - <si>_{emp} )
> Jij_{new} = Jij_{old} + K' * ( <si*sj> - <si*sj>_{emp} )
>
> where K and K' are pos. "step size" constants. (On the RHS, <si> and
> <si*sj> are w.r.t. hi_{old} and Jij_{old}.)
>
> But this process can be slow. It can be sped up by choosing K, K'
> adaptively; maybe the maxentropy package does this more intelligently.
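[Editor's note: the quoted update rule can be illustrated on a single spin, where the model average has the closed form <s> = tanh(h). The empirical value and step size below are made up, and the sign of the step is chosen so the iteration converges; this is a toy sketch, not the scipy.maxentropy algorithm:]

```python
import numpy as np

s_emp = 0.3   # made-up empirical mean <s>_{emp}
h = 0.0       # initial field
K = 0.5       # step size
for _ in range(200):
    s_model = np.tanh(h)           # exact model average for one +/-1 spin
    h = h - K * (s_model - s_emp)  # step until <s> matches <s>_{emp}
# h converges to atanh(0.3), where tanh(h) equals the empirical mean
```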
I thought the whole magic of running maximum entropy is not just that you end up with a set of hi and Jij that give you a probability distribution that matches the expectation values that you asked for (to within some tolerance), but that you also choose those hi and Jij such that the distribution's entropy is maximized, thereby minimizing any assumptions about it that weren't specified by the constraints (such as third, or fourth, or higher order statistics). > 3.) In a typical Ising model, interactions are nearest neighbor, which > means that Jij is sparse (i.e. it only =1 for neighboring spins i and > j). But in principal your model can certainly handle arbitrary Jij. > However, it might be easier (and less prone to overfitting) if you keep > Jij sparse. In this case, there's no measure of nearness of neighbours that I can use. They're all equally potential neighbours of each other. I'm not sure this is a typical nearest neighbour Ising model. After some more reading, I almost suspect that the authors of the Nature paper I mentioned shouldn't have chosen to call it Ising. I think many of the Jij will be quite small, but I want that to come out of the model. I don't have any way to justify setting some of them to zero ahead of time. I certainly doubt that any of the Jij would end up equal to 1. Martin From coughlan at ski.org Sat Oct 28 13:18:45 2006 From: coughlan at ski.org (James Coughlan) Date: Sat, 28 Oct 2006 10:18:45 -0700 Subject: [SciPy-user] Maximum entropy distribution for Ising model - setup? In-Reply-To: <45434BC8.9000909@mspacek.mm.st> References: <4540590C.5090905@mspacek.mm.st> <454267FB.1060806@ski.org> <45434BC8.9000909@mspacek.mm.st> Message-ID: <454390F5.5020806@ski.org> Martin, Hope this makes sense. If not, why don't we take this discussion offline. Best, James 1.) 
"I want to find a combination of the individual spins and spin products that maximizes the entropy" No, you want to find a *probability distribution* of spins that maximizes the entropy. Remember, the entropy of a random variable is defined for a distribution of the variable (e.g. configuration of spins), not for a particular value of the variable. 2.) "I thought the whole magic of running maximum entropy is not just that you end up with a set of hi and Jij that give you a probability distribution that matches the expectation values that you asked for (to within some tolerance), but that you also choose those hi and Jij such that the distribution's entropy is maximized" True -- see the summary below. 3.) "I think many of the Jij will be quite small, but I want that to come out of the model. I don't have any way to justify setting some of them to zero ahead of time." No problem, as long as your empirical values are measured accurately enough (and assuming the max.entr. framework makes sense for your application). 4.) Maxent summary:

Case 1. A single scalar empirical statistic

Given a variable x (e.g. N spin states) and an empirical statistic <f(x)>_{emp}, the max. ent. distr. whose statistic matches the empirical statistic is:

P(x) = e^{a*f(x)} / Z(a)

where a is chosen such that <f(x)> = <f(x)>_{emp}. Entropy is defined as H = -\sum_x P(x) log P(x).

Proof (sketch): Use a Lagrange multiplier to maximize entropy subject to the constraint <f(x)> = <f(x)>_{emp}:

E = H - a( <f(x)> - <f(x)>_{emp} )

where a is the Lagrange multiplier. Maximize E wrt P(x) and a, get P(x) = const * e^{a*f(x)}, where we call const Z(a) since it depends on a, and where a is chosen to satisfy <f(x)> = <f(x)>_{emp}.

Case 2. More than one scalar empirical statistic

Now f(x) can be a scalar or vector function (one component for each empirically observed scalar value) of x. In your spin example, it is a vector with many components: f(s)=(s_1, ... s_N; s_12, ... s_1N, s_22, ... s_2N, s_N1, ... s_NN), in which case the Lagrange multipliers are the h_i and J_ij 's.
A similar result is obtained.

From arserlom at gmail.com  Sat Oct 28 14:30:36 2006
From: arserlom at gmail.com (Armando Serrano Lombillo)
Date: Sat, 28 Oct 2006 20:30:36 +0200
Subject: [SciPy-user] How to calculate minors.
In-Reply-To: 
References: 
Message-ID: 

Thanks to both for your answers, and excuse me for being so slow in my
reply; I've been quite busy lately.

Stephen: I didn't know you could calculate minors that way, thanks for
the tip. Anyway, it looks like it's not the most efficient way of
calculating them. I also think a minor() function should be included in
numpy.

Archibald: I've built two python functions, but I was not able to write
them with just indexing, I had to use hstack and vstack also. Here are
the functions:

from numpy import *

def erase(array, row, column):
    return vstack((hstack((array[:row, :column], array[:row, column+1:])),
                   hstack((array[row+1:, :column], array[row+1:, column+1:]))))

def minor(array, index):
    return linalg.det(erase(array, index, index))

Is it possible to avoid using hstack and vstack in erase()? It looks like
there must be an easier way of doing the same thing. Anyway, in my
opinion, both functions are homemade workarounds for things that should
be included in numpy.

Armando Serrano.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fiepye at atlas.cz  Sun Oct 29 03:36:42 2006
From: fiepye at atlas.cz (fiepye at atlas.cz)
Date: Sun, 29 Oct 2006 09:36:42 +0100
Subject: [SciPy-user] Parallel OO programming in Python.
Message-ID: 

Hello.

I am interested in parallel computing in Python. Besides other modules, I
would like to use the new modules for vector and matrix operations and
scientific computing, SciPy and NumPy. I have already installed the
LAPACK and BLAS libraries. It works well.

For object oriented parallel programming in Python on a single machine I
can use techniques such as Bulk Synchronous Parallelism (BSP) or Message
Passing Interface (MPI).
Some modules and packages for Python are mentioned on
http://www.scipy.org/Topical_Software
under the article "Parallel and distributed programming".

After reading the prerequisites and limitations, I think that the
following ones could be good for me:

PyMPI
Pypar
MPI for Python

But I can't tell which one has the fewest limitations, is efficient in
practice, and will continue to be developed in the future. My favorite is
BSP, but I can't find a package for the present SciPy and NumPy.

Could anybody give me a recommendation on which way I should go?

Thanks.
Fie Pye

From sodomau at postech.ac.kr  Sun Oct 29 05:44:48 2006
From: sodomau at postech.ac.kr (Sunghyun Cho)
Date: Sun, 29 Oct 2006 18:44:48 +0800
Subject: [SciPy-user] Nonlinear optimization with linear constraints
Message-ID: <5143a98b0610290244t3a9dd406ma9dd64cf3d06a35e@mail.gmail.com>

Hello, Scipy users.

I'm currently looking for a way to optimize a nonlinear problem with
linear constraints. I read the SciPy documents, but I found only
optimization methods with bound constraints, which just use min and max
values. If you know any packages related to this problem, please let me
know. I'll deeply appreciate your reply.

THANK YOU!

--
Sunghyun Cho
sodomau at postech.ac.kr
http://sodomau.plus.or.kr
Computer Graphics Lab.
POSTECH
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hgk at et.uni-magdeburg.de  Sun Oct 29 08:32:23 2006
From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser)
Date: Sun, 29 Oct 2006 14:32:23 +0100
Subject: [SciPy-user] Combinatoric/Statistics: length of longest sequence
Message-ID: 

Dear All,

I have a 2d-array A of p-values (coming from pairwise statistical tests
of N datasets). N is typically in the range of 100-1000. Now, given a
certain threshold, I want to find the longest subset S of range(N), so
that A[i][j]>threshold for all i,j from S.
Unfortunately, the array A is not 'sparse', i.e. a lot of pairs (i,j)
will fulfill A[i][j]>threshold (typically ~50% of the elements).

All I can think of are brute force algorithms that probably will run
forever because of the large number of possible combinations to check.

It would be great if someone could give a pointer to a smart algorithm.
If it is not possible to give the exact answer, what algorithm could be
used to get a good approximate answer?

Regards
Hans Georg

From ellisonbg.net at gmail.com  Sun Oct 29 14:16:44 2006
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Sun, 29 Oct 2006 12:16:44 -0700
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: 
References: 
Message-ID: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com>

> For object oriented parallel programming in Python on a single machine I can use techniques such as Bulk Synchronous Parallelism (BSP) or Message Passing Interface (MPI). There are mentioned some modules and packages for Python on
> http://www.scipy.org/Topical_Software
> Article: Parallel and distributed programming.
> After reading prerequisites and limitations I thing that the following ones could be good for me:
> PyMPI
> Pypar
> MPI for Python

I think mpi4py is the best option. It is a very nice implementation
and the developer is working really hard to make it a great package.
Also, I don't think pympi and pypar are being developed actively
anymore.

Another thing you might check out is IPython's parallel computing
facilities that let you use mpi4py (and other mpi bindings)
interactively:

http://ipython.scipy.org/moin/Parallel_Computing

I have never used BSP, but am familiar with the model. I can probably
get it working interactively with IPython as well.

Cheers,

Brian

> But I can't distinguish which one brings fewer limitations, is efficient in application and will develop in future.
> My favorite is BSP, but I can't find a package for present SciPy and NumPy.
> > Could anybody give me a recommendation, which way I should go. > > Thanks. > Fie Pye > > > > > --------------------------------------- > M?me dal?? spot P.?tvrtn??ka. Pod?vejte se www.neuservis.cz > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From haase at msg.ucsf.edu Sun Oct 29 18:04:38 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 29 Oct 2006 15:04:38 -0800 Subject: [SciPy-user] Nonlinear optimization with linear constraints In-Reply-To: <5143a98b0610290244t3a9dd406ma9dd64cf3d06a35e@mail.gmail.com> References: <5143a98b0610290244t3a9dd406ma9dd64cf3d06a35e@mail.gmail.com> Message-ID: <45453386.6020002@msg.ucsf.edu> Sunghyun Cho wrote: > Hello, Scipy users. > > I'm currently looking for a way to optimize nonlinear problem with > linear constraints. > I read documents of scipy, but I found there are just optimization > methods with bound constraints which just use min and max values. > If you know any packages related to this problem, please let me know. > I'll deeply appreciate your reply. > > THANK YOU! > > -- > Sunghyun Cho > sodomau at postech.ac.kr > http://sodomau.plus.or.kr > Computer Graphics Lab. > POSTECH > > Would the cobyla function help you ? It is in scipy.optimize. Look also in the archive of this list for it. -Sebastian From peridot.faceted at gmail.com Sun Oct 29 18:14:00 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sun, 29 Oct 2006 19:14:00 -0400 Subject: [SciPy-user] Combinatoric/Statistics: length of longest sequence In-Reply-To: References: Message-ID: On 29/10/06, Dr. Hans Georg Krauthaeuser wrote: > Dear All, > > I have a 2d-array A of p-values (coming from pairwise statistical tests > of N datasets). N is typically in the range of 100-1000. Now, given a > certain threshold, I want to find the longest subset S of range(N), so > that A[i][j]>threshold for all i,j from S. 
It's not quite clear, here, whether you want the longest *contiguous* subset ({2,3,4,5,6}) or the largest arbitrary subset ({2,17,21,22}). > Unfortunately, the array A is not 'sparse', i.e. a lot of pairs (i,j) > will fulfill A[i][j]>threshold (typically ~50% of the elements). > > All I can think of are brute force algorithms that probably will run > forever because of the large number of possible combinations to check. If you want the longest contiguous subset, for N=1000 that's only of order a million operations for a brute-force algorithm (for each starting element, see how long the run is), so unless you want to do it lots of times I wouldn't worry. If you do, some Numeric cleverness might allow you to move some of the heavy lifting to C, though for simple indexing calculations pure python can be plenty fast. If you want the largest arbitrary subset, that's going to be expensive, and it bears an unnerving resemblance to the subset-sum problem. I can imagine an expensive recursive algorithm that would do it without too much pain, though the space to search is so huge you're liable to be taking a very long time. However you go about it, you may be better off using python's set datatype (for element i, form the set of all elements j for which A[i][j]>threshold). I've attached a demonstration program that solves both interpretations of your problem; the difficult one appears to bog down seriously around N=200 (with p=0.5) but the easy one runs into memory problems before taking too long. A. M. Archibald -------------- next part -------------- A non-text attachment was scrubbed... 
Name: settrick.py
Type: text/x-python
Size: 1055 bytes
Desc: not available
URL: 

From gael.varoquaux at normalesup.org  Sun Oct 29 18:39:53 2006
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 30 Oct 2006 00:39:53 +0100
Subject: [SciPy-user] Error on the wiki
Message-ID: <20061029233953.GC25515@clipper.ens.fr>

It looks like someone made an error on the wiki and killed the end of
the TentativeNumpyTutorial. I do not know how to revert the change;
could someone less clumsy than me do it?

Cheers,

Gaël

From robert.kern at gmail.com  Sun Oct 29 18:55:44 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 29 Oct 2006 17:55:44 -0600
Subject: [SciPy-user] Error on the wiki
In-Reply-To: <20061029233953.GC25515@clipper.ens.fr>
References: <20061029233953.GC25515@clipper.ens.fr>
Message-ID: <45453F80.5010605@gmail.com>

Gael Varoquaux wrote:
> It looks like someone made an error on the wiki and killed the end of
> the TentativeNumpyTutorial. I do not know how to revert the change;
> could someone less clumsy than me do it?

I've reverted back to revision 42. The trick is to click on the "info"
icon (the white "i" on a blue square). You will see a table with the
whole revision history. You can view the diffs between revisions by
choosing revisions in the "Diff" column (the left radio button is the
"from" revision, the right is "to") and clicking on the "Diff" button.
You can see that revisions 43 and 44 deleted a whole bunch of text by
accident. So, to revert, I went to the revision 42 row and clicked on
the "revert" link in the rightmost column.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco From dahl.joachim at gmail.com Mon Oct 30 01:48:49 2006 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Mon, 30 Oct 2006 07:48:49 +0100 Subject: [SciPy-user] Nonlinear optimization with linear constraints In-Reply-To: <45453386.6020002@msg.ucsf.edu> References: <5143a98b0610290244t3a9dd406ma9dd64cf3d06a35e@mail.gmail.com> <45453386.6020002@msg.ucsf.edu> Message-ID: <47347f490610292248p77e39679m1c98c41ef1ebcdda@mail.gmail.com> On 10/30/06, Sebastian Haase wrote: > > Sunghyun Cho wrote: > > Hello, Scipy users. > > > > I'm currently looking for a way to optimize nonlinear problem with > > linear constraints. > > I read documents of scipy, but I found there are just optimization > > methods with bound constraints which just use min and max values. > > If you know any packages related to this problem, please let me know. > > I'll deeply appreciate your reply. > > If your cost-function is convex you could try CVXOPT, which is a general package for solving convex optimization problems; it works quite well with SciPy via the array interface protocol. - Joachim > THANK YOU! > > > > -- > > Sunghyun Cho > > sodomau at postech.ac.kr > > http://sodomau.plus.or.kr > > Computer Graphics Lab. > > POSTECH > > > > > > Would the cobyla function help you ? It is in scipy.optimize. > Look also in the archive of this list for it. > > -Sebastian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgk at et.uni-magdeburg.de Mon Oct 30 03:03:04 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Mon, 30 Oct 2006 09:03:04 +0100 Subject: [SciPy-user] Combinatoric/Statistics: length of longest sequence In-Reply-To: References: Message-ID: A. M. Archibald wrote: > On 29/10/06, Dr. 
Hans Georg Krauthaeuser wrote: >> Dear All, >> >> I have a 2d-array A of p-values (coming from pairwise statistical tests >> of N datasets). N is typically in the range of 100-1000. Now, given a >> certain threshold, I want to find the longest subset S of range(N), so >> that A[i][j]>threshold for all i,j from S. > > It's not quite clear, here, whether you want the longest *contiguous* > subset ({2,3,4,5,6}) or the largest arbitrary subset ({2,17,21,22}). Oh well, sorry for being not precise enough. I'm searching for the largest arbitrary subset. > >> Unfortunately, the array A is not 'sparse', i.e. a lot of pairs (i,j) >> will fulfill A[i][j]>threshold (typically ~50% of the elements). >> >> All I can think of are brute force algorithms that probably will run >> forever because of the large number of possible combinations to check. > > If you want the longest contiguous subset, for N=1000 that's only of > order a million operations for a brute-force algorithm (for each > starting element, see how long the run is), so unless you want to do > it lots of times I wouldn't worry. If you do, some Numeric cleverness > might allow you to move some of the heavy lifting to C, though for > simple indexing calculations pure python can be plenty fast. > > If you want the largest arbitrary subset, that's going to be > expensive, and it bears an unnerving resemblance to the subset-sum > problem. I can imagine an expensive recursive algorithm that would do > it without too much pain, though the space to search is so huge you're > liable to be taking a very long time. However you go about it, you may > be better off using python's set datatype (for element i, form the set > of all elements j for which A[i][j]>threshold). > > I've attached a demonstration program that solves both interpretations > of your problem; the difficult one appears to bog down seriously > around N=200 (with p=0.5) but the easy one runs into memory problems > before taking too long. > > A. M. 
Archibald > > Thanks a lot for your hints and for the example code. I will check what I can do with it. I'll be back when I have some results. Regards Hans Georg From peridot.faceted at gmail.com Mon Oct 30 04:04:59 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Mon, 30 Oct 2006 04:04:59 -0500 Subject: [SciPy-user] Combinatoric/Statistics: length of longest sequence In-Reply-To: References: Message-ID: On 30/10/06, Dr. Hans Georg Krauthaeuser wrote: > A. M. Archibald wrote: > > On 29/10/06, Dr. Hans Georg Krauthaeuser wrote: > >> Dear All, > >> > >> I have a 2d-array A of p-values (coming from pairwise statistical tests > >> of N datasets). N is typically in the range of 100-1000. Now, given a > >> certain threshold, I want to find the longest subset S of range(N), so > >> that A[i][j]>threshold for all i,j from S. > > > > It's not quite clear, here, whether you want the longest *contiguous* > > subset ({2,3,4,5,6}) or the largest arbitrary subset ({2,17,21,22}). > > Oh well, sorry for being not precise enough. I'm searching for the > largest arbitrary subset. Ah, then you are definitely looking at an NP-complete problem: http://en.wikipedia.org/wiki/Clique_problem They suggest a heuristic, but I don't know how effective it is likely to be; it probably depends on the structure of your graph (matrix). It's possible, though unlikely, that a more continuous version of your problem (one that took into account the actual values of A[i][j], for example) would be more tractable - in fact, the eigenvector corresponding to the maximal eigenvalue of A should be a node weighting which has larger values for the "more connected" nodes (in a particular sense). > Thanks a lot for your hints and for the example code. I will check what > I can do with it. I'll be back when I have some results. It's just a quick hack, and can probably be significantly improved, but good luck. A. M. 
Archibald From nwagner at iam.uni-stuttgart.de Mon Oct 30 07:20:11 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 30 Oct 2006 13:20:11 +0100 Subject: [SciPy-user] CVXOPT In-Reply-To: <47347f490610292248p77e39679m1c98c41ef1ebcdda@mail.gmail.com> References: <5143a98b0610290244t3a9dd406ma9dd64cf3d06a35e@mail.gmail.com> <45453386.6020002@msg.ucsf.edu> <47347f490610292248p77e39679m1c98c41ef1ebcdda@mail.gmail.com> Message-ID: <4545EDFB.1050900@iam.uni-stuttgart.de> Joachim Dahl wrote: > > > On 10/30/06, *Sebastian Haase* > wrote: > > Sunghyun Cho wrote: > > Hello, Scipy users. > > > > I'm currently looking for a way to optimize nonlinear problem with > > linear constraints. > > I read documents of scipy, but I found there are just optimization > > methods with bound constraints which just use min and max values. > > If you know any packages related to this problem, please let me > know. > > I'll deeply appreciate your reply. > > > > > If your cost-function is convex you could try CVXOPT, which is a general > package for solving convex optimization problems; it works quite well > with > SciPy via the array interface protocol. > > - Joachim > > > THANK YOU! > > > > -- > > Sunghyun Cho > > sodomau at postech.ac.kr > > > > http://sodomau.plus.or.kr > > Computer Graphics Lab. > > POSTECH > > > > > > Would the cobyla function help you ? It is in scipy.optimize. > Look also in the archive of this list for it. > > -Sebastian > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I have installed CVXOPT. Where can I get a pdf version of the documentation ? 
http://www.ee.ucla.edu/~vandenbe/cvxopt/doc/cvxopt.html A make in cvxopt-0.8/doc failed mkhowto: Command not found Nils From fccoelho at gmail.com Mon Oct 30 07:27:35 2006 From: fccoelho at gmail.com (Flavio Coelho) Date: Mon, 30 Oct 2006 09:27:35 -0300 Subject: [SciPy-user] Parallel OO programming in Python. In-Reply-To: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> Message-ID: Thank you for this post and many thanks to Fernando Perez, for the awesome tool that Ipython1 is becomming!!! I am beginning to use it today!! Fl?vio On 10/29/06, Brian Granger wrote: > > > For object oriented parallel programming in Python on a single machine I > can use techniques such as Bulk Synchronous Parallelism (BSP) or Message > Passing Interface (MPI). There are mentioned some modules and packages for > Python on > > http://www.scipy.org/Topical_Software > > Article: Parallel and distributed programming. > > After reading prerequisites and limitations I thing that the following > ones could be good for me: > > PyMPI > > Pypar > > MPI for Python > > I think mpi4py is the best option. It is a very nice implementation > and the developer is working really hard to make it a great package. > Also, I don't think pympi and pypar are being developed actively > anymore. > > Another thing you might check out is IPython's parallel computing > facilities that let you use mpi4py (and other mpi bindings) > interactively: > > http://ipython.scipy.org/moin/Parallel_Computing > > I have never used BPS, but am familiar with the model. Can probably > get it working interactively with IPython as well. > > Cheers, > > Brian > > > But I can't distinguish which one brings fewer limitations, is efficient > in application and will develop in future. > > My favorite is BSP, but I can't find a package for present SciPy and > NumPy. > > > > Could anybody give me a recommendation, which way I should go. > > > > Thanks. 
> > Fie Pye > > > > > > > > > > --------------------------------------- > > M?me dal?? spot P.?tvrtn??ka. Pod?vejte se www.neuservis.cz > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Fl?vio Code?o Coelho registered Linux user # 386432 --------------------------- "Laws are like sausages. It's better not to see them being made." Otto von Bismark -------------- next part -------------- An HTML attachment was scrubbed... URL: From Giovanni.Samaey at cs.kuleuven.be Mon Oct 30 09:03:26 2006 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Mon, 30 Oct 2006 15:03:26 +0100 Subject: [SciPy-user] Parallel OO programming in Python. In-Reply-To: References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> Message-ID: <4546062E.1020501@cs.kuleuven.ac.be> Hi, I also looked at the entry on Ipython1 and it looks really promising. Did I understand correctly that Ipython1 is able to run parallel python code, regardless of which "MPI for python" package one eventually uses? Giovanni Flavio Coelho wrote: > Thank you for this post and many thanks to Fernando Perez, for the > awesome tool that Ipython1 is becomming!!! I am beginning to use it > today!! > > Fl?vio > > On 10/29/06, * Brian Granger* > wrote: > > > For object oriented parallel programming in Python on a single > machine I can use techniques such as Bulk Synchronous Parallelism > (BSP) or Message Passing Interface (MPI). There are mentioned some > modules and packages for Python on > > http://www.scipy.org/Topical_Software > > Article: Parallel and distributed programming. 
> > After reading prerequisites and limitations I thing that the > following ones could be good for me: > > PyMPI > > Pypar > > MPI for Python > > I think mpi4py is the best option. It is a very nice implementation > and the developer is working really hard to make it a great package. > Also, I don't think pympi and pypar are being developed actively > anymore. > > Another thing you might check out is IPython's parallel computing > facilities that let you use mpi4py (and other mpi bindings) > interactively: > > http://ipython.scipy.org/moin/Parallel_Computing > > I have never used BPS, but am familiar with the model. Can probably > get it working interactively with IPython as well. > > Cheers, > > Brian > > > But I can't distinguish which one brings fewer limitations, is > efficient in application and will develop in future. > > My favorite is BSP, but I can't find a package for present SciPy > and NumPy. > > > > Could anybody give me a recommendation, which way I should go. > > > > Thanks. > > Fie Pye > > > > > > > > > > --------------------------------------- > > M?me dal?? spot P.?tvrtn??ka. Pod?vejte se www.neuservis.cz > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > Fl?vio Code?o Coelho > registered Linux user # 386432 > --------------------------- > "Laws are like sausages. It's better not to see them being made." 
> Otto von Bismark
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From ellisonbg.net at gmail.com  Mon Oct 30 11:01:53 2006
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Mon, 30 Oct 2006 09:01:53 -0700
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: <4546062E.1020501@cs.kuleuven.ac.be>
References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com>
	<4546062E.1020501@cs.kuleuven.ac.be>
Message-ID: <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com>

> I also looked at the entry on Ipython1 and it looks really promising.
> Did I understand correctly that Ipython1 is able to run parallel python
> code, regardless of which "MPI for python" package one eventually uses?

In principle yes. The IPython engines have a configuration option
that you use to import the mpi bindings of your choice. But, to work
properly, the import statement must cause MPI_Init() to be called. If
the bindings you are using don't do that, it should be easy to write a
wrapper module that imports and also calls MPI_Init(). Also, some of
this depends on what MPI implementation you are using. The more
modern ones (like openmpi) are really nice and pretty forgiving. The
older ones, though, sometimes require that Python itself (not your
script) call MPI_Init(). This is also very easy to work around. The
bottom line is that we can help you get it to work with whatever mpi
binding and implementation you want.

With that said, I do highly recommend mpi4py and openmpi - we use
these regularly with ipython and it all works well.

Brian

> Giovanni
>
> Flavio Coelho wrote:
> > Thank you for this post and many thanks to Fernando Perez, for the
> > awesome tool that Ipython1 is becoming!!! I am beginning to use it
> > today!!
> > > > Fl?vio > > > > On 10/29/06, * Brian Granger* > > wrote: > > > > > For object oriented parallel programming in Python on a single > > machine I can use techniques such as Bulk Synchronous Parallelism > > (BSP) or Message Passing Interface (MPI). There are mentioned some > > modules and packages for Python on > > > http://www.scipy.org/Topical_Software > > > Article: Parallel and distributed programming. > > > After reading prerequisites and limitations I thing that the > > following ones could be good for me: > > > PyMPI > > > Pypar > > > MPI for Python > > > > I think mpi4py is the best option. It is a very nice implementation > > and the developer is working really hard to make it a great package. > > Also, I don't think pympi and pypar are being developed actively > > anymore. > > > > Another thing you might check out is IPython's parallel computing > > facilities that let you use mpi4py (and other mpi bindings) > > interactively: > > > > http://ipython.scipy.org/moin/Parallel_Computing > > > > I have never used BPS, but am familiar with the model. Can probably > > get it working interactively with IPython as well. > > > > Cheers, > > > > Brian > > > > > But I can't distinguish which one brings fewer limitations, is > > efficient in application and will develop in future. > > > My favorite is BSP, but I can't find a package for present SciPy > > and NumPy. > > > > > > Could anybody give me a recommendation, which way I should go. > > > > > > Thanks. > > > Fie Pye > > > > > > > > > > > > > > > --------------------------------------- > > > M?me dal?? spot P.?tvrtn??ka. 
Pod?vejte se www.neuservis.cz > > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > -- > > Fl?vio Code?o Coelho > > registered Linux user # 386432 > > --------------------------- > > "Laws are like sausages. It's better not to see them being made." > > Otto von Bismark > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fccoelho at gmail.com Mon Oct 30 12:53:54 2006 From: fccoelho at gmail.com (Flavio Coelho) Date: Mon, 30 Oct 2006 15:53:54 -0200 Subject: [SciPy-user] Parallel OO programming in Python. In-Reply-To: <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com> References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> <4546062E.1020501@cs.kuleuven.ac.be> <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com> Message-ID: Hi Brian, Can Ipython1 take advantage of multiple core cpus or SMP machines in the same way it does for a networked cluster? It would be nice to have that since quad-core cpu are arriving on the market and SMP machines are pretty common already. Thanks! Fl?vio On 10/30/06, Brian Granger wrote: > > > I also looked at the entry on Ipython1 and it looks really promising. > > Did I understand correctly that Ipython1 is able to run parallel python > > code, regardless of which "MPI for python" package one eventually uses? 
> > In principle yes. The IPython engines have a configuration option > that you use to import the mpi bindings of your choice. But, to work > properly, the import statement must cause MPI_Init() to be called. If > the bindings you are using don't do that, it should b easy to write a > wrapper module that imports and also calls MPI_Init(). Also, some of > this depends on what MPI implementation you are using. The more > moderns ones (like openmpi) are really nice and pretty forgiving. The > older ones, though, sometimes require that Python itself (not your > script) call MPI_Init(). This is also very easy to work around. The > bottom line is that we can help you get it to work with whatever mpi > binding and implementation you want. > > Withh that said, I highly do highly recommend mpi4py and openmpi - we > use these regularly with ipython and it all works well. > > Brian > > > > Giovanni > > > > Flavio Coelho wrote: > > > Thank you for this post and many thanks to Fernando Perez, for the > > > awesome tool that Ipython1 is becomming!!! I am beginning to use it > > > today!! > > > > > > Fl?vio > > > > > > On 10/29/06, * Brian Granger* > > > wrote: > > > > > > > For object oriented parallel programming in Python on a single > > > machine I can use techniques such as Bulk Synchronous Parallelism > > > (BSP) or Message Passing Interface (MPI). There are mentioned some > > > modules and packages for Python on > > > > http://www.scipy.org/Topical_Software > > > > Article: Parallel and distributed programming. > > > > After reading prerequisites and limitations I thing that the > > > following ones could be good for me: > > > > PyMPI > > > > Pypar > > > > MPI for Python > > > > > > I think mpi4py is the best option. It is a very nice > implementation > > > and the developer is working really hard to make it a great > package. > > > Also, I don't think pympi and pypar are being developed actively > > > anymore. 
> > > > > > Another thing you might check out is IPython's parallel computing > > > facilities that let you use mpi4py (and other mpi bindings) > > > interactively: > > > > > > http://ipython.scipy.org/moin/Parallel_Computing > > > > > > I have never used BPS, but am familiar with the model. Can > probably > > > get it working interactively with IPython as well. > > > > > > Cheers, > > > > > > Brian > > > > > > > But I can't distinguish which one brings fewer limitations, is > > > efficient in application and will develop in future. > > > > My favorite is BSP, but I can't find a package for present SciPy > > > and NumPy. > > > > > > > > Could anybody give me a recommendation, which way I should go. > > > > > > > > Thanks. > > > > Fie Pye > > > > > > > > > > > > > > > > > > > > --------------------------------------- > > > > M?me dal?? spot P.?tvrtn??ka. Pod?vejte se www.neuservis.cz > > > > > > > > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > -- > > > Fl?vio Code?o Coelho > > > registered Linux user # 386432 > > > --------------------------- > > > "Laws are like sausages. It's better not to see them being made." 
> > > Otto von Bismarck
> > ------------------------------------------------------------------------
> >
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Flávio Codeço Coelho
registered Linux user # 386432
---------------------------
"Laws are like sausages. It's better not to see them being made."
Otto von Bismarck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ellisonbg.net at gmail.com Mon Oct 30 14:18:34 2006
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Mon, 30 Oct 2006 12:18:34 -0700
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> <4546062E.1020501@cs.kuleuven.ac.be> <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com>
Message-ID: <6ce0ac130610301118ncd8d3al32a4c68f6db3801@mail.gmail.com>

> Can Ipython1 take advantage of multi-core CPUs or SMP machines in the
> same way it does for a networked cluster? It would be nice to have that,
> since quad-core CPUs are arriving on the market and SMP machines are pretty
> common already.

This was one of the major use cases we have had in mind. So yes.
Some details though:

In IPython1 computations are done by a separate process that we are
calling the IPython engine. Multiple IPython engines can be run on
multiple systems (distributed memory), a single (multi CPU and/or
multi-core) system or a combination of the two.
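On a single multi-core machine, such a session might look something like the following. This is only a sketch, not a definitive recipe: `ipcontroller` and `ipengine` are the IPython1 scripts that appear elsewhere in this thread, and the exact options vary between versions.

```shell
# Start the controller process that the engines register with (sketch).
ipcontroller &

# Start one engine per CPU/core; each engine is a separate OS process.
ipengine &
ipengine &
```

The client session then connects to the controller, which farms work out to however many engines have registered.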
I have run two IPython engines on my dual core MacBook and seen 2x
speedup with simple python scripts. I wasn't using MPI, but with MPI you
should see similar behavior as long as your algorithm scales linearly to
two engine processes.

I can't wait to use IPython1 with 4/8/16 cores on my workstation :)

Please let us know if you have questions or problems.

Brian

On 10/30/06, Flavio Coelho wrote:
> Hi Brian,
>
> Can Ipython1 take advantage of multi-core CPUs or SMP machines in the
> same way it does for a networked cluster? It would be nice to have that,
> since quad-core CPUs are arriving on the market and SMP machines are pretty
> common already.
>
> Thanks!
>
> Flávio
>
> On 10/30/06, Brian Granger wrote:
> > > I also looked at the entry on Ipython1 and it looks really promising.
> > > Did I understand correctly that Ipython1 is able to run parallel python
> > > code, regardless of which "MPI for python" package one eventually uses?
> >
> > In principle yes. The IPython engines have a configuration option
> > that you use to import the mpi bindings of your choice. But, to work
> > properly, the import statement must cause MPI_Init() to be called. If
> > the bindings you are using don't do that, it should be easy to write a
> > wrapper module that imports and also calls MPI_Init(). Also, some of
> > this depends on what MPI implementation you are using. The more
> > modern ones (like openmpi) are really nice and pretty forgiving. The
> > older ones, though, sometimes require that Python itself (not your
> > script) call MPI_Init(). This is also very easy to work around. The
> > bottom line is that we can help you get it to work with whatever mpi
> > binding and implementation you want.
> >
> > With that said, I do highly recommend mpi4py and openmpi - we
> > use these regularly with ipython and it all works well.
> >
> > Brian
> >
> > [...]
> Otto von Bismarck
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From fperez.net at gmail.com Mon Oct 30 14:25:44 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 30 Oct 2006 12:25:44 -0700
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com>
Message-ID:

On 10/30/06, Flavio Coelho wrote:
> Thank you for this post and many thanks to Fernando Perez, for the awesome
> tool that Ipython1 is becoming!!! I am beginning to use it today!!

As much as I like to see my name attached to such statements, it's
important to acknowledge that virtually all the credit for this effort
should go to Brian Granger and not to me :) Without his interest,
boundless energy, and his involvement of Benjamin Ragan-Kelley (who
also deserves much credit for doing lots of the nasty Twisted coding
over this summer), we wouldn't be anywhere near this.

But thanks anyway ;)

Cheers,

f

From icy.flame.gm at gmail.com Mon Oct 30 14:31:19 2006
From: icy.flame.gm at gmail.com (iCy-fLaME)
Date: Mon, 30 Oct 2006 19:31:19 +0000
Subject: [SciPy-user] No longer able to load scipy
Message-ID:

Not sure when my scipy got broken, but it worked fine at least a week
ago, so I removed both numpy and scipy and downloaded the latest
releases (numpy-1.0.tar.gz & scipy-0.5.1.tar.gz), then recompiled from
source. There were no complaints while compiling and installing either
numpy or scipy, but I am still unable to import scipy, while numpy
works fine.

I am using: python-2.4.3-18.fc6

>>> import scipy
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/site-packages/scipy/__init__.py", line 18, in ?
    import pkg_resources as _pr # activate namespace packages (manipulates __path__)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2470, in ?
    working_set = WorkingSet()
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 343, in __init__
    self.add_entry(entry)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 358, in add_entry
    for dist in find_distributions(entry, True):
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1565, in find_on_path
    path_item = _normalize_cached(path_item)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1712, in _normalize_cached
    _cache[filename] = result = normalize_path(filename)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1706, in normalize_path
    return os.path.normcase(os.path.realpath(filename))
  File "/usr/lib/python2.4/posixpath.py", line 431, in realpath
    return abspath(filename)
  File "/usr/lib/python2.4/posixpath.py", line 404, in abspath
    path = join(os.getcwd(), path)
OSError: [Errno 2] No such file or directory

Any thoughts?

From fccoelho at gmail.com Mon Oct 30 14:32:16 2006
From: fccoelho at gmail.com (Flavio Coelho)
Date: Mon, 30 Oct 2006 17:32:16 -0200
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com>
Message-ID:

Ooops, sorry for that mistake! So I redirect my thanks to Brian!!

Thanks Brian!

Flávio

On 10/30/06, Fernando Perez wrote:
>
> On 10/30/06, Flavio Coelho wrote:
> > Thank you for this post and many thanks to Fernando Perez, for the
> > awesome tool that Ipython1 is becoming!!! I am beginning to use it today!!
>
> As much as I like to see my name attached to such statements, it's
> important to acknowledge that virtually all the credit for this effort
> should go to Brian Granger and not to me :) Without his interest,
> boundless energy, and his involvement of Benjamin Ragan-Kelley (who
> also deserves much credit for doing lots of the nasty Twisted coding
> over this summer), we wouldn't be anywhere near this.
>
> But thanks anyway ;)
>
> Cheers,
>
> f
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Flávio Codeço Coelho
registered Linux user # 386432
---------------------------
"Laws are like sausages. It's better not to see them being made."
Otto von Bismarck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fccoelho at gmail.com Mon Oct 30 14:47:10 2006
From: fccoelho at gmail.com (Flavio Coelho)
Date: Mon, 30 Oct 2006 17:47:10 -0200
Subject: [SciPy-user] Parallel OO programming in Python.
In-Reply-To: <6ce0ac130610301118ncd8d3al32a4c68f6db3801@mail.gmail.com>
References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> <4546062E.1020501@cs.kuleuven.ac.be> <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com> <6ce0ac130610301118ncd8d3al32a4c68f6db3801@mail.gmail.com>
Message-ID:

Thanks Brian,

I am playing with it right now! I am planning to run it soon on multiple
machines with multiple CPUs. Will I have to install Ipython1 on every node,
or will the controller host spawn the processes across the net? In the
README you seem to be starting the engines locally on each host.... but
before I get ahead of myself, I am getting the following error when I try
to start an engine on my localhost:

fccoelho at sombra ~/Downloads/Twisted $ ipengine &
[2] 32018
fccoelho at sombra ~/Downloads/Twisted $ 2006/10/30 16:44 -0200 [-] Log opened.
2006/10/30 16:44 -0200 [-] Starting factory < ipython1.kernel.enginevanilla.VanillaEngineClientFactoryFromEngineServiceobject at 0xb6706eac> 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0,127.0.0.1] Unhandled Error Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/twisted/python/log.py", line 48, in callWithLogger return callWithContext({"system": lp}, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/log.py", line 33, in callWithContext return context.call({ILogContext: newCtx}, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/context.py", line 59, in callWithContext return self.currentContext().callWithContext(ctx, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/context.py", line 37, in callWithContext return func(*args,**kw) --- --- File "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", line 139, in _doReadOrWrite why = getattr(selectable, method)() File "/usr/lib/python2.4/site-packages/twisted/internet/tcp.py", line 362, in doRead return self.protocol.dataReceived(data) File "/usr/lib/python2.4/site-packages/twisted/protocols/basic.py", line 99, in dataReceived self.doData() File "/usr/lib/python2.4/site-packages/twisted/protocols/basic.py", line 62, in doData self.stringReceived(self.__buffer) File "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", line 557, in stringReceived self.nextHandler(msg) File "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", line 573, in dispatch f(args) File "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", line 706, in handle_REGISTER qe = engineservice.QueuedEngine(self, keepUpToDate=True) File "/usr/lib/python2.4/site-packages/ipython1/kernel/engineservice.py", line 342, in __init__ self.registerMethods() File "/usr/lib/python2.4/site-packages/ipython1/kernel/engineservice.py", line 345, in registerMethods zi.alsoProvides(self, 
*self.engine.__provides__) exceptions.AttributeError: 'module' object has no attribute 'alsoProvides' 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0,127.0.0.1] unregistered engine None 2006/10/30 16:44 -0200 [VanillaEngineClientProtocol,client] Stopping factory 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0,127.0.0.1] Unhandled Error Traceback (most recent call last): File "/usr/bin/ipcontroller", line 51, in main reactor.run() File "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", line 218, in run self.mainLoop() File "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", line 229, in mainLoop self.doIteration(t) File "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", line 133, in doSelect _logrun(selectable, _drdw, selectable, method, dict) --- --- File "/usr/lib/python2.4/site-packages/twisted/python/log.py", line 48, in callWithLogger return callWithContext({"system": lp}, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/log.py", line 33, in callWithContext return context.call({ILogContext: newCtx}, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/context.py", line 59, in callWithContext return self.currentContext().callWithContext(ctx, func, *args, **kw) File "/usr/lib/python2.4/site-packages/twisted/python/context.py", line 37, in callWithContext return func(*args,**kw) File "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", line 149, in _doReadOrWrite self._disconnectSelectable(selectable, why, method=="doRead") File "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", line 256, in _disconnectSelectable selectable.connectionLost(failure.Failure(why)) File "/usr/lib/python2.4/site-packages/twisted/internet/tcp.py", line 416, in connectionLost protocol.connectionLost(reason) File "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", line 538, in connectionLost self.factory.unregisterEngine(self.id) File 
"/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", line 1023, in unregisterEngine
    return self.service.unregisterEngine(id)
  File "/usr/lib/python2.4/site-packages/ipython1/kernel/controllerservice.py", line 300, in unregisterEngine
    del self.engines[id]
exceptions.KeyError:

On 10/30/06, Brian Granger wrote:
> [...]
> > > > > Otto von Bismarck
> > [...]
> >
> > --
> > Flávio Codeço Coelho
> > registered Linux user # 386432
> > ---------------------------
> > "Laws are like sausages. It's better not to see them being made."
> > Otto von Bismarck
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Flávio Codeço Coelho
registered Linux user # 386432
---------------------------
"Laws are like sausages. It's better not to see them being made."
Otto von Bismarck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ahoover at eecs.berkeley.edu Mon Oct 30 15:35:39 2006
From: ahoover at eecs.berkeley.edu (Aaron Hoover)
Date: Mon, 30 Oct 2006 12:35:39 -0800
Subject: [SciPy-user] how to filter outliers from an array
Message-ID: <27403C7D-BDFF-4475-8884-3E07C2DCABC3@eecs.berkeley.edu>

Hi All,

I'm doing feature matching between two images and as a result I get
back an array of dx, dy shift values in pixel space for potential
features that match between two images.
I'd like to be able to throw
out [dx, dy] vectors for which either value deviates significantly
(more than one std deviation, perhaps) from the mean (I'm dealing
with 2D rigid body translations only). I could do this by looping over
the values and comparing them individually to the mean, but I'm
wondering if there's a more compact (single line?) way of doing this,
perhaps using the scipy.stats package. Something along the lines of
the filter built-in for lists, but that handles the higher
dimensionality, would be nice.

Thanks,
Aaron

From coughlan at ski.org Mon Oct 30 15:48:57 2006
From: coughlan at ski.org (James Coughlan)
Date: Mon, 30 Oct 2006 12:48:57 -0800
Subject: [SciPy-user] how to filter outliers from an array
In-Reply-To: <27403C7D-BDFF-4475-8884-3E07C2DCABC3@eecs.berkeley.edu>
References: <27403C7D-BDFF-4475-8884-3E07C2DCABC3@eecs.berkeley.edu>
Message-ID: <45466539.6030009@ski.org>

How about this?

####
from numpy.random import rand
from scipy import stats

dx=rand(5,5) #substitute whatever matrices you're actually
dy=rand(5,5) #using here

#compute mean and std for dx and dy:
mx=stats.mean(dx.flat)
my=stats.mean(dy.flat)
sx=stats.std(dx.flat)
sy=stats.std(dy.flat)

wx=abs(dx-mx)>sx
wy=abs(dy-my)>sy
print wx | wy # logical or
####

Here wx and wy are arrays whose elements are set to True if they lie
more than 1 std. from the mean. So wx | wy will flag those elements for
which either the x or y components are outliers.

James

Aaron Hoover wrote:
>Hi All,
>
>I'm doing feature matching between two images and as a result I get
>back an array of dx, dy shift values in pixel space for potential
>features that match between two images. I'd like to be able to throw
>out [dx, dy] vectors for which either value deviates significantly
>(more than one std deviation, perhaps) from the mean (I'm dealing
>with 2D rigid body translations only). I could do this by looping over
>the values and comparing them individually to the mean, but I'm
>wondering if there's a more compact (single line?)
way of doing this
>perhaps using the scipy.stats package. Something along the lines of
>the filter built-in for lists, but that handles the higher
>dimensionality, would be nice.
>
>Thanks,
>Aaron
>
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user

--
-------------------------------------------------------
James Coughlan, Ph.D., Associate Scientist

Smith-Kettlewell Eye Research Institute

Email: coughlan at ski.org
URL: http://www.ski.org/Rehab/Coughlan_lab/
Phone: 415-345-2146
Fax: 415-345-8455
-------------------------------------------------------

From bhendrix at enthought.com Mon Oct 30 15:57:08 2006
From: bhendrix at enthought.com (Bryce Hendrix)
Date: Mon, 30 Oct 2006 14:57:08 -0600
Subject: [SciPy-user] packaging with numpy.distutils
In-Reply-To: <4542B695.8040806@hoc.net>
References: <4542B695.8040806@hoc.net>
Message-ID: <45466724.7030706@enthought.com>

What may be the easiest thing for you to do is to create a script that
is run as a post-install step and just moves the files to where you want
them. You have to add the script to the scripts argument of setup, then
specify the script with --install-script on the command line.

Bryce

Christian Kristukat wrote:
> Hi,
>
> I posted this to the numpy list but maybe it has been overlooked. Apologies for
> asking here.
> I am creating a binary package on windows (bdist_wininst) making use of
> numpy.distutils and I would like to have something installed into PREFIX. With
> standard distutils this can be achieved by using the data_files keyword with
> setup. With numpy.distutils however those data end up in site-packages.
> Any ideas?
>
> Christian
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From vmas at carabos.com Mon Oct 30 15:59:17 2006
From: vmas at carabos.com (Vicent Mas (V+))
Date: Mon, 30 Oct 2006 21:59:17 +0100
Subject: [SciPy-user] Bug in find_objects? (Was: Re: image processing)
In-Reply-To: <453DFBD9.6060304@bigpond.net.au>
References: <200610231057.37151.vmas@carabos.com> <453DFBD9.6060304@bigpond.net.au>
Message-ID: <200610302159.17749.vmas@carabos.com>

On Tuesday, 24 October 2006 13:41, Gary Ruben wrote:
> I haven't actually tried this, but I think you should take a look at
> scipy.ndimage.measurements.label
> followed by
> scipy.ndimage.measurements.find_objects
> and
> scipy.ndimage.measurements.maximum
>
> These sound like they will do what you want.
>
> Gary R.
>
> Carlos Medrano wrote:
> > Vicent Mas (V+ carabos.com> writes:
> >> Hello,
> >>
> >> I'm a newcomer to image processing and scipy. I'm trying to process a
> >> binary image in order to get the biggest connected component. In
> >> matlab one can use bwareaopen to achieve this goal but I don't
> >> know how to do it with scipy. I've been looking at the scipy.ndimage
> >> module with no luck. Could you help me, please?
> >>
> >> Thanks in advance.
> >> [...]

Hello,

I'm trying to use Gary's recipe (thanks Gary, and sorry for answering so
slowly :-( ) but I get the following error (as can be seen by running the
attached script):

Traceback (most recent call last):
  File "finder.py", line 13, in ?
    measurements.find_objects(label_matrix)
  File "/usr/local/lib/python2.4/site-packages/scipy/ndimage/measurements.py", line 89, in find_objects
    return _nd_image.find_objects(input, max_label)
RuntimeError: data type not supported

It seems to be a bug. Am I right? Or am I doing something stupid?

Thanks in advance for your help.

--
::
 \ /  Vicent Mas   http://www.carabos.com
  0;0  / \   Cárabos Coop.
Enjoy Data
 V  V   " "
-------------- next part --------------
A non-text attachment was scrubbed...
Name: finder.py
Type: application/x-python
Size: 340 bytes
Desc: not available
URL:

From conor.robinson at gmail.com Mon Oct 30 16:38:19 2006
From: conor.robinson at gmail.com (Conor Robinson)
Date: Mon, 30 Oct 2006 13:38:19 -0800
Subject: [SciPy-user] how to filter outliers from an array
In-Reply-To: <45466539.6030009@ski.org>
References: <27403C7D-BDFF-4475-8884-3E07C2DCABC3@eecs.berkeley.edu> <45466539.6030009@ski.org>
Message-ID:

I have found scipy.where() useful, although James' solution is very
nice as well.

On 10/30/06, James Coughlan wrote:
> How about this?
>
> ####
> from numpy.random import rand
> from scipy import stats
>
> dx=rand(5,5) #substitute whatever matrices you're actually
> dy=rand(5,5) #using here
>
> #compute mean and std for dx and dy:
> mx=stats.mean(dx.flat)
> my=stats.mean(dy.flat)
> sx=stats.std(dx.flat)
> sy=stats.std(dy.flat)
>
> wx=abs(dx-mx)>sx
> wy=abs(dy-my)>sy
> print wx | wy # logical or
> ####
>
> Here wx and wy are arrays whose elements are set to True if they lie
> more than 1 std. from the mean. So wx | wy will flag those elements for
> which either the x or y components are outliers.
>
> James
>
> Aaron Hoover wrote:
> >Hi All,
> >
> >I'm doing feature matching between two images and as a result I get
> >back an array of dx, dy shift values in pixel space for potential
> >features that match between two images. I'd like to be able to throw
> >out [dx, dy] vectors for which either value deviates significantly
> >(more than one std deviation, perhaps) from the mean (I'm dealing
> >with 2D rigid body translations only). I could do this by looping over
> >the values and comparing them individually to the mean, but I'm
> >wondering if there's a more compact (single line?) way of doing this,
> >perhaps using the scipy.stats package. Something along the lines of
> >the filter built-in for lists, but that handles the higher
> >dimensionality, would be nice.
> > > >Thanks, > >Aaron > > > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > ------------------------------------------------------- > James Coughlan, Ph.D., Associate Scientist > > Smith-Kettlewell Eye Research Institute > > Email: coughlan at ski.org > URL: http://www.ski.org/Rehab/Coughlan_lab/ > Phone: 415-345-2146 > Fax: 415-345-8455 > ------------------------------------------------------- > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ellisonbg.net at gmail.com Mon Oct 30 17:00:22 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Mon, 30 Oct 2006 15:00:22 -0700 Subject: [SciPy-user] Parallel OO programming in Python. In-Reply-To: References: <6ce0ac130610291116rf184702i6d1f5ccc476ae886@mail.gmail.com> <4546062E.1020501@cs.kuleuven.ac.be> <6ce0ac130610300801s5707a6a3h3480ce08899ecd46@mail.gmail.com> <6ce0ac130610301118ncd8d3al32a4c68f6db3801@mail.gmail.com> Message-ID: <6ce0ac130610301400l491fae58n96e1df7518977845@mail.gmail.com> It looks like you are using an older version of zope.interface. I would grab this version and try it. http://www.zope.org/Products/ZopeInterface/3.1.0c1/ZopeInterface-3.1.0c1.tgz also, I would use a recent (I alway use the svn trunk) version of twisted. Let me know if that helps. > I am playing with it right now! I am planning to run it soon on multiple > machines with multiple CPUs. Will I have to install Ipython1 on every node > or Will the controller host spawn the processes across the net? In the > README you seem to be starting the engines locally on each host.... Yes, you will need to install everything on each host and start the engines on each machine. If you have mpi installed, you can start the engines using mpirun and this will handle it. 
Let us know how it goes. Brian > but before I get Ahead of myself , I am getting the following error when I > try to start an engine on my localhost: > fccoelho at sombra ~/Downloads/Twisted $ ipengine & > [2] 32018 > fccoelho at sombra ~/Downloads/Twisted $ 2006/10/30 16:44 -0200 [-] Log opened. > 2006/10/30 16:44 -0200 [-] Starting factory > object at 0xb6706eac> > 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0, 127.0.0.1] Unhandled > Error > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/twisted/python/log.py", > line 48, in callWithLogger > return callWithContext({"system": lp}, func, *args, **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/log.py", > line 33, in callWithContext > return context.call({ILogContext: newCtx}, func, *args, **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/context.py", > line 59, in callWithContext > return self.currentContext().callWithContext(ctx, func, *args, > **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/context.py", > line 37, in callWithContext > return func(*args,**kw) > --- --- > File > "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", > line 139, in _doReadOrWrite > why = getattr(selectable, method)() > File > "/usr/lib/python2.4/site-packages/twisted/internet/tcp.py", > line 362, in doRead > return self.protocol.dataReceived(data) > File > "/usr/lib/python2.4/site-packages/twisted/protocols/basic.py", > line 99, in dataReceived > self.doData() > File > "/usr/lib/python2.4/site-packages/twisted/protocols/basic.py", > line 62, in doData > self.stringReceived(self.__buffer) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", > line 557, in stringReceived > self.nextHandler(msg) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", > line 573, in dispatch > f(args) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", > line 706, in handle_REGISTER > 
qe = engineservice.QueuedEngine(self, keepUpToDate=True) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/engineservice.py", > line 342, in __init__ > self.registerMethods () > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/engineservice.py", > line 345, in registerMethods > zi.alsoProvides(self, *self.engine.__provides__) > exceptions.AttributeError : 'module' object has no attribute > 'alsoProvides' > > 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0,127.0.0.1] > unregistered engine None > 2006/10/30 16:44 -0200 [VanillaEngineClientProtocol,client] > Stopping factory < > ipython1.kernel.enginevanilla.VanillaEngineClientFactoryFromEngineService > object at 0xb6706eac> > 2006/10/30 16:44 -0200 [VanillaEngineServerProtocol,0,127.0.0.1] Unhandled > Error > Traceback (most recent call last): > File "/usr/bin/ipcontroller", line 51, in main > reactor.run() > File > "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", > line 218, in run > self.mainLoop() > File > "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", > line 229, in mainLoop > self.doIteration(t) > File > "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", > line 133, in doSelect > _logrun(selectable, _drdw, selectable, method, dict) > --- --- > File > "/usr/lib/python2.4/site-packages/twisted/python/log.py", > line 48, in callWithLogger > return callWithContext({"system": lp}, func, *args, **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/log.py", > line 33, in callWithContext > return context.call ({ILogContext: newCtx}, func, *args, **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/context.py", > line 59, in callWithContext > return self.currentContext().callWithContext(ctx, func, *args, > **kw) > File > "/usr/lib/python2.4/site-packages/twisted/python/context.py", > line 37, in callWithContext > return func(*args,**kw) > File > "/usr/lib/python2.4/site-packages/twisted/internet/selectreactor.py", > line 
149, in _doReadOrWrite > self._disconnectSelectable(selectable, why, > method=="doRead") > File > "/usr/lib/python2.4/site-packages/twisted/internet/posixbase.py", > line 256, in _disconnectSelectable > selectable.connectionLost(failure.Failure(why)) > File > "/usr/lib/python2.4/site-packages/twisted/internet/tcp.py", > line 416, in connectionLost > protocol.connectionLost(reason) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", > line 538, in connectionLost > self.factory.unregisterEngine(self.id) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/enginevanilla.py", > line 1023, in unregisterEngine > return self.service.unregisterEngine(id) > File > "/usr/lib/python2.4/site-packages/ipython1/kernel/controllerservice.py", > line 300, in unregisterEngine > del self.engines[id] > exceptions.KeyError : > > > > On 10/30/06, Brian Granger wrote: > > > > > Can Ipython1 take advantage of multiple core cpus or SMP machines in the > > > same way it does for a networked cluster? It would be nice to have that > > > since quad-core cpu are arriving on the market and SMP machines are > pretty > > > common already. > > > > This was one of the major usage cases we have had in mind. So yes. > > Some details though: > > > > In IPython1 computations are done by a separate process that we are > > calling the IPython engine. Multiple IPython engines can be run on > > multiple systems (distributed memory), a single (multi CPU and/or > > multi-core) system or a combination of the two. I have run two > > IPython engines on my dual core MacBook and seen 2x speedup with > > simple python scripts. I wasn't using MPI, but with MPi you should > > see a similar behavior as long as your algorithm scales linearly to 2 > > instances processes. > > > > I can't wait to use IPython1 with 4/8/16 cores on my workstation :) > > > > Please let us know if you have questions or problems. 
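The `AttributeError: 'module' object has no attribute 'alsoProvides'` in the traceback above matches Brian's diagnosis of a too-old zope.interface (`alsoProvides` appeared around zope.interface 3.x). A generic feature-detection sketch for checking what an installation actually provides — the module/attribute names are just the ones from this thread:

```python
import importlib

def module_has(name, attr):
    """True if module `name` imports cleanly and exposes `attr`."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# ipython1's engine service needs zope.interface.alsoProvides;
# False here means the installed zope.interface is too old (or missing):
print(module_has('zope.interface', 'alsoProvides'))
```

Running this before starting the engines gives a quicker answer than wading through the Twisted traceback.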
> > > > Brian > > > > On 10/30/06, Flavio Coelho wrote: > > > Hi Brian, > > > > > > Can Ipython1 take advantage of multiple core cpus or SMP machines in the > > > same way it does for a networked cluster? It would be nice to have that > > > since quad-core cpus are arriving on the market and SMP machines are pretty > > > common already. > > > > > > Thanks! > > > > > > Flávio > > > > > > On 10/30/06, Brian Granger wrote: > > > > > I also looked at the entry on Ipython1 and it looks really promising. > > > > > Did I understand correctly that Ipython1 is able to run parallel python > > > > > code, regardless of which "MPI for python" package one eventually uses? > > > > In principle yes. The IPython engines have a configuration option > > > > that you use to import the mpi bindings of your choice. But, to work > > > > properly, the import statement must cause MPI_Init() to be called. If > > > > the bindings you are using don't do that, it should be easy to write a > > > > wrapper module that imports and also calls MPI_Init(). Also, some of > > > > this depends on what MPI implementation you are using. The more > > > > modern ones (like openmpi) are really nice and pretty forgiving. The > > > > older ones, though, sometimes require that Python itself (not your > > > > script) call MPI_Init(). This is also very easy to work around. The > > > > bottom line is that we can help you get it to work with whatever mpi > > > > binding and implementation you want. > > > > With that said, I do highly recommend mpi4py and openmpi - we > > > > use these regularly with ipython and it all works well. > > > > Brian > > > > > > > > > Giovanni > > > > > Flavio Coelho wrote: > > > > > > Thank you for this post and many thanks to Fernando Perez, for the > > > > > > awesome tool that Ipython1 is becoming!!! I am beginning to use it > > > > > > today!!
> > > > > > > > > > > > Fl?vio > > > > > > > > > > > > On 10/29/06, * Brian Granger* < ellisonbg.net at gmail.com > > > > > > > wrote: > > > > > > > > > > > > > For object oriented parallel programming in Python on a > single > > > > > > machine I can use techniques such as Bulk Synchronous > Parallelism > > > > > > (BSP) or Message Passing Interface (MPI). There are mentioned > some > > > > > > modules and packages for Python on > > > > > > > http://www.scipy.org/Topical_Software > > > > > > > Article: Parallel and distributed programming. > > > > > > > After reading prerequisites and limitations I thing that the > > > > > > following ones could be good for me: > > > > > > > PyMPI > > > > > > > Pypar > > > > > > > MPI for Python > > > > > > > > > > > > I think mpi4py is the best option. It is a very nice > > > implementation > > > > > > and the developer is working really hard to make it a great > > > package. > > > > > > Also, I don't think pympi and pypar are being developed > actively > > > > > > anymore. > > > > > > > > > > > > Another thing you might check out is IPython's parallel > computing > > > > > > facilities that let you use mpi4py (and other mpi bindings) > > > > > > interactively: > > > > > > > > > > > > > http://ipython.scipy.org/moin/Parallel_Computing > > > > > > > > > > > > I have never used BPS, but am familiar with the model. Can > > > probably > > > > > > get it working interactively with IPython as well. > > > > > > > > > > > > Cheers, > > > > > > > > > > > > Brian > > > > > > > > > > > > > But I can't distinguish which one brings fewer limitations, > is > > > > > > efficient in application and will develop in future. > > > > > > > My favorite is BSP, but I can't find a package for present > SciPy > > > > > > and NumPy. > > > > > > > > > > > > > > Could anybody give me a recommendation, which way I should > go. > > > > > > > > > > > > > > Thanks. 
> > > > > > > Fie Pye > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --------------------------------------- > > > > > > > M?me dal?? spot P.?tvrtn??ka. Pod?vejte se www.neuservis.cz > > > > > > < http://www.neuservis.cz> > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > SciPy-user mailing list > > > > > > > SciPy-user at scipy.org > > > > > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > SciPy-user mailing list > > > > > > SciPy-user at scipy.org > > > > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Fl?vio Code?o Coelho > > > > > > registered Linux user # 386432 > > > > > > --------------------------- > > > > > > "Laws are like sausages. It's better not to see them being made." > > > > > > Otto von Bismark > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > > > > > > > _______________________________________________ > > > > > > SciPy-user mailing list > > > > > > SciPy-user at scipy.org > > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > _______________________________________________ > > > > > SciPy-user mailing list > > > > > SciPy-user at scipy.org > > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > > -- > > > > > > Fl?vio Code?o Coelho > > > registered Linux user # 386432 > > > --------------------------- > > > "Laws are like sausages. It's better not to see them being made." 
> > > Otto von Bismark > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > Fl?vio Code?o Coelho > registered Linux user # 386432 > --------------------------- > "Laws are like sausages. It's better not to see them being made." > Otto von Bismark > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From ckkart at hoc.net Tue Oct 31 05:57:11 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 31 Oct 2006 19:57:11 +0900 Subject: [SciPy-user] packaging with numpy.distutils In-Reply-To: <45466724.7030706@enthought.com> References: <4542B695.8040806@hoc.net> <45466724.7030706@enthought.com> Message-ID: <45472C07.5020202@hoc.net> Bryce Hendrix wrote: > What may be the easiest thing for you to do is to create a script that > is ran as a post-install that just moves the files to where you want > them. You have to add the script to the scripts argument to setup, then > specify the script with --install-script on the command line. Thanks, that's true. But I do not quite like the idea that data will be lying around in site-packages. But it seems there is now other way. Christian From oliphant.travis at ieee.org Tue Oct 31 07:57:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 31 Oct 2006 05:57:37 -0700 Subject: [SciPy-user] Need more comments from scientific community on python-dev Message-ID: <45474841.9030802@ieee.org> I'm recruiting more comments on python-dev regarding my two proposals for improving Python's native ability to share ndarray-like information. 
There is a dearth of scientific-computing and number-crunching-aware people on python-dev. The result is that I sound like a lone voice arguing for something that nobody cares about, when I don't think that is true. Please, please: if you want Python to grow support for the array interface (or something like it), then speak up on python-dev. Even something as simple as "I really see the need for a way to exchange data-format information between two objects sharing the buffer protocol" can be helpful. You can post through the gmane newsgroup interface: gmane.comp.python.devel Find any of the posts on the PEPs I've introduced. Thanks for your help. -Travis From david at ar.media.kyoto-u.ac.jp Tue Oct 31 08:37:31 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 31 Oct 2006 22:37:31 +0900 Subject: [SciPy-user] pyaudio 0.3, with docs ! Message-ID: <4547519B.4040201@ar.media.kyoto-u.ac.jp> Hi, I improved pyaudio last weekend using suggestions from various people on the list and in private, and as I finally got the motivation to set up something that looks like a webpage, there is now documentation with examples showing how to use it.
The API for opening files for writing is much saner, and the setup.py should be smart enough to grab all the information the wrapper needs, including the location of the shared libsndfile: download: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudio/#installation doc + examples: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudio/ I would appreciate hearing reports from non-Linux platforms (Windows, Mac OS X) to confirm that the setup.py works there. Cheers, David From aisaac at american.edu Tue Oct 31 11:14:00 2006 From: aisaac at american.edu (Alan Isaac) Date: Tue, 31 Oct 2006 11:14:00 -0500 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: <45474841.9030802@ieee.org> References: <45474841.9030802@ieee.org> Message-ID: On Tue, 31 Oct 2006, Travis Oliphant wrote: > Please, please. If you want Python to grow support for > the array interface (or something like it), then please > speak up on python-dev. The easiest access to this discussion for me was http://news.gmane.org/group/gmane.comp.python.devel/ I cannot add to this discussion, but I REALLY hope others will help Travis out here. (A few have.) He is fielding a lot of questions, some of which look to me to be from individuals who are ready to have fairly strong opinions without really understanding the "why" of his proposals. The good news is that there seems to be (on my naive reading) some sympathy for what Travis is trying to do. I think more motivating examples would prove helpful in swinging things.
Cheers, Alan Isaac From rshepard at appl-ecosys.com Tue Oct 31 11:41:35 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 31 Oct 2006 08:41:35 -0800 (PST) Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: References: <45474841.9030802@ieee.org> Message-ID: On Tue, 31 Oct 2006, Alan Isaac wrote: > The easiest access to this discussion for me was > http://news.gmane.org/group/gmane.comp.python.devel/ I cannot add to this > discussion, but I REALLY hope others will help Travis out here. (A few > have.) He is fielding a lot of questions, some of which look to me to be > from individuals who are ready to have fairly strong opinions without > really understanding the "why" of his proposals. All this is sufficiently far from my areas of expertise that I cannot contribute anything useful. Otherwise, I'd be happy to lend support. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From fperez.net at gmail.com Tue Oct 31 11:47:57 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 31 Oct 2006 09:47:57 -0700 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: References: <45474841.9030802@ieee.org> Message-ID: On 10/31/06, Rich Shepard wrote: > On Tue, 31 Oct 2006, Alan Isaac wrote: > > > The easiest access to this discussion for me was > > http://news.gmane.org/group/gmane.comp.python.devel/ I cannot add to this > > discussion, but I REALLY hope others will help Travis out here. (A few > > have.) He is fielding a lot of questions, some of which look to me to be > > from individuals who are ready to have fairly strong opinions without > > really understanding the "why" of his proposals. > > All this is sufficiently far from my areas of expertise that I cannot > contribute anything useful. Otherwise, I'd be happy to lend support. 
I worry about the same thing: I really would like to help, but after reading the whole discussion, I realized that the low-level details being asked about and discussed are something I don't know well enough to say anything useful about. And I don't want to simply say 'Hey, Travis is great, listen to him!' to python-dev, since that (aside from looking silly) can be somewhat counter-productive. It seems to me that Travis has already tried hard to organize the high-level overview of the issue for python-dev, and the rest of the discussion is one that only those who have really used these features can contribute to. But Travis obviously needs help on this, and it seems to me that some in the python-dev crowd are just resisting. I was wondering if a way to contribute could be to set up a wiki page for this particular topic, where the salient points of the discussion can be summarized and polished as things evolve. This would also give a reference for each idea as it clarifies, and it could serve to point the hard-to-convince python-devvers somewhere: with a clean summary of the points, both high- and low-level, they might see the whole thing with more clarity. How does that sound, Travis? Is that something you think might help you, especially since so many of us are feeling woefully underqualified to lend a useful hand in the actual discussion on python-dev?
Regards, f From oliphant at ee.byu.edu Tue Oct 31 12:04:48 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 31 Oct 2006 10:04:48 -0700 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: References: <45474841.9030802@ieee.org> Message-ID: <45478230.2050709@ee.byu.edu> Fernando Perez wrote: >On 10/31/06, Rich Shepard wrote: >>On Tue, 31 Oct 2006, Alan Isaac wrote: >>>The easiest access to this discussion for me was >>>http://news.gmane.org/group/gmane.comp.python.devel/ I cannot add to this >>>discussion, but I REALLY hope others will help Travis out here. (A few >>>have.) He is fielding a lot of questions, some of which look to me to be >>>from individuals who are ready to have fairly strong opinions without >>>really understanding the "why" of his proposals. >> All this is sufficiently far from my areas of expertise that I cannot >>contribute anything useful. Otherwise, I'd be happy to lend support. > >I worry about the same thing: I really would like to help, but >after reading the whole discussion, I realized that the low-level >details being asked about and discussed are something I don't know >well enough to say anything useful about. And I don't want to simply say >'Hey, Travis is great, listen to him!' to python-dev, since that (aside >from looking silly) can be somewhat counter-productive. > >How does that sound, Travis? Is that something you think might help >you, especially since so many of us are feeling woefully underqualified to >lend a useful hand in the actual discussion on python-dev? That would be great. I think a couple of things would also be useful. 1) Some way to indicate to python-dev that I'm actually speaking for more than just myself. So, while I agree that just supporting my PEP (which probably in reality needs work) without understanding it is counter-productive, a voice that says "We really do need this kind of functionality" is at least one more voice. 2) Examples of sharing memory between two objects. PIL is the classic example and has some merit, but because the internal memory layout of PIL is 'pointer-to-pointers' instead of 'big-chunk-of-memory', it's not a 1-1 match to NumPy and the array interface can only communicate information about the "mode." But I can see other examples: PyMedia, PyGame, PyVideo? CVXOPT, PyVoxel. All of these seem to define their own objects which are basically just interpretations of chunks of memory. At one time, we might have said "these should all be sub-classes of the ndarray". Now, we are thinking more along the lines of "these should all expose an array interface". The array interface is still bulkier than it needs to be (it has to go through the attribute-lookup process, which can be slow). It would be much better if the extended buffer protocol were available as a function pointer on the type object. If you have an application where you've ever wanted NumPy in the core, see if the extended buffer protocol serves your purposes and, if you agree, voice your approval for the PEP. In my mind, the data-format PEP does not need to go through if there really is a better way to pass data-format information through the buffer protocol. But the extended buffer protocol we *do* need.
-Travis >Regards, > >f >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > From jdhunter at ace.bsd.uchicago.edu Tue Oct 31 12:13:24 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 31 Oct 2006 11:13:24 -0600 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: <45478230.2050709@ee.byu.edu> (Travis Oliphant's message of "Tue, 31 Oct 2006 10:04:48 -0700") References: <45474841.9030802@ieee.org> <45478230.2050709@ee.byu.edu> Message-ID: <87r6wo5ryj.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Travis" == Travis Oliphant writes: Travis> All of these seem to define their own objects which are Travis> basically just interpretations of chunks of memory. At Travis> one time, we might have said "these should all be Travis> sub-classes of the ndarray". Now, we are thinking more What about blitting pixel buffers from mpl or chaco agg into various GUI windows, GTK, Tk, WX, etc.... This seems like a ready made case for the array interface. I could pipe in with an example like this if it would help. JDH From oliphant at ee.byu.edu Tue Oct 31 12:20:35 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 31 Oct 2006 10:20:35 -0700 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: <87r6wo5ryj.fsf@peds-pc311.bsd.uchicago.edu> References: <45474841.9030802@ieee.org> <45478230.2050709@ee.byu.edu> <87r6wo5ryj.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <454785E3.1020505@ee.byu.edu> John Hunter wrote: >>>>>>"Travis" == Travis Oliphant writes: >>>>>> >>>>>> > Travis> All of these seem to define their own objects which are > Travis> basically just interpretations of chunks of memory. At > Travis> one time, we might have said "these should all be > Travis> sub-classes of the ndarray". 
Now, we are thinking more > >What about blitting pixel buffers from mpl or chaco agg into various >GUI windows, GTK, Tk, WX, etc.... This seems like a ready made case >for the array interface. I could pipe in with an example like this if >it would help. > > Somebody mentioned something like this, but another example would help. The thread degenerated into a discussion of how RGBA is often different so you "can't share memory" anyway. I think what is being missed is that you would set up the NumPy array in the correct memory layout for whatever the blitting code you are using expects and then could do processing with arrays and then write them to the screen directly from the NumPy array (rather than going through a string) or using an attribute lookup to use the array interface. So, an example of how you "calculate with NumPy" and then "blit to the screen" the result would be very helpful. -Travis From gpajer at rider.edu Tue Oct 31 13:08:08 2006 From: gpajer at rider.edu (Gary Pajer) Date: Tue, 31 Oct 2006 13:08:08 -0500 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: <87r6wo5ryj.fsf@peds-pc311.bsd.uchicago.edu> References: <45474841.9030802@ieee.org> <45478230.2050709@ee.byu.edu> <87r6wo5ryj.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <45479108.4030704@rider.edu> John Hunter wrote: >>>>>>"Travis" == Travis Oliphant writes: >>>>>> >>>>>> > Travis> All of these seem to define their own objects which are > Travis> basically just interpretations of chunks of memory. At > Travis> one time, we might have said "these should all be > Travis> sub-classes of the ndarray". Now, we are thinking more > >What about blitting pixel buffers from mpl or chaco agg into various >GUI windows, GTK, Tk, WX, etc.... This seems like a ready made case >for the array interface. I could pipe in with an example like this if >it would help. > >JDH > > Does this bear on the issue of fast updating of graphs (stripchart ) ? 
If so ... it's something I could use, and I revisit the problem every now and again. Now happens to be one of those nows. If my voice would help, what would the best way to communicate? -gary >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > From fperez.net at gmail.com Tue Oct 31 13:18:03 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 31 Oct 2006 11:18:03 -0700 Subject: [SciPy-user] [Numpy-discussion] Need more comments from scientific community on python-dev In-Reply-To: <45478230.2050709@ee.byu.edu> References: <45474841.9030802@ieee.org> <45478230.2050709@ee.byu.edu> Message-ID: On 10/31/06, Travis Oliphant wrote: > Fernando Perez wrote: > >I actually worry about the same: I really would like to help, but > >after reading the whole discussion, I realized that the low-level > >details being asked and discussed are something I don't really know > >enough to say anything. And I don't want to sound simply saying 'Hey, > >Travis is great, listen to him!' to python-dev, since that (asides > >from looking silly) can be somewhat counter-productive. > > > > > >How does that sound, Travis? Is that something you think might help > >you, esp. since so many of us are feeling woefully underqualified to > >lend a useful hand in the actual discussion on python-dev? > > > > > > That would be great. I think a couple of things would also be useful. OK, my experience so far has been that there's a certain 'activation barrier' with Wikis, but once pages are there, people for some reason feel more comfortable filling in. So even though it's mostly a place holder, I went ahead and made this: http://www.scipy.org/ArrayInterfacePEP Hopefully as the discussion evolves, this page can be filled in with all the necessary info in one place, and it will become in a few days a solid, organized repository of all the key points in this discussion. 
This will be a LOT easier to refer to in the python-dev battle than randomly scattered bits of emails in the discussion thread. I'll try to spend more time on it over the next few days to fill in, but I'm pretty busy with other things as well, so hopefully others can pitch in as well. Cheers, f ps - one more thing. This guy: http://blog.vrplumber.com/ has been rewriting the OpenGL bindings using ctypes, and I've seen posts from him about numpy (in his blog). He might be able to contribute something... From souheil.inati at nyu.edu Tue Oct 31 13:36:28 2006 From: souheil.inati at nyu.edu (Souheil Inati) Date: Tue, 31 Oct 2006 13:36:28 -0500 Subject: [SciPy-user] scipy 0.5.1 and numpy 1.0 Message-ID: Checking the website here http://www.scipy.org/Download, it is unclear if the latest stable release of SciPy (0.5.1) is compatible with the latest stable release of (NumPy 1.0). Could someone please clarify this for me. Thanks, Souheil From excellent.frizou at gmail.com Tue Oct 31 14:37:36 2006 From: excellent.frizou at gmail.com (Sam frizou) Date: Tue, 31 Oct 2006 20:37:36 +0100 Subject: [SciPy-user] csr_matrix Message-ID: <1d2f6e8c0610311137r58329fb6x4e8967e6ffe8783d@mail.gmail.com> Hi, Is it possible to get the set of indices of non null elements of a sparse matrix M ? I means I can see them by doing "print M", but I want to use them, so I would like something like a list of (i,j). Thanks. -- On obtient beaucoup plus avec un mot gentil et une arme qu'avec seulement un mot gentil - "Al Capone". From ckkart at hoc.net Tue Oct 31 18:12:44 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 01 Nov 2006 08:12:44 +0900 Subject: [SciPy-user] scipy 0.5.1 and numpy 1.0 In-Reply-To: References: Message-ID: <4547D86C.6090400@hoc.net> Souheil Inati wrote: > Checking the website here http://www.scipy.org/Download, it is > unclear if the latest stable release of SciPy (0.5.1) is compatible > with the latest stable release of (NumPy 1.0). 
Could someone please > clarify this for me. I think the last numpy which works with scipy 0.5.1 is 1.0rc2. At least it works here. Christian