From fperez.net at gmail.com Sat Mar 1 02:45:28 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 29 Feb 2008 23:45:28 -0800
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1065457655.20080228095700@xs4all.nl>
References: <1065457655.20080228095700@xs4all.nl>
Message-ID:

On Thu, Feb 28, 2008 at 12:57 AM, wrote:
> I've spent some time on looking at various packages and frameworks
> like Traits, Chaco and Envisage, but I just can't seem to wrap my head
> around them.

From your description, those three (esp. the first two) are probably
your best bets. Have you tried playing with the examples and asking
specific questions on the enthought-dev list? The developers there are
typically very responsive to specific queries.

You may also want to have a look at Vision:

http://mgltools.scripps.edu/packages/vision/overview

though I don't know if it has 2-d plotting (it does have fancy OpenGL
3d features).

Cheers,

f

From dmitrey.kroshko at scipy.org Sat Mar 1 03:04:50 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 01 Mar 2008 10:04:50 +0200
Subject: [SciPy-user] nonnegative least squares (NNLS, lsqnonneg) in Python
In-Reply-To: <1D37B5D0C584B04B902222300E9B2B1FB467A3@USMLVV1EXCTV06.ww005.siemens.net>
References: <1D37B5D0C584B04B902222300E9B2B1FB467A3@USMLVV1EXCTV06.ww005.siemens.net>
Message-ID: <47C90E22.1020106@scipy.org>

I have searched for free constrained LLS solvers for some of my own
purposes - connecting them to scikits.openopt and using them for
solving ralg/lincher subproblems. NNLS and WNNLS are BVLS predecessors.
See here for the BVLS license change from free-for-non-commercial to
GPL (answer by Alan G Isaac):

http://comments.gmane.org/gmane.comp.python.scientific.devel/7431

I intend to connect the routine to scikits.openopt during the next 1-2
days. Unfortunately, the BVLS routine seems to be intended for dense
problems only (and that's very bad for my ralg/lincher purposes).

There are other free routines that could be considered - toms/587 (I
don't know whether it handles dense problems only or sparse ones as
well) and BCLS. The latter is GPL (written in ANSI C) and is capable of
handling sparse problems; moreover, it allows an implicit A matrix via
user-defined funcs for Ax and A^T x. BCLS consists of lots of files; it
has a convenient MATLAB API (a single standalone func each for implicit
and explicit matrix A), but calling it through the C API is very
inconvenient - one function is not enough - so I can't connect it to
Python via ctypes; the task is too complicated.

D.

Jian, Bing (MED US) wrote:
> Hi,
> I am wondering if there is a non-negative least squares solver
> in scipy/numpy
> which is equivalent to the lsqnonneg() in MATLAB? If not, then
> probably I need
> to write my own extensions based on existing C code. Thanks!
>
> Bing
>

From bryan at cole.uklinux.net Sat Mar 1 03:09:48 2008
From: bryan at cole.uklinux.net (Bryan Cole)
Date: Sat, 01 Mar 2008 08:09:48 +0000
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1065457655.20080228095700@xs4all.nl>
References: <1065457655.20080228095700@xs4all.nl>
Message-ID: <1204358988.2639.17.camel@pc1.cole.uklinux.net>

Hi,

I'm an ex-LabView user, now using python for lab data-acquisition and
analysis for some years now. It's well worth the switch.

Your best bet is

http://pyqwt.sourceforge.net/

You're right, matplotlib/chaco are too slow for "interactive real-time"
type work. The strengths of these packages are their
presentation-quality output (vector and anti-aliased bitmap graphics).
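(For readers hitting the same wall: the usual way to make matplotlib
tolerable for live updates is the blitting idiom - draw the static
background once, then redraw only the changed curve. A minimal generic
sketch, not the home-made widget mentioned below, assuming an
interactive GUI backend:

import numpy as np
import matplotlib.pyplot as plt

plt.ion()                                  # needs an interactive GUI backend
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.linspace(0, 2 * np.pi, 100000)      # a large trace
line, = ax.plot(x, np.sin(x), animated=True)
fig.canvas.draw()
background = fig.canvas.copy_from_bbox(ax.bbox)   # cache the static artists

for phase in np.linspace(0, 10, 200):
    line.set_ydata(np.sin(x + phase))
    fig.canvas.restore_region(background)  # repaint the cached background
    ax.draw_artist(line)                   # re-render only the curve
    fig.canvas.blit(ax.bbox)               # push the changed region to screen
    fig.canvas.flush_events()

Even with blitting, a multi-MB trace usually also needs decimation -
plotting only the min/max per screen column - to stay responsive.)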
For data-acquisition applications, I prefer wxPython to Qt (I may move
to something based on Traits/ETS once I figure out how to install it).
For GUIs I have a home-made plotting widget which is fast enough for
real-time work. Unfortunately, I'm not yet in a position to post it
publicly. If you need *really* high performance, it's not too hard to
write a plot-widget using PyOpenGL directly.

Bryan

On Thu, 2008-02-28 at 09:57 +0100, scipy-user at onnodb.com wrote:
> Hi all,
>
> Using LabVIEW software for our data analysis at the moment, I'm
> currently looking for alternatives. Especially since LabVIEW's
> "graphical" programming language is somewhat cumbersome for some of
> the things we're doing --- an iterative language would often be much
> easier. (Actually, I personally don't like the graphical way of
> programming at all :) )
>
> Python seems to be a great alternative, although I haven't been able
> yet to get things up & running the way I'd like. The main 'problem' is
> that LabVIEW contains a lot of high-performance library code for
> plotting data. I've been experimenting with SciPy and matplotlib, but
> those libraries are just *way* slower than LabVIEW when plotting large
> data sets (in our case, it's a current trace with a few MBs of data).
> I'd like to plot a current trace, so that the user can quickly zoom in
> & out, and pan using a horizontal scroll bar, but how should I do
> this? (I've looked around for examples a bit, but being a newbie, it
> can be hard to find your way around such a huge community).
>
> Another issue appears to be the creation of simple user interfaces.
> This is very intuitive in LabVIEW, but could someone here give some
> advice on a way to combine a UI and plot windows in a not-so-difficult
> to learn way in Python? What's your own experience?
>
> I've spent some time on looking at various packages and frameworks
> like Traits, Chaco and Envisage, but I just can't seem to wrap my head
> around them.
>
> I'm looking forward to any help; thank you very much in advance!
>
> Best regards,
>

From dmitrey.kroshko at scipy.org Sat Mar 1 03:17:19 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 01 Mar 2008 10:17:19 +0200
Subject: [SciPy-user] minpack.error / fsolve problem
In-Reply-To:
References:
Message-ID: <47C9110F.4040807@scipy.org>

For me it yields "name LcUtil is not defined" (line 48). Also, there
were some indentation problems near the try-except block, maybe due to
mixed tabs and spaces in the attached file.

Webb Sprague wrote:
> I am having a problem with convergence (I think) for an optimization.
>
> Every so often I do a non-linear fit of a parameter using fsolve (as
> input a constant vector of base death rates, a constant vector
> multiplier of those, and a variable scalar multiplier -- the last is
> what I am trying to fit) and I get a "minpack.error" with the message
> that "Error occured while calling the Python function named f" with
> no other information. The values normally returned from fsolve
> (ier, message, infodict) are all set to None.
>
I would recommend you try using another solver, for example nssolve
from scikits.openopt.
> I am kind of at a loss for how to proceed, at least without taking a
> class on optimization algorithms. How do I get more information from
> fsolve? Is there a better optimization function to use?
> optimize.golden()? Or ... ?
>
Do you know exactly what you need? To solve a system of non-linear
equations via fsolve, or to minimize a function?
D.
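(Since the f() Webb describes is monotonic in a single scalar, a
bracketing scalar root-finder is usually a more robust fit than the
multidimensional fsolve. A minimal sketch - f, e_0 and the bracket
endpoints stand in for his life-table code and are not from the thread:

from scipy import optimize

def g(kt):
    # f and e_0 are assumed to be supplied by the surrounding model code
    return f(kt) - e_0

# brentq needs a bracket [a, b] with g(a) and g(b) of opposite sign;
# monotonicity makes the bracket easy to find and the root unique.
kt_star = optimize.brentq(g, -50.0, 50.0, xtol=1e-12)
)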
From s.mientki at ru.nl Sat Mar 1 05:12:25 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Sat, 01 Mar 2008 11:12:25 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204358988.2639.17.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net>
Message-ID: <47C92C09.2060704@ru.nl>

hi Bryan,

unfortunately you're not able to post your code
(btw why not?),
but maybe you can answer a few questions ....

Bryan Cole wrote:
> Hi,
>
> I'm an ex-LabView user, now using python for lab data-acquisition and
> analysis for some years now. It's well worth the switch.
>
I too am a former MatLab / LabView user,
and I'm quite happy with Python / Scipy / wxPython for now.
At the moment I'm trying to write an open source LabView equivalent,
first results can be seen here:

http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html

> Your best bet is
> http://pyqwt.sourceforge.net/
>
> You're right, matplotlib/chaco are too slow for "interactive real-time"
> type work. The strengths of these packages are their
> presentation-quality output (vector and anti-aliased bitmap graphics).
>
> For data-acquisition applications, I prefer wxPython to Qt (I may move
> to something based on Traits/ETS once I figure out how to install it).
> For GUIs I have a home-made plotting widget which is fast enough for
> real-time work. Unfortunately, I'm not yet in a position to post it
> publicly. If you need *really* high performance, it's not too hard to
> write a plot-widget using PyOpenGL directly.
>
I'm in the middle of writing a real time plot widget, based on direct
canvas drawing; I don't know yet how fast it is. I hope to make a video
of it next week.

But I wonder if you've a good data-acquisition module. I've one now (it
can use a Soundcard, some NI-modules and several dedicated DAQ cards),
but it's in fact a windows executable that I can control from Python.
And of course I'm looking for a more OS-independent solution.

cheers,
Stef

From J.Anderson at hull.ac.uk Sat Mar 1 10:12:22 2008
From: J.Anderson at hull.ac.uk (Joseph Anderson)
Date: Sat, 1 Mar 2008 15:12:22 -0000
Subject: [SciPy-user] Newbie help for installing pysamplerate
References: <47C43DFE0200002A0000060C@KAMILLA.rrze.uni-erlangen.de><91cf711d0802260923x63498b1apd5983f0d5ae0c20@mail.gmail.com>
Message-ID:

Hello All,

Most likely this is really just a question for David Cournapeau. . .

Am having a bit of trouble apparently resulting from being a python
newbie. I have numpy, scipy, and pyaudiolab up and going, but am having
trouble getting pysamplerate to happen.

In attempting to install pysamplerate, I have run pysamplerate's
setup.py in a python interpreter, choosing task [2], the install option.
That does the following:

samplerate_info:
  libraries samplerate not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  FOUND:
    libraries = ['samplerate']
    library_dirs = ['/usr/local/lib']
    fulllibloc = /usr/local/lib/libsamplerate.so.0
    fullheadloc = /usr/local/include/samplerate.h
    include_dirs = ['/usr/local/include']

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_py
copying pysamplerate.py -> build/lib/pysamplerate
running install_lib
creating /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/__init__.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/generate_const.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/header_parser.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/info.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/pysamplerate.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/setup.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/__init__.py to __init__.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/generate_const.py to generate_const.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/header_parser.py to header_parser.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/info.py to info.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/pysamplerate.py to pysamplerate.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/setup.py to setup.pyc
running install_egg_info
Writing /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate-0.1-py2.5.egg-info

****************************************

The latest version of libsamplerate has been installed in
/usr/local/lib/, with ls libsamplerate* listing the following:

libsamplerate.0.1.1.dylib
libsamplerate.dylib
libsamplerate.0.dylib
libsamplerate.la
libsamplerate.a

Starting a new python interpreter, attempting to import pysamplerate, I get:

>>> import pysamplerate
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pysamplerate.py", line 23, in <module>
    _src = cdll.LoadLibrary('/usr/local/lib/libsamplerate.so.0')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 423, in LoadLibrary
    return self._dlltype(name)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 340, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen(/usr/local/lib/libsamplerate.so.0, 6): image not found

****************************************

What I see is that libsamplerate.so.0 is missing from the
/usr/local/lib directory.
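(Worth noting: the ls listing above shows only Mac OS X .dylib files,
while the loader line in the traceback hard-codes the Linux soname
libsamplerate.so.0 - which matches the "image not found" error. A
hedged sketch of a more portable lookup, not code from pysamplerate
itself:

import ctypes
import ctypes.util

# find_library resolves the platform-specific name (.so on Linux,
# .dylib on OS X), searching the standard library directories.
libname = ctypes.util.find_library('samplerate')
if libname is None:
    # fall back to the name actually installed under /usr/local/lib
    libname = '/usr/local/lib/libsamplerate.0.dylib'
_src = ctypes.cdll.LoadLibrary(libname)
)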
Is this a file that should be created by pysamplerate's setup.py?

Anyway, I'm doing something wrong. I'm sure it is rather simple.

Thanks for the help.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From aisaac at american.edu Sat Mar 1 11:00:57 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 1 Mar 2008 11:00:57 -0500
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47C92C09.2060704@ru.nl>
References: <1065457655.20080228095700@xs4all.nl><1204358988.2639.17.camel@pc1.cole.uklinux.net><47C92C09.2060704@ru.nl>
Message-ID:

On Sat, 01 Mar 2008, Stef Mientki apparently wrote:
> I too am a former MatLab / LabView user,
> and I'm quite happy with Python / Scipy / wxPython for now.
> At the moment I'm trying to write an open source LabView equivalent,
> first results can be seen here:
> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html

Looks promising.
Please post updates as the project moves along.

Thank you,
Alan Isaac

From webb.sprague at gmail.com Sat Mar 1 12:52:22 2008
From: webb.sprague at gmail.com (Webb Sprague)
Date: Sat, 1 Mar 2008 09:52:22 -0800
Subject: [SciPy-user] minpack.error / fsolve problem
In-Reply-To: <47C9110F.4040807@scipy.org>
References: <47C9110F.4040807@scipy.org>
Message-ID:

Thanks for the response, dmitrey!

On Sat, Mar 1, 2008 at 12:17 AM, dmitrey wrote:
> For me it yields "name LcUtil is not defined" (line 48). Also, there
> were some indentation problems near the try-except block, maybe due to
> mixed tabs and spaces in the attached file.

Sorry but there is a lot of infrastructure you don't have, so it won't
run as is. Plus it was cut and paste, etc. But it works 98% of the time
in its context.

> I would recommend you try using another solver, for example nssolve
> from scikits.openopt.

Sounds reasonable, but why that one?

> Do you know exactly what you need? To solve a system of non-linear
> equations via fsolve, or to minimize a function?
> D.

I need to fit a scalar (kt in the code) such that f(kt) = e_0 (a scalar
which is given from somewhere else). f() involves a bunch of stuff (see
the code, but don't bother trying to run it), but it is monotonic. I
think of this as finding kt such that f(kt) - e_0 = 0, so I used a root
solver. But I hardly care so long as it works. Is there a best function
for this?

I am totally unfamiliar with optimization theory, besides one
assignment on Newton's method in first semester calculus, and some
handwaving about MLE's.

From silva at lma.cnrs-mrs.fr Sat Mar 1 15:22:17 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sat, 01 Mar 2008 21:22:17 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
Message-ID: <1204402937.10354.3.camel@localhost.localdomain>

On Tuesday 08 January 2008 at
21:16 +0100, Jasper Stolte wrote:
> Hi guys,
> I'm new to the list, nice to meet you all. Anyway my question is: Is
> there already someone developing some sort of Control Systems
> Toolbox / Robust Control Toolbox equivalent for SciPy? I would love to
> see it added, and I am thinking of building something from scratch.
> Obviously that wouldn't make much sense if other people are already
> working on something similar.

Hi,
Looking for control stuff in scipy, I've read the previous discussion
you had with Ryan and Jeff and found the beginning of a project you put
on Google Code.
Are you interested in some help for developing some control system
features? I'm willing to python-ize some of the Octave Control Systems
Toolbox:
http://enacit1.epfl.ch/cours_matlab/octave-manual/octave_30.html
Are you ok?
--
Fabricio

From bryan at cole.uklinux.net Sat Mar 1 16:06:17 2008
From: bryan at cole.uklinux.net (Bryan Cole)
Date: Sat, 01 Mar 2008 21:06:17 +0000
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47C92C09.2060704@ru.nl>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl>
Message-ID: <1204405576.2639.73.camel@pc1.cole.uklinux.net>

Hi Stef,

> unfortunately you're not able to post your code
> (btw why not?)

Although I started developing the plot-widget in my own time (I did
publish this first version on my home web-site, now defunct due to lack
of time), development continued at my workplace over 2 years or more
now, so now the code ownership is ambiguous. The best plan is for me to
get permission to release the code. Unfortunately, upper management are
a bit suspicious of OSS at my workplace. I'm working on this...

> but maybe you can answer a few questions ....
>
> Bryan Cole wrote:
> > Hi,
> >
> > I'm an ex-LabView user, now using python for lab data-acquisition and
> > analysis for some years now. It's well worth the switch.
> >
> I too am a former MatLab / LabView user,
> and I'm quite happy with Python / Scipy / wxPython for now.
> At the moment I'm trying to write an open source LabView equivalent,
> first results can be seen here:
>
> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> > Your best bet is

Nice work. I must confess, I'm not a huge fan of the "graphical
programming" concept (otherwise I'd probably still be using LabView).
My wish would be a nice set of technical widgets (plots, knobs,
sliders, gauges etc.) which are 1) documented and 2) integrated with a
wxPython GUI-designer like wxGlade. In fact, a "technical edition" of
wxGlade with such things integrated would make creating
data-acquisition applications easy for the newcomer without
compromising flexibility.

> I'm in the middle of writing a real time plot widget, based on direct
> canvas drawing;
> I don't know yet how fast it is. I hope to make a video of it next week.

I'll look forward to checking it out.

> But I wonder if you've a good data-acquisition module.
> I've one now (it can use a Soundcard, some NI-modules and several
> dedicated DAQ cards),
> but it's in fact a windows executable that I can control from Python.

Our data-acquisition modules are designed around the specific
data/hardware we work with and hence are not particularly generic. I
tend to write wrappers for C-based device drivers as required. We have
a near-complete SWIG-generated wrapper-set for the (now obsolete)
NI-DAQ drivers.
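(A hedged sketch of the hand-written ctypes style that comes up in the
next paragraph - every name in it, the library and the daq_read
function, is hypothetical, not a real NI-DAQ or Comedi call:

import ctypes
import numpy as np

daq = ctypes.CDLL("libexampledaq.so")    # hypothetical driver library

# Declare the C prototype once so ctypes can marshal the call:
#   int daq_read(int channel, double *buf, unsigned int nsamples);
daq.daq_read.argtypes = [ctypes.c_int,
                         ctypes.POINTER(ctypes.c_double),
                         ctypes.c_uint]
daq.daq_read.restype = ctypes.c_int

def read_samples(channel, nsamples):
    buf = np.empty(nsamples, dtype=np.float64)
    ptr = buf.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    if daq.daq_read(channel, ptr, nsamples) != 0:
        raise IOError("daq_read failed")
    return buf
)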
Comedi (the linux DAQ-card drivers) already come with python wrappers.
I couldn't get SWIG-wrappers for NI-DAQmx (the new driver API from NI)
to build, but now use a ctypes interface for the bits of the API we
need. In fact, we've just migrated all our main driver-wrappers to
ctypes. The benefit of not having to recompile C-code for each python
version / platform outweighs the disadvantage of having to
write/maintain the wrappers manually (as compared to auto-generation
from header-files). I'm always happy to discuss this topic, so if
anyone wants more details drop me an email.

The NI-DAQmx drivers probably represent as good a data-acquisition API
as you'll get. It's pretty much a 1:1 mapping to LabView nodes. The
task-based approach is easy to work with, although the NI documentation
is far from perfect. You could make NI-DAQmx task nodes for your
pylab_works framework. If you need cross-platform, re-implement the
required functionality on linux with Comedi.

The main recommendation I would make to anyone writing data-acquisition
stuff in python is: use Traits/TraitsUI! The ability to auto-generate a
GUI to configure hardware objects based on their Traits definitions is
a *huge* productivity saving.

> And of course I'm looking for a more OS-independent solution.

Amen.

cheers,
Bryan

> cheers,
> Stef

From oliphant at enthought.com Sat Mar 1 22:13:00 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Sat, 01 Mar 2008 21:13:00 -0600
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204358988.2639.17.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net>
Message-ID: <47CA1B3C.4010402@enthought.com>

Bryan Cole wrote:
> Hi,
>
> I'm an ex-LabView user, now using python for lab data-acquisition and
> analysis for some years now. It's well worth the switch.
>
> Your best bet is
> http://pyqwt.sourceforge.net/
>
> You're right, matplotlib/chaco are too slow for "interactive real-time"
> type work. The strengths of these packages are their
> presentation-quality output (vector and anti-aliased bitmap graphics).
>
Are you sure you meant that *chaco* is too slow? The stated difference
of chaco with matplotlib is interactive plotting, and I know a bit of
effort goes into making it fast.

I'm curious what problems you tried chaco on for which it was too slow.

-Travis O.

From dwf at cs.toronto.edu Sat Mar 1 23:12:50 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sat, 1 Mar 2008 23:12:50 -0500
Subject: [SciPy-user] Maximally distinguishable colour map
Message-ID: <28B8FA49-ACA9-4EE2-872F-21B53EEBA2FC@cs.toronto.edu>

Hi folks,

This isn't so much a SciPy question as it is a matplotlib question (or
even a general "scientific computing" question), but I figure someone
else has got to have run across the problem.

Essentially I'm trying to choose a set of colours to display a graph
plot that maximizes their visual differentiability. Basically, I'm
going to be producing a graph with a lot of edges (in fact, a
multigraph, i.e. more than one edge between a pair of nodes) and it
gets pretty hard to distinguish colours that are very "close" together.

Can anyone perhaps point me at a colour map that maximizes the visual
difference between consecutive colours, or at least a way of
constructing one?
The method I have now works fairly well (every 43rd colour in the HSV
map in matplotlib) but it unfortunately includes yellow early in the
sequence (I should mention this is going to be on a white background).

Thanks,

David

From bing.jian at gmail.com Sun Mar 2 02:15:42 2008
From: bing.jian at gmail.com (Bing)
Date: Sun, 2 Mar 2008 02:15:42 -0500
Subject: [SciPy-user] Lawson-Hanson's Non-Negative Least Squares (NNLS) algorithm
Message-ID:

Hi Dmitrey,

Thanks for your response to my previous email. I just handcoded a
Python extension of Lawson-Hanson's Non-Negative Least Squares (NNLS)
algorithm using C++, based on the following source code:
http://www.cs.utexas.edu/~suvrit/work/progs/nnls.html
It seems to be working fine for me. The extension is straightforward;
please feel free to email me (bing.jian at gmail.com) if you are
interested in it. Thanks!

Bing

From robert.kern at gmail.com Sun Mar 2 04:44:18 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 2 Mar 2008 03:44:18 -0600
Subject: [SciPy-user] Maximally distinguishable colour map
In-Reply-To: <28B8FA49-ACA9-4EE2-872F-21B53EEBA2FC@cs.toronto.edu>
References: <28B8FA49-ACA9-4EE2-872F-21B53EEBA2FC@cs.toronto.edu>
Message-ID: <3d375d730803020144u3b3cc68djb04d814b8fd1ccbb@mail.gmail.com>

On Sat, Mar 1, 2008 at 10:12 PM, David Warde-Farley wrote:
> Hi folks,
>
> This isn't so much a SciPy question as it is a matplotlib question (or
> even a general "scientific computing" question), but I figure someone
> else has got to have run across the problem.
>
> Essentially I'm trying to choose a set of colours to display a graph
> plot that maximizes their visual differentiability. Basically, I'm
> going to be producing a graph with a lot of edges (in fact, a
> multigraph, i.e. more than one edge between a pair of nodes) and it
> gets pretty hard to distinguish colours that are very "close" together.
>
> Can anyone perhaps point me at a colour map that maximizes the visual
> difference between consecutive colours, or at least a way of
> constructing one? The method I have now works fairly well (every 43rd
> colour in the HSV map in matplotlib) but it unfortunately includes
> yellow early in the sequence (I should mention this is going to be on
> a white background).

The qualitative color palettes on the ColorBrewer are fairly rigorously
designed. Just avoid the pastel palettes.
Note that human vision can only reliably distinguish about 7 hues in
the same scene. Their 'Dark' palette is probably the most suitable.

http://www.colorbrewer.org

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From dwf at cs.toronto.edu Sun Mar 2 04:55:46 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 2 Mar 2008 04:55:46 -0500
Subject: [SciPy-user] Maximally distinguishable colour map
In-Reply-To: <3d375d730803020144u3b3cc68djb04d814b8fd1ccbb@mail.gmail.com>
References: <28B8FA49-ACA9-4EE2-872F-21B53EEBA2FC@cs.toronto.edu> <3d375d730803020144u3b3cc68djb04d814b8fd1ccbb@mail.gmail.com>
Message-ID:

On 2-Mar-08, at 4:44 AM, Robert Kern wrote:
> On Sat, Mar 1, 2008 at 10:12 PM, David Warde-Farley wrote:
>
> The qualitative color palettes on the ColorBrewer are fairly
> rigorously designed. Just avoid the pastel palettes. Note that human
> vision can only reliably distinguish about 7 hues in the same scene.
> Their 'Dark' palette is probably the most suitable.

Thanks Robert. I suspected there was an upper limit like that;
consequently I'm currently doing every 43rd when there are fewer than
8, and an even spacing with less difference between them otherwise
(our system prefers sparse solutions, and so a user would have to
force a non-sparse situation like this, in which case they can't
reasonably expect to make sense of the colours anyway).

It seems as though none of the qualitative palettes are certifiably
colourblindness-friendly, but I guess this comes with the territory.

David

From robert.kern at gmail.com Sun Mar 2 05:15:07 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 2 Mar 2008 04:15:07 -0600
Subject: [SciPy-user] Maximally distinguishable colour map
In-Reply-To:
References: <28B8FA49-ACA9-4EE2-872F-21B53EEBA2FC@cs.toronto.edu> <3d375d730803020144u3b3cc68djb04d814b8fd1ccbb@mail.gmail.com>
Message-ID: <3d375d730803020215h59dd84e8h974ba902aa0df81c@mail.gmail.com>

On Sun, Mar 2, 2008 at 3:55 AM, David Warde-Farley wrote:
> On 2-Mar-08, at 4:44 AM, Robert Kern wrote:
>
> > The qualitative color palettes on the ColorBrewer are fairly
> > rigorously designed. Just avoid the pastel palettes. Note that human
> > vision can only reliably distinguish about 7 hues in the same scene.
> > Their 'Dark' palette is probably the most suitable.
>
> Thanks Robert. I suspected there was an upper limit like that;
> consequently I'm currently doing every 43rd when there are fewer than
> 8, and an even spacing with less difference between them otherwise
> (our system prefers sparse solutions, and so a user would have to
> force a non-sparse situation like this, in which case they can't
> reasonably expect to make sense of the colours anyway).
>
> It seems as though none of the qualitative palettes are certifiably
> colourblindness-friendly, but I guess this comes with the territory.

Pretty much. The common red-green colorblindness essentially compresses
the normal 3D colorspace into a 2D plane. That removes a ton of the hue
variation that you are trying to utilize with a qualitative colormap.

However, I am color-deficient (i.e., mostly, but not entirely,
red-green colorblind), and I find the Dark palette mostly acceptable.
The pastel palettes are inscrutable to me.

If you want to see for yourself, here is a free tool for Windows, OS X,
and Linux to simulate colorblindness for your entire screen:

http://colororacle.cartography.ch/

The paper they have there, "Color Design for the Color Vision
Impaired," contains pretty useful advice.

http://colororacle.cartography.ch/design.html

Thank you for caring. :-)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From gael.varoquaux at normalesup.org Sun Mar 2 06:23:55 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Mar 2008 12:23:55 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1065457655.20080228095700@xs4all.nl>
References: <1065457655.20080228095700@xs4all.nl>
Message-ID: <20080302112355.GB14294@phare.normalesup.org>

On Thu, Feb 28, 2008 at 09:57:00AM +0100, scipy-user at onnodb.com wrote:
> Another issue appears to be the creation of simple user interfaces.
> This is very intuitive in LabVIEW, but could someone here give some
> advice on a way to combine a UI and plot windows in a not-so-difficult
> to learn way in Python? What's your own experience?

Traits/TraitsUI are absolutely great for this. I have used them in a
way very similar to what you are doing. You can find notes that should
help you learn how to create UIs easily starting here:
http://gael-varoquaux.info/computers/traits_tutorial/index.html

You probably want to use Chaco, rather than MPL, for plotting speed
reasons. Hopefully these tutorials can get you on your way:
https://svn.enthought.com/enthought/wiki/Tutorials/SimpleChaco2Plot
https://svn.enthought.com/enthought/wiki/Tutorials/SimpleEngrTraitsAppWithChaco2Plot

I suggest you do not try to start with Envisage, as it is overkill for
your current needs, can be added later, and is a bit harder to learn.

Hope this helps,

Gaël

From gael.varoquaux at normalesup.org Sun Mar 2 08:05:44 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Mar 2008 14:05:44 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <1204402937.10354.3.camel@localhost.localdomain>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain>
Message-ID: <20080302130544.GE14294@phare.normalesup.org>

On Sat, Mar 01, 2008 at 09:22:17PM +0100, Fabrice Silva wrote:
> Are you interested in some help for developing some control system
> features? I'm willing to python-ize some of the Octave Control Systems
> Toolbox
> http://enacit1.epfl.ch/cours_matlab/octave-manual/octave_30.html

I haven't been following this very closely, but can you remind me if we
are talking about a scikit which could be released under the GPL, or
something that would be released under a BSD license. The reason I ask
this question is that Octave is GPL, and to write BSD-licensed code
inspired by their code I think you would need either to do some
clean-room engineering, or ask for special permission from the authors.
Don't trust my opinion too much, I am no license expert, but I just
wanted to point out this eventual snag.

Cheers,

Gaël

From s.mientki at ru.nl Sun Mar 2 12:12:06 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Sun, 02 Mar 2008 18:12:06 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To:
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl>
Message-ID: <47CADFE6.6060805@ru.nl>

Alan G Isaac wrote:
> On Sat, 01 Mar 2008, Stef Mientki apparently wrote:
>
>> I too am a former MatLab / LabView user,
>> and I'm quite happy with Python / Scipy / wxPython for now.
>> At the moment I'm trying to write an open source LabView equivalent,
>> first results can be seen here:
>> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
>
> Looks promising.
> Please post updates as the project moves along.
>
Of course I will.

cheers,
Stef

From osman at fuse.net Sun Mar 2 12:20:42 2008
From: osman at fuse.net (osman)
Date: Sun, 02 Mar 2008 12:20:42 -0500
Subject: [SciPy-user] gutsy amd64
Message-ID: <1204478442.14422.23.camel@stargate.org>

Hi,

I have Ubuntu 7.10 64-bit on an AMD64 machine. The usual apt-get
install will not install both scipy and libumfpack. One removes the
other. Is there a fix for this? I have also downloaded the latest svn
but it does not build.
Errors like:

build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: expected expression before ')' token
build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: too few arguments to function 'SWIG_Python_NewPointerObj'
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes -fPIC -DSCIPY_UMFPACK_H -DSCIPY_AMD_H -DNO_ATLAS_INFO=2 -I/usr/local/include -I/usr/include -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -o build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.o" failed with exit status 1
stargate:/home/osman/SCIPY/scipy-bash->

my swig is:

stargate:/home/osman/SCIPY/scipy-bash-> swig -version
SWIG Version 1.3.31
Compiled with g++ [x86_64-unknown-linux-gnu]
Please see http://www.swig.org for reporting bugs and further information
stargate:/home/osman/SCIPY/scipy-bash->

any help is much appreciated.

TIA
-osman

From s.mientki at ru.nl Sun Mar 2 12:24:12 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Sun, 02 Mar 2008 18:24:12 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204405576.2639.73.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net>
Message-ID: <47CAE2BC.2040800@ru.nl>

Bryan Cole wrote:
> Hi Stef,
>
>> unfortunately you're not able to post your code
>> (btw why not?)
>
> Although I started developing the plot-widget in my own time (I did
> publish this first version on my home web-site, now defunct due to lack
> of time), development continued at my workplace over 2 years or more
> now, so now the code ownership is ambiguous. The best plan is for me to
> get permission to release the code. Unfortunately, upper management are
> a bit suspicious of OSS at my workplace. I'm working on this...
>
>> but maybe you can answer a few questions ....
>>
>> Bryan Cole wrote:
>>> Hi,
>>>
>>> I'm an ex-LabView user, now using python for lab data-acquisition and
>>> analysis for some years now. It's well worth the switch.
>>>
>> I too am a former MatLab / LabView user,
>> and I'm quite happy with Python / Scipy / wxPython for now.
>> At the moment I'm trying to write an open source LabView equivalent,
>> first results can be seen here:
>>
>> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
>>> Your best bet is
>
> Nice work. I must confess, I'm not a huge fan of the "graphical
> programming" concept (otherwise I'd probably still be using LabView).

I don't think LabView and Visual Programming are identical. In my
humble opinion, LabView violates some very basic rules, like flatness
of information and uniformity / simplicity.

> My wish would be a nice set of technical widgets (plots, knobs,
> sliders, gauges etc.) which are 1) documented and 2) integrated with a
> wxPython GUI-designer like wxGlade. In fact, a "technical edition" of
> wxGlade with such things integrated would make creating
> data-acquisition applications easy for the newcomer without
> compromising flexibility.

then you're lucky, ... ...
I tried to get wxGlade running on 2 different machines; both failed ;-)

>>> I'm in the middle of writing a real time plot widget, based on direct
>>> canvas drawing;
>>> I don't know yet how fast it is. I hope to make a video of it next week.
>
> The NI-DAQmx drivers probably represent as good a data-acquisition API
> as you'll get. It's pretty much a 1:1 mapping to LabView nodes. The
> task-based approach is easy to work with, although the NI documentation
> is far from perfect. You could make NI-DAQmx task nodes for your
> pylab_works framework. If you need cross-platform, re-implement the
> required functionality on linux with Comedi.

NI-DAQmx would be a good standard, and I think I've even seen a Python
wrapper for it, but I can't find it anymore. In fact the windows
program I use in PyLab_Works also contains a NI-DAQmx wrapper.

> The main recommendation I would make to anyone writing data-acquisition
> stuff in python is: use Traits/TraitsUI! The ability to auto-generate a
> GUI to configure hardware objects based on their Traits definitions is
> a *huge* productivity saving.

You might be quite right;
I've heard this reasoning more than once,
but ....
... I'm looking at the wrong documents
or
... I'm simply too stupid
or
... I'm a completely spoiled windows user
but I really really don't understand one bit of Traits :-(

cheers,
Stef

From robince at gmail.com Sun Mar 2 12:34:43 2008
From: robince at gmail.com (Robin)
Date: Sun, 2 Mar 2008 17:34:43 +0000
Subject: [SciPy-user] gutsy amd64
In-Reply-To: <1204478442.14422.23.camel@stargate.org>
References: <1204478442.14422.23.camel@stargate.org>
Message-ID:

On Sun, Mar 2, 2008 at 5:20 PM, osman wrote:
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: expected expression before ')' token
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: too few arguments to function 'SWIG_Python_NewPointerObj'
> error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall
> -Wstrict-prototypes -fPIC -DSCIPY_UMFPACK_H -DSCIPY_AMD_H
> -DNO_ATLAS_INFO=2 -I/usr/local/include -I/usr/include
> -I/usr/lib/python2.4/site-packages/numpy/core/include
> -I/usr/include/python2.4 -c
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -o build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.o" failed with exit status 1
> stargate:/home/osman/SCIPY/scipy-bash->

This looks like it's not finding a header file... Look further up the
output for the first error and it will tell you which is missing. You
would probably need to install the -dev package to get the header files
(perhaps libsuitesparse-dev or libumfpack4-dev, but I'm not sure).

I think libsuitesparse might be the package to use. I'm not familiar
with the ubuntu scipy packages (see below) but it might be that scipy
depends on libsuitesparse, which conflicts with libumfpack4 (because it
is a more recent version). This would explain why you can't install
them both at the same time.

> any help is much appreciated.

I've had better luck installing everything from scratch rather than
relying on distribution packages, which can be out of date, built with
a different compiler, incompatible versions etc.

I put the steps I use to build (64bit) on Ubuntu on the wiki, so
perhaps that is helpful.
http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1

Cheers,

Robin

From osman at fuse.net Sun Mar 2 12:44:36 2008
From: osman at fuse.net (osman)
Date: Sun, 02 Mar 2008 12:44:36 -0500
Subject: [SciPy-user] gutsy amd64
In-Reply-To:
References: <1204478442.14422.23.camel@stargate.org>
Message-ID: <1204479876.14422.28.camel@stargate.org>

On Sun, 2008-03-02 at 17:34 +0000, Robin wrote:
> I put the steps I use to build (64bit) on Ubuntu on the wiki, so
> perhaps that is helpful.
> http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1

Thanks Robin. I just found it. But when I remove g77 it will remove
quite a lot of other stuff :-( I just hope I will be able to re-install
g77.

I also saw that the ATLAS stuff was not found even though it was there.
Seems to be related to g77 being installed.

Had hoped it was easier but...

Thanks again,
-osman

From robince at gmail.com Sun Mar 2 12:51:10 2008
From: robince at gmail.com (Robin)
Date: Sun, 2 Mar 2008 17:51:10 +0000
Subject: [SciPy-user] gutsy amd64
In-Reply-To: <1204479876.14422.28.camel@stargate.org>
References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org>
Message-ID:

On Sun, Mar 2, 2008 at 5:44 PM, osman wrote:
> On Sun, 2008-03-02 at 17:34 +0000, Robin wrote:
> > I put the steps I use to build (64bit) on Ubuntu on the wiki, so
> > perhaps that is helpful.
> > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1
>
> Thanks Robin. I just found it. But when I remove g77 it will remove
> quite a lot of other stuff :-( I just hope I will be able to
> re-install g77.
>
> I also saw that the ATLAS stuff was not found even though it was there.
> Seems to be related to g77 being installed.
>
> Had hoped it was easier but...

If you don't want to remove g77, it's enough to make sure it is not on
your path (mv or rename the binary temporarily). For some reason, if
distutils finds it, it will use it in preference to gfortran regardless
of the command-line --fcompiler option. (This was the case when I first
started with scipy - it might be fixed now.)

Alternatively you could just use g77 instead of gfortran (OK as long as
you use it for everything) - the instructions would be pretty similar,
except you might need to find libg2c instead of libgfortran for the
extra path to the BLAS and LAPACK options for umfpack. I got the
impression gfortran is more actively developed and supported, though,
and seems to be the recommended option.

Not sure about the ATLAS stuff - I never had much luck with Ubuntu
packages as I said, so I tend to build it myself and keep everything in
a separate directory.

Cheers,

Robin

From scipy-user at onnodb.com Sun Mar 2 14:49:49 2008
From: scipy-user at onnodb.com (scipy-user at onnodb.com)
Date: Sun, 2 Mar 2008 20:49:49 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <20080302112355.GB14294@phare.normalesup.org>
References: <1065457655.20080228095700@xs4all.nl> <20080302112355.GB14294@phare.normalesup.org>
Message-ID: <11510090385.20080302204949@xs4all.nl>

Hi Gael,

GV> Traits/TraitsUI are absolutely great for this. I have used them in a way
GV> very similar to what you are doing. You can find notes that should help
GV> you learn how to create UIs easily starting here:
GV> [snip]

Thanks a lot for all your advice. I'll try soon, and try to post back
later with my experiences!
Best regards,

--
Onno Broekmans

From scipy-user at onnodb.com Sun Mar 2 14:54:19 2008
From: scipy-user at onnodb.com (scipy-user at onnodb.com)
Date: Sun, 2 Mar 2008 20:54:19 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204358988.2639.17.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net>
Message-ID: <1554490716.20080302205419@xs4all.nl>

Hi Bryan,

BC> I'm an ex-LabView user, now using python for lab data-acquisition and
BC> analysis for some years now. It's well worth the switch.

Ah, that's great to hear!

BC> Your best bet is
BC> http://pyqwt.sourceforge.net/

That looks good, I'll check it out asap.

BC> You're right, matplotlib/chaco are too slow for "interactive real-time"
BC> type work. The strengths of these packages are their
BC> presentation-quality output (vector and anti-aliased bitmap graphics).

Hm, that's what I suspected, but I hadn't been able to find different
packages explicitly meant for real-time plotting.

Anyway, thanks a lot for sharing your experiences!!

Best regards,

--
Onno Broekmans

From scipy-user at onnodb.com Sun Mar 2 14:57:48 2008
From: scipy-user at onnodb.com (scipy-user at onnodb.com)
Date: Sun, 2 Mar 2008 20:57:48 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To:
References: <1065457655.20080228095700@xs4all.nl>
Message-ID: <1205901403.20080302205748@xs4all.nl>

Hi Fernando,

>> like Traits, Chaco and Envisage, but I just can't seem to wrap my head

FP> From your description, those three (esp. the first two) are probably
FP> your best bets. Have you tried playing with the examples and asking
FP> specific questions on the enthought-dev list? The developers there
FP> are typically very responsive to specific queries.

No, I haven't tried that yet, but I certainly will!

FP> You may also want to have a look at Vision:
FP> http://mgltools.scripps.edu/packages/vision/overview

Looks very extensive, and worth experimenting with. Thanks!

Best regards,

--
Onno Broekmans

From scipy-user at onnodb.com Sun Mar 2 14:59:11 2008
From: scipy-user at onnodb.com (scipy-user at onnodb.com)
Date: Sun, 2 Mar 2008 20:59:11 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204405576.2639.73.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net>
Message-ID: <1896313329.20080302205911@xs4all.nl>

>> I'm in the middle of writing a real time plot widget, based on direct
>> canvas drawing;
>> I don't know yet how fast it is. I hope to make a video of it next week.

BC> I'll look forward to checking it out.

Me too! I'll keep an eye on this list :)

Best regards,

--
Onno Broekmans

From stef.mientki at gmail.com Sun Mar 2 15:48:58 2008
From: stef.mientki at gmail.com (Stef Mientki)
Date: Sun, 02 Mar 2008 21:48:58 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1205901403.20080302205748@xs4all.nl>
References: <1065457655.20080228095700@xs4all.nl> <1205901403.20080302205748@xs4all.nl>
Message-ID: <47CB12BA.1070607@gmail.com>

scipy-user at onnodb.com wrote:
> Hi Fernando,
>
>>> like Traits, Chaco and Envisage, but I just can't seem to wrap my head
>
> FP> From your description, those three (esp. the first two) are probably
> FP> your best bets.
> FP> Have you tried playing with the examples and asking
> FP> specific questions on the enthought-dev list? The developers there
> FP> are typically very responsive to specific queries.
>
> No, I haven't tried that yet, but I certainly will!
>
> FP> You may also want to have a look at Vision:
> FP> http://mgltools.scripps.edu/packages/vision/overview
>
> Looks very extensive, and worth experimenting with.

although I haven't worked with them both, I understand that Traits and
Vision are each other's opposites. So if you like them both, you should
also take a look at everything in between ;-)

* Orange
* Elefant
* Enthought with Traits UI; they are developing it for a customer, and
  planned to show us something in December 2007, but I guess it's
  somewhat delayed.
* Vision
* Pyphant
* mathGUIde
* PyLab_Works: http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html

cheers,
Stef

From bryan at cole.uklinux.net Sun Mar 2 16:55:08 2008
From: bryan at cole.uklinux.net (Bryan Cole)
Date: Sun, 02 Mar 2008 21:55:08 +0000
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CA1B3C.4010402@enthought.com>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47CA1B3C.4010402@enthought.com>
Message-ID: <1204494906.16725.23.camel@pc1.cole.uklinux.net>

> Are you sure you meant that *chaco* is too slow? The stated difference
> of chaco with matplotlib is interactive plotting, and I know a bit of
> effort goes into making it fast.

You're quite right to pull me up on this. In fact, I've not tested
chaco for this type of application, so I can't say for sure if it's
fast enough or not. I guess my expectation was that it would not be
much faster than matplotlib (given they both use Antigrain for
rendering, which tends to be the bottleneck). Whenever I tried any type
of anti-aliased drawing (antigrain or cairo) there was always a
significant performance hit (on linux anyway).

> I'm curious what problems you tried chaco on for which it was too slow.

Sounds like I should revisit chaco (I haven't tried it in a while). My
main problem with it is lack of documentation (this is really what has
prevented me from testing it extensively).

cheers,
Bryan

> -Travis O.

From osman at fuse.net Sun Mar 2 16:59:24 2008
From: osman at fuse.net (osman)
Date: Sun, 02 Mar 2008 16:59:24 -0500
Subject: [SciPy-user] gutsy amd64
In-Reply-To:
References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org>
Message-ID: <1204495164.14422.39.camel@stargate.org>

On Sun, 2008-03-02 at 17:51 +0000, Robin wrote:

OK, maybe I was too quick to declare victory :-( I am trying a package
called sfe:

stargate:/home/osman/sfepy-10b16f5102ab-bash-> python simple.py
Traceback (most recent call last):
  File "simple.py", line 7, in <module>
    from sfe.base.base import *
  File "/home/osman/sfepy-10b16f5102ab/sfe/base/base.py", line 5, in <module>
    import scipy.linalg as nla
  File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in <module>
    from lapack import get_lapack_funcs
  File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in <module>
    from scipy.linalg import flapack
ImportError: /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so: undefined symbol: cblas_zswap

This looks like a scipy problem? The scipy build process did not cause
any errors.
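(One way to narrow down an undefined-symbol failure like the
cblas_zswap one above is to check whether any of the freshly built
static libraries actually defines the symbol. A hedged sketch - it
assumes the GNU nm tool is installed and that the archives sit in the
scipy_build directory listed just below:

import glob
import subprocess

def defines(archive, symbol):
    # nm prints one line per symbol; " T " means the archive defines
    # the symbol (text section), "U" means it is only referenced.
    out = subprocess.Popen(["nm", archive],
                           stdout=subprocess.PIPE).communicate()[0]
    return any(" T " in line and line.endswith(symbol)
               for line in out.splitlines())

for lib in glob.glob("/home/osman/scipy_build/lib*.a"):
    print lib, defines(lib, "cblas_zswap")

If libcblas.a defines cblas_zswap but flapack.so does not link against
it - and the ldd output below shows no BLAS libraries at all - the fix
is usually in the site.cfg library list rather than in ATLAS itself.)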
My scipy_build directory has libcblas:

drwxr-xr-x 2 osman osman    1416 2008-03-02 16:19 include
-rwxr-xr-x 1 osman osman   35094 2008-03-02 16:18 libamd.a
-rw-r--r-- 1 osman osman 8666444 2008-03-02 16:02 libatlas.a
-rw-r--r-- 1 osman osman  466848 2008-03-02 16:02 libcblas.a
-rw-r--r-- 1 osman osman  572034 2008-03-02 16:02 libf77blas.a
-rw-r--r-- 1 osman osman 1698704 2008-03-02 16:19 libgfortran.a
-rw-r--r-- 1 osman osman  781688 2008-03-02 16:19 libgfortran.so
-rw-r--r-- 1 osman osman 8780538 2008-03-02 16:02 liblapack.a
-rw-r--r-- 1 osman osman  482856 2008-03-02 16:02 libtstatlas.a
-rwxr-xr-x 1 osman osman  748537 2008-03-02 16:18 libumfpack.a

ATLAS build was also successful.

stargate:/home/osman-bash-> ldd /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so
        libgfortran.so.2 => /usr/lib/libgfortran.so.2 (0x00002b5dc81fa000)
        libm.so.6 => /lib/libm.so.6 (0x00002b5dc84b9000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002b5dc873b000)
        libc.so.6 => /lib/libc.so.6 (0x00002b5dc8949000)
        /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
stargate:/home/osman-bash->

Missing something?

-osman

From pwang at enthought.com Sun Mar 2 17:32:31 2008
From: pwang at enthought.com (Peter Wang)
Date: Sun, 2 Mar 2008 16:32:31 -0600
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <1204494906.16725.23.camel@pc1.cole.uklinux.net>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47CA1B3C.4010402@enthought.com> <1204494906.16725.23.camel@pc1.cole.uklinux.net>
Message-ID: <28FDB73A-0EEE-4A61-B9B9-DED43AEB6FB4@enthought.com>

On Mar 2, 2008, at 3:55 PM, Bryan Cole wrote:
>> Are you sure you meant that *chaco* is too slow? The stated difference
>> of chaco with matplotlib is interactive plotting, and I know a bit of
>> effort goes into making it fast.
>
> You're quite right to pull me up on this. In fact, I've not tested
> chaco for this type of application, so I can't say for sure if it's
> fast enough or not. I guess my expectation was that it would not be
> much faster than matplotlib (given they both use Antigrain for
> rendering, which tends to be the bottleneck). Whenever I tried any
> type of anti-aliased drawing (antigrain or cairo) there was always a
> significant performance hit (on linux anyway).

Although Chaco uses Agg, the nature of *how* it uses Agg is very
different from matplotlib. Also, Chaco's internal architecture is
designed around interactivity. There are several examples with
displaying live updating data:

https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/data_stream.py

The spectrum analyzer example at the very bottom of the Chaco gallery
uses PyAudio to display a realtime FFT and spectrogram of the sound
input:

http://code.enthought.com/chaco/gallery/index.shtml

Also, Chaco has several different backends it can use for output, not
just Agg. I recently greatly improved the OpenGL backend so it is
extremely fast on all three major platforms (although this has not
been merged into the trunk just yet).

> Sounds like I should revisit chaco (I haven't tried it in a while).
> My main problem with it is lack of documentation (this is really what
> has prevented me from testing it extensively).

I apologize for the continuing lack of extensive documentation. The
examples are a good place to start, and the classes all have some
level of comments describing the common traits. I think the
data_stream.py example would probably be a good place for you to try
out your data acquisition and display.
(It does require wxPython.) And, as always, you can email the list
with questions.

-Peter

From robince at gmail.com Sun Mar 2 17:58:49 2008
From: robince at gmail.com (Robin)
Date: Sun, 2 Mar 2008 22:58:49 +0000
Subject: [SciPy-user] gutsy amd64
In-Reply-To: <1204495164.14422.39.camel@stargate.org>
References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org> <1204495164.14422.39.camel@stargate.org>
Message-ID:

On Sun, Mar 2, 2008 at 9:59 PM, osman wrote:
> On Sun, 2008-03-02 at 17:51 +0000, Robin wrote:
> OK, maybe I was too quick to declare victory :-( I am trying a package
> called sfe:
>
> stargate:/home/osman/sfepy-10b16f5102ab-bash-> python simple.py
> Traceback (most recent call last):
>   File "simple.py", line 7, in <module>
>     from sfe.base.base import *
>   File "/home/osman/sfepy-10b16f5102ab/sfe/base/base.py", line 5, in <module>
>     import scipy.linalg as nla
>   File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
>     from basic import *
>   File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in <module>
>     from lapack import get_lapack_funcs
>   File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in <module>
>     from scipy.linalg import flapack
> ImportError: /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so: undefined symbol: cblas_zswap
>
> This looks like a scipy problem? The scipy build process did not cause
> any errors.
>
> My scipy_build directory has libcblas:
>
> drwxr-xr-x 2 osman osman    1416 2008-03-02 16:19 include
> -rwxr-xr-x 1 osman osman   35094 2008-03-02 16:18 libamd.a
> -rw-r--r-- 1 osman osman 8666444 2008-03-02 16:02 libatlas.a
> -rw-r--r-- 1 osman osman  466848 2008-03-02 16:02 libcblas.a
> -rw-r--r-- 1 osman osman  572034 2008-03-02 16:02 libf77blas.a
> -rw-r--r-- 1 osman osman 1698704 2008-03-02 16:19 libgfortran.a
> -rw-r--r-- 1 osman osman  781688 2008-03-02 16:19 libgfortran.so
> -rw-r--r-- 1 osman osman 8780538 2008-03-02 16:02 liblapack.a
> -rw-r--r-- 1 osman osman  482856 2008-03-02 16:02 libtstatlas.a
> -rwxr-xr-x 1 osman osman  748537 2008-03-02 16:18 libumfpack.a
>
> ATLAS build was also successful.
>
> stargate:/home/osman-bash-> ldd /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so
>         libgfortran.so.2 => /usr/lib/libgfortran.so.2 (0x00002b5dc81fa000)
>         libm.so.6 => /lib/libm.so.6 (0x00002b5dc84b9000)
>         libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002b5dc873b000)
>         libc.so.6 => /lib/libc.so.6 (0x00002b5dc8949000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)
> stargate:/home/osman-bash->
>
> Missing something?

I'm afraid that's a bit beyond my expertise - I haven't seen that
particular error before. The only thing I can suggest is to check again
against the instructions on the wiki, and remember it can be quite
sensitive to details...

One thing I thought of was to check the order of the libs listed in
your site.cfg:

atlas_libs = lapack, f77blas, cblas, atlas

I know that changing the order there can result in errors similar to
the one you're seeing.

Robin

From robince at gmail.com Sun Mar 2 17:59:38 2008
From: robince at gmail.com (Robin)
Date: Sun, 2 Mar 2008 22:59:38 +0000
Subject: [SciPy-user] gutsy amd64
In-Reply-To:
References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org> <1204495164.14422.39.camel@stargate.org>
Message-ID:

P.S. Posting a build log and the output of python setup.py config might
also help diagnose the problem.
From gael.varoquaux at normalesup.org Mon Mar 3 03:34:58 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 3 Mar 2008 09:34:58 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <47CAE2BC.2040800@ru.nl> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> Message-ID: <20080303083458.GC14020@phare.normalesup.org>

On Sun, Mar 02, 2008 at 06:24:12PM +0100, Stef Mientki wrote:
>> The main recommendation I would make to anyone writing data-acquisition
>> stuff in python is Use Traits/TraitsUI! The ability to auto-generate a
>> GUI to configure hardware objects based on their Traits definitions is a
>> *huge* productivity saving.
> You might be quite right,
> I've heard this reasoning more than once,
> but ....
> ... I'm looking at the wrong documents
> or
> ... I'm simply too stupid
> or
> ... I'm a completely spoiled windows user
> but I really really don't understand one bit of Traits :-(

Have you tried looking at: http://gael-varoquaux.info/computers/traits_tutorial I wrote it specifically targeting someone with no prior knowledge of GUI development or even object oriented programming.

HTH, Gaël

From jasperstolte at gmail.com Mon Mar 3 04:46:52 2008 From: jasperstolte at gmail.com (Jasper Stolte) Date: Mon, 3 Mar 2008 10:46:52 +0100 Subject: [SciPy-user] Control Systems Toolbox In-Reply-To: <20080302130544.GE14294@phare.normalesup.org> References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> Message-ID: <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com>

Hey guys, excellent! Indeed, I quietly started something and put it on Google Code. It would be great to have some help with it, though originally I planned on getting some kind of basic skeleton finished before asking other people to join in. Of course, if you want to help with that as well, you are very much welcome.

About the licensing, I would specifically want this to be a part of scipy in the future. The code should thus be released under a BSD license (I put it like that on Google Code). I'm not quite sure how this licensing stuff works. The algorithms are all public domain afaik, Octave made some implementation of them in C++. The class structure of this toolbox will be totally different, it's even written in a whole other language. Is it forbidden for us to look at how they implemented it without going to GPL? Not too big of a problem because it can be done without, but it's always nice to have some kind of reference. So far it's all original.. :)

Tonight I'll restructure some, and put up the latest version. Of course, I barely started, so a LOT still has to be done.

Greetz, Jasper

On Sun, Mar 2, 2008 at 2:05 PM, Gael Varoquaux <gael.varoquaux at normalesup.org> wrote:
> On Sat, Mar 01, 2008 at 09:22:17PM +0100, Fabrice Silva wrote:
> > Are you interested in some help for developing some control system
> > features? I'm willing to python-ize some of the Octave Control Systems
> > Toolbox
> > http://enacit1.epfl.ch/cours_matlab/octave-manual/octave_30.html
>
> I haven't been following this very closely, but can you remind me if we
> are talking about a scikit which could be released under the GPL, or
> something that would be released under a BSD license.
> The reason I ask
> this question is that Octave is GPL, and to write BSD-licensed code
> inspired by their code I think you would need either to do some white
> room engineering, or ask for special permission from the authors. Don't
> trust my opinion too much, I am no license expert, but I just wanted to
> point out this potential snag.
>
> Cheers,
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From J.Anderson at hull.ac.uk Mon Mar 3 05:16:03 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Mon, 3 Mar 2008 10:16:03 -0000 Subject: [SciPy-user] Newbie help for installing pysamplerate References: Message-ID:

Hello All, Don't know if this went through the first time or not. . .

-----Original Message----- From: scipy-user-bounces at scipy.org on behalf of Joseph Anderson Sent: Sat 03/01/2008 3:12 PM To: SciPy Users List Subject: Newbie help for installing pysamplerate

Hello All,

Most likely this is really just a question for David Cournapeau. . . I'm having a bit of trouble, apparently resulting from being a python newbie. I have numpy, scipy, and pyaudiolab up and going, but am having trouble getting pysamplerate to happen.

In attempting to install pysamplerate, I have run pysamplerate's setup.py in a python interpreter, choosing task [2], the install option. That does the following:

samplerate_info: libraries samplerate not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
FOUND:
libraries = ['samplerate']
library_dirs = ['/usr/local/lib']
fulllibloc = /usr/local/lib/libsamplerate.so.0
fullheadloc = /usr/local/include/samplerate.h
include_dirs = ['/usr/local/include']

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_py
copying pysamplerate.py -> build/lib/pysamplerate
running install_lib
creating /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/__init__.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/generate_const.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/header_parser.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/info.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/pysamplerate.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
copying build/lib/pysamplerate/setup.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/__init__.py to __init__.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/generate_const.py to generate_const.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/header_parser.py to header_parser.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/info.py to info.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/pysamplerate.py to pysamplerate.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate/setup.py to setup.pyc
running install_egg_info
Writing /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/pysamplerate-0.1-py2.5.egg-info

****************************************

The latest version of libsamplerate has been installed in /usr/local/lib/, with ls libsamplerate* listing the following:

libsamplerate.0.1.1.dylib libsamplerate.dylib
libsamplerate.0.dylib libsamplerate.la
libsamplerate.a

Starting a new python interpreter and attempting to import pysamplerate, I get:

>>> import pysamplerate
Traceback (most recent call last):
File "", line 1, in
File "pysamplerate.py", line 23, in
_src = cdll.LoadLibrary('/usr/local/lib/libsamplerate.so.0')
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 423, in LoadLibrary
return self._dlltype(name)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ctypes/__init__.py", line 340, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/usr/local/lib/libsamplerate.so.0, 6): image not found

****************************************

What I see is that libsamplerate.so.0 is missing from the /usr/local/lib directory. Is this a file that should be created by pysamplerate's setup.py?

Anyway, I'm doing something wrong; I'm sure it is rather simple. Thanks for the help.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dr Joseph Anderson Lecturer in Music School of Arts and New Media University of Hull, Scarborough Campus, Scarborough, North Yorkshire, YO11 3AZ, UK T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ATT1467696.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ATT1467698.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL:

From j.reid at mail.cryst.bbk.ac.uk Mon Mar 3 06:08:41 2008 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Mon, 03 Mar 2008 11:08:41 +0000 Subject: [SciPy-user] scipy build with gcc on solaris problems Message-ID:

Hi,

I'm on solaris using gcc:

SunOS mahler 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-880 Solaris
Using built-in specs.
Target: sparc-sun-solaris2.10
Configured with: ../configure --prefix=/usr/local --with-gnu-as --with-as=/usr/local/bin/as --with-gnu-ld --with-ld=/usr/local/bin/ld --with-libiconv --enable-libada --enable-libssp --enable-objc-gc --enable-threads --enable-languages=c,c++,objc,obj-c++,fortran
Thread model: posix
gcc version 4.2.2

The scipy tests generate a core dump. The numpy tests seem fine except for a few warnings about invalid values. What am I doing wrong?
I build atlas following the advice here: http://www.scipy.org/Installing_SciPy/Linux#head-89e1f6afaa3314d98a22c79b063cceee2cc6313c I install scipy using "easy_install scipy" when I get some warnings like the following: scipy/ndimage/src/nd_image.c: In function 'Py_Filter1DFunc': scipy/ndimage/src/nd_image.c:273: warning: function called through a non-compatible type scipy/ndimage/src/nd_image.c:273: note: if this code is reached, the program will abort scipy/ndimage/src/nd_image.c:274: warning: function called through a non-compatible type scipy/ndimage/src/nd_image.c:274: note: if this code is reached, the program will abort scipy/ndimage/src/nd_image.c: In function 'Py_FilterFunc': scipy/ndimage/src/nd_image.c:351: warning: function called through a non-compatible type scipy/ndimage/src/nd_image.c:351: note: if this code is reached, the program will abort scipy/ndimage/src/nd_image.c: In function 'Py_Histogram': scipy/ndimage/src/nd_image.c:1100: warning: function called through a non-compatible type scipy/ndimage/src/nd_image.c:1100: note: if this code is reached, the program will abort and when I run the tests I get a core dump: ipython >>> import scipy >>> scipy.test() Here is the "gdb python core" output: Core was generated by `/usr/local/bin/python /usr/local/bin/ipython'. Program terminated with signal 4, Illegal instruction. #0 Py_FilterFunc (buffer=0xc91778, filter_size=2, output=0xffbfc118, data=0xffbfc1c4) at scipy/ndimage/src/nd_image.c:351 351 scipy/ndimage/src/nd_image.c: No such file or directory. in scipy/ndimage/src/nd_image.c (gdb) where #0 Py_FilterFunc (buffer=0xc91778, filter_size=2, output=0xffbfc118, data=0xffbfc1c4) at scipy/ndimage/src/nd_image.c:351 #1 0xfdc16eb8 in NI_GenericFilter (input=0x900708, function=0xfdc130b8 , data=0xffbfc1c4, footprint=0xb96be0, output=0xb96c40, mode=, cvalue=0, origins=0xbfcd60) at scipy/ndimage/src/ni_filters.c:858 #2 0xfdc14a54 in Py_GenericFilter (obj=0x0, args=0xb493f0) at scipy/ndimage/src/nd_image.c:411 #3 0x000f7a24 in PyCFunction_Call (func=0x838878, arg=0xb493f0, kw=0x0) at Objects/methodobject.c:108 #4 0x000a188c in PyEval_EvalFrameEx (f=0xbf6570, throwflag=-4209780) at Python/ceval.c:3564 #5 0x000a3784 in PyEval_EvalCodeEx (co=0x7f7b60, globals=0x7f7b60, locals=0x1, args=0x1a210c, argcount=2, kws=0x28, kwcount=3, defs=0x833dfc, defcount=8, closure=0x0) at Python/ceval.c:2831 #6 0x000a1804 in PyEval_EvalFrameEx (f=0xbf63c0, throwflag=-4209276) at Python/ceval.c:3659 #7 0x000a3784 in PyEval_EvalCodeEx (co=0x9fdf98, globals=0x9fdf98, locals=0x1, args=0x43d2b4, argcount=10432304, kws=0x20, kwcount=1, defs=0x0, defcount=8, closure=0x0) at Python/ceval.c:2831 #8 0x000a1804 in PyEval_EvalFrameEx (f=0xa4ee30, throwflag=-4208772) at Python/ceval.c:3659 #9 0x000a3784 in PyEval_EvalCodeEx (co=0x2bdf50, globals=0x2bdf50, locals=0x1, args=0x138800, argcount=2, kws=0x9db3c8, kwcount=0, defs=0x2d7c9c, defcount=1, closure=0x0) at Python/ceval.c:2831 #10 0x000f72ec in function_call (func=0x2dd2b0, arg=0x93c788, kw=0xb4b270) at Objects/funcobject.c:517 #11 0x00026448 in PyObject_Call (func=0x2dd2b0, arg=0x93c788, kw=0xb4b270) at Objects/abstract.c:1860 #12 0x0009f6f8 in PyEval_EvalFrameEx (f=0xa4ecc0, throwflag=0) at Python/ceval.c:3844 #13 0x000a3784 in PyEval_EvalCodeEx (co=0x2bdf98, globals=0x2bdf98, locals=0x1, args=0x138800, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #14 0x000f72ec in function_call (func=0x2dd2f0, arg=0x93c620, kw=0x0) at Objects/funcobject.c:517 #15 0x00026448 in 
PyObject_Call (func=0x2dd2f0, arg=0x93c620, kw=0x0) at Objects/abstract.c:1860 #16 0x00031310 in instancemethod_call (func=0x2, arg=0x93c620, kw=0x0) at Objects/classobject.c:2497 #17 0x00026448 in PyObject_Call (func=0x8c7d78, arg=0x93c620, kw=0x0) at Objects/abstract.c:1860 #18 0x0009fe10 in PyEval_EvalFrameEx (f=0xa4eb40, throwflag=-4206524) at Python/ceval.c:3775 #19 0x000a3784 in PyEval_EvalCodeEx (co=0x2a35c0, globals=0x2a35c0, locals=0x1, args=0x138800, argcount=2, kws=0x0, kwcount=0, defs=0x61e25c, defcount=1, closure=0x0) at Python/ceval.c:2831 #20 0x000f72ec in function_call (func=0x67fdf0, arg=0x93c738, kw=0x0) at Objects/funcobject.c:517 #21 0x00026448 in PyObject_Call (func=0x67fdf0, arg=0x93c738, kw=0x0) at Objects/abstract.c:1860 #22 0x00031310 in instancemethod_call (func=0x0, arg=0x93c738, kw=0x0) at Objects/classobject.c:2497 #23 0x00026448 in PyObject_Call (func=0x8ce3f0, arg=0xaf9d70, kw=0x0) at Objects/abstract.c:1860 #24 0x00074ad8 in slot_tp_call (self=0xf, args=0xaf9d70, kwds=0x0) at Objects/typeobject.c:4633 #25 0x00026448 in PyObject_Call (func=0xa8de30, arg=0xaf9d70, kw=0x0) at Objects/abstract.c:1860 #26 0x0009fe10 in PyEval_EvalFrameEx (f=0xa57fa8, throwflag=-4204804) at Python/ceval.c:3775 #27 0x000a3784 in PyEval_EvalCodeEx (co=0x2d46e0, globals=0x2d46e0, locals=0x1, args=0x138800, argcount=2, kws=0x9db4f8, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #28 0x000f72ec in function_call (func=0x2dd730, arg=0x93c558, kw=0xb40e40) at Objects/funcobject.c:517 #29 0x00026448 in PyObject_Call (func=0x2dd730, arg=0x93c558, kw=0xb40e40) at Objects/abstract.c:1860 #30 0x0009f6f8 in PyEval_EvalFrameEx (f=0xa57e38, throwflag=0) at Python/ceval.c:3844 #31 0x000a3784 in PyEval_EvalCodeEx (co=0x2d4728, globals=0x2d4728, locals=0x1, args=0x138800, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #32 0x000f72ec in function_call (func=0x2dd770, arg=0x942c10, kw=0x0) at Objects/funcobject.c:517 #33 0x00026448 in PyObject_Call (func=0x2dd770, arg=0x942c10, kw=0x0) at Objects/abstract.c:1860 #34 0x00031310 in instancemethod_call (func=0x0, arg=0x942c10, kw=0x0) at Objects/classobject.c:2497 #35 0x00026448 in PyObject_Call (func=0x759a58, arg=0x7d6bd0, kw=0x0) at Objects/abstract.c:1860 #36 0x00074ad8 in slot_tp_call (self=0xc, args=0x7d6bd0, kwds=0x0) at Objects/typeobject.c:4633 #37 0x00026448 in PyObject_Call (func=0x7bea30, arg=0x7d6bd0, kw=0x0) at Objects/abstract.c:1860 #38 0x0009fe10 in PyEval_EvalFrameEx (f=0x96e7a8, throwflag=-4202332) at Python/ceval.c:3775 #39 0x000a2b58 in PyEval_EvalFrameEx (f=0x95e978, throwflag=-4201972) at Python/ceval.c:3650 #40 0x000a3784 in PyEval_EvalCodeEx (co=0x2a3068, globals=0x2a3068, locals=0x1, args=0x610880, argcount=3, kws=0x18, kwcount=0, defs=0x6779fc, defcount=5, closure=0x0) at Python/ceval.c:2831 #41 0x000a1804 in PyEval_EvalFrameEx (f=0x78e3f8, throwflag=-4201468) at Python/ceval.c:3659 #42 0x000a3784 in PyEval_EvalCodeEx (co=0x60b188, globals=0x60b188, locals=0x1, args=0x1a2100, argcount=0, kws=0x8, kwcount=0, defs=0x774c44, defcount=2, closure=0x0) at Python/ceval.c:2831 #43 0x000a1804 in PyEval_EvalFrameEx (f=0x6baa30, throwflag=-4200964) at Python/ceval.c:3659 #44 0x000a3784 in PyEval_EvalCodeEx (co=0x775ba8, globals=0x775ba8, locals=0x1, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #45 0x000a22e8 in PyEval_EvalFrameEx (f=0x804e78, throwflag=7822248) at Python/ceval.c:494 ---Type to continue, or q to quit--- 
#46 0x000a3784 in PyEval_EvalCodeEx (co=0x4e5020, globals=0x4e5020, locals=0x1, args=0x138800, argcount=2, kws=0x804e44, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #47 0x000a1804 in PyEval_EvalFrameEx (f=0x804cf0, throwflag=-4199956) at Python/ceval.c:3659 #48 0x000a3784 in PyEval_EvalCodeEx (co=0x4dcf50, globals=0x4dcf50, locals=0x1, args=0x2aa020, argcount=3, kws=0x10, kwcount=0, defs=0x58d424, defcount=2, closure=0x0) at Python/ceval.c:2831 #49 0x000a1804 in PyEval_EvalFrameEx (f=0x5d7aa0, throwflag=-4199452) at Python/ceval.c:3659 #50 0x000a2b58 in PyEval_EvalFrameEx (f=0x5e43c0, throwflag=-4199092) at Python/ceval.c:3650 #51 0x000a3784 in PyEval_EvalCodeEx (co=0x4dccc8, globals=0x4dccc8, locals=0x1, args=0x138800, argcount=2, kws=0x5df1f8, kwcount=0, defs=0x5901fc, defcount=1, closure=0x0) at Python/ceval.c:2831 #52 0x000a1804 in PyEval_EvalFrameEx (f=0x5df0b0, throwflag=-4198588) at Python/ceval.c:3659 #53 0x000a3784 in PyEval_EvalCodeEx (co=0x4dcbf0, globals=0x4dcbf0, locals=0x1, args=0x138800, argcount=2, kws=0x37c9fc, kwcount=0, defs=0x5901dc, defcount=1, closure=0x0) at Python/ceval.c:2831 #54 0x000a1804 in PyEval_EvalFrameEx (f=0x37c8b0, throwflag=-4198084) at Python/ceval.c:3659 #55 0x000a3784 in PyEval_EvalCodeEx (co=0x448410, globals=0x448410, locals=0x1, args=0x13796c, argcount=1, kws=0xc, kwcount=0, defs=0x358b04, defcount=2, closure=0x0) at Python/ceval.c:2831 #56 0x000a1804 in PyEval_EvalFrameEx (f=0x5d7338, throwflag=-4197580) at Python/ceval.c:3659 #57 0x000a3784 in PyEval_EvalCodeEx (co=0x428d10, globals=0x428d10, locals=0x1, args=0x13796c, argcount=0, kws=0x4, kwcount=0, defs=0x43d87c, defcount=1, closure=0x0) at Python/ceval.c:2831 #58 0x000a1804 in PyEval_EvalFrameEx (f=0x216480, throwflag=-4197076) at Python/ceval.c:3659 #59 0x000a3784 in PyEval_EvalCodeEx (co=0x1d1578, globals=0x1d1578, locals=0x1, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #60 0x000a3918 in PyEval_EvalCode (co=0x1d1578, globals=0x176d20, locals=0x176d20) at Python/ceval.c:494 #61 0x000c8c6c in PyRun_FileExFlags (fp=0x0, filename=0xffbffa52 "/usr/local/bin/ipython", start=-4, globals=0x176d20, locals=0x176d20, closeit=1, flags=0xffbff814) at Python/pythonrun.c:1271 #62 0x000c9ba8 in PyRun_SimpleFileExFlags (fp=0x15c4e8, filename=0xffbffa52 "/usr/local/bin/ipython", closeit=1, flags=0xffbff814) at Python/pythonrun.c:877 #63 0x0001de58 in Py_Main (argc=2, argv=0x0) at Modules/main.c:523 #64 0x0001d440 in _start () From j.reid at mail.cryst.bbk.ac.uk Mon Mar 3 06:16:08 2008 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Mon, 03 Mar 2008 11:16:08 +0000 Subject: [SciPy-user] scipy build with gcc on solaris problems In-Reply-To: References: Message-ID: Also when I import scipy.optimize I have the following error. Could easy_install be using the solaris CC compiler? Would this cause this problem? 
In [6]: import scipy.optimize
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/usr/local/lib/python2.5/site-packages/ in ()
/usr/local/lib/python2.5/site-packages/scipy-0.6.0-py2.5-solaris-2.10-sun4u.egg/scipy/optimize/__init__.py in ()
9 from zeros import *
10 from anneal import *
---> 11 from lbfgsb import fmin_l_bfgs_b
global lbfgsb = undefined
global fmin_l_bfgs_b = undefined
12 from tnc import fmin_tnc
13 from cobyla import fmin_cobyla
/usr/local/lib/python2.5/site-packages/scipy-0.6.0-py2.5-solaris-2.10-sun4u.egg/scipy/optimize/lbfgsb.py in ()
28
29 from numpy import zeros, float64, array, int32
---> 30 import _lbfgsb
global _lbfgsb = undefined
31 import optimize
32
ImportError: ld.so.1: python: fatal: relocation error: file /usr/local/lib/python2.5/site-packages/scipy-0.6.0-py2.5-solaris-2.10-sun4u.egg/scipy/optimize/_lbfgsb.so: symbol etime_: referenced symbol not found

From j.reid at mail.cryst.bbk.ac.uk Mon Mar 3 06:29:15 2008 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Mon, 03 Mar 2008 11:29:15 +0000 Subject: [SciPy-user] scipy build with gcc on solaris problems In-Reply-To: References: Message-ID:

John Reid wrote:
> Also when I import scipy.optimize I have the following error. Could
> easy_install be using the solaris CC compiler? Would this cause this
> problem?

I found out how to change the default compiler with easy_install, but now I am told I cannot compile the code on Solaris with gcc:

Building modules... Building module "mvn"... Constructing wrapper function "mvnun"... value,inform = mvnun(lower,upper,means,covar,[maxpts,abseps,releps]) Constructing wrapper function "mvndst"... error,value,inform = mvndst(lower,upper,infin,correl,[maxpts,abseps,releps]) Constructing COMMON block support for "dkblck"... ivls Wrote C/API module "mvn" to file "build/src.solaris-2.10-sun4u-2.5/scipy/stats/mvnmodule.c" Fortran 77 wrappers are saved to "build/src.solaris-2.10-sun4u-2.5/scipy/stats/mvn-f2pywrappers.f"

error: Setup script exited with error: don't know how to compile C/C++ code on platform 'posix' with 'gcc' compiler

To change the default compiler, I created the following ~/.pydistutils.cfg:

[build]
compiler = gcc

Any help appreciated, John.

From nmelgarejodiaz at gmail.com Mon Mar 3 07:10:42 2008 From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz) Date: Mon, 3 Mar 2008 13:10:42 +0100 Subject: [SciPy-user] Multiple figures in a local application Message-ID:

Hi everyone, I've made a graphical application in PyQt4 where clicking the buttons should plot functions predefined in other classes. The problem is that after clicking the second button the program stops responding, and I would like the user to be able to see both figures, or however many there are; as I have it implemented, apparently only one window can be shown at a time. I used threads but the problem persists. I make the figures using pylab with figure(1), figure(2), etc., but on calling show() everything seems to lock up :S Has anyone had to do something similar? How did you solve it? Thanks in advance!

Natali ;)

-------------- next part -------------- An HTML attachment was scrubbed... URL:
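The usual workaround for this is pylab's interactive mode, so that figures are drawn without show() blocking the GUI event loop. A minimal sketch, not Natali's code, and whether it helps depends on the matplotlib backend in use:

import pylab

pylab.ion()            # interactive mode: figures draw without blocking
pylab.figure(1)
pylab.plot([1, 2, 3])
pylab.figure(2)
pylab.plot([3, 1, 2])
pylab.draw()           # refresh both windows; control returns immediately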
From david at ar.media.kyoto-u.ac.jp Mon Mar 3 07:22:17 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Mar 2008 21:22:17 +0900 Subject: [SciPy-user] gutsy amd64 In-Reply-To: <1204478442.14422.23.camel@stargate.org> References: <1204478442.14422.23.camel@stargate.org> Message-ID: <47CBED79.8070704@ar.media.kyoto-u.ac.jp>

osman wrote:
> Hi,
> I have ubuntu 7.10 64 bit on an AMD64 machine. The usual apt-get install
> will not install both scipy and libumfpack.

umfpack is optional for scipy. Do you really need the facilities it provides (sparse algebra)?

> One removes the other. Is
> there a fix to this?

No. I don't understand debian scipy and numpy packaging, to be honest. I gave up using official packages a long time ago.

> I have also downloaded the latest svn but it does
> not build. Errors like:
>
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: expected expression before ')' token
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c:6196: error: too few arguments to function 'SWIG_Python_NewPointerObj'
> error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall
> -Wstrict-prototypes -fPIC -DSCIPY_UMFPACK_H -DSCIPY_AMD_H
> -DNO_ATLAS_INFO=2 -I/usr/local/include -I/usr/include
> -I/usr/lib/python2.4/site-packages/numpy/core/include
> -I/usr/include/python2.4 -c
> build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -o build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.o" failed with exit status 1
> stargate:/home/osman/SCIPY/scipy-bash->

This is due to having installed umfpack from debian: debian modifies the umfpack sources, such that the headers are installed in their own directory (/usr/include/umfpack, instead of /usr/include). That makes sense from a packaging point of view, because it is bad practice to put many headers in /usr/include. But it breaks the scipy build if you do not add /usr/include/umfpack in your site.cfg.

My advice to build scipy from sources on debian/ubuntu:
- install the atlas package, this one is ok and works well: sudo apt-get install atlas-base-dev (you can also install the atlas optimized for your cpu, but you can do that later; it will be automatically used by debian at runtime).
- do not use umfpack and co, remove the packages.
- use g77, and not gfortran. gfortran and g77 are incompatible, and debian/ubuntu still use g77 for their current ABI (that will not change for the next ubuntu version; maybe for ubuntu 8.10; for debian, they are in the middle of the migration).

This will work on any architecture, with scipy and numpy svn.

cheers,

David

From david at ar.media.kyoto-u.ac.jp Mon Mar 3 07:33:03 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Mar 2008 21:33:03 +0900 Subject: [SciPy-user] Newbie help for installing pysamplerate In-Reply-To: References: Message-ID: <47CBEFFF.6060307@ar.media.kyoto-u.ac.jp>

Joseph Anderson wrote:
> What I see is that libsamplerate.so.0 is missing from the /usr/local/lib directory. Is this a file that should be created by pysamplerate's setup.py?
>
> Anyway, I'm doing something wrong; I'm sure it is rather simple.

Well, pysamplerate should not try to load *.so files; that cannot work on Mac OS X. I would have called this a stupid mistake of mine, but I am surprised, because the install did find the libsamplerate.so.0 file, which does not seem to exist on your platform... Anyway, you are not doing anything wrong; it's a mistake of mine. Let me check tonight on my macbook to see what's going on on Mac OS X. Also, note that the most up-to-date version is available in scikits under the name samplerate (I should have mentioned that on the pysamplerate webpage).

cheers,

David
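For reference, the platform-neutral way to do what pysamplerate's cdll.LoadLibrary call attempts is the standard library's ctypes.util.find_library, which knows to look for .so files on Linux and .dylib files on Mac OS X. A minimal sketch, not the actual pysamplerate code:

from ctypes import cdll
from ctypes.util import find_library

# find_library returns something LoadLibrary accepts, or None if the
# library cannot be located (e.g. 'libsamplerate.so.0' on Linux)
name = find_library('samplerate')
if name is None:
    raise ImportError("libsamplerate not found")
_src = cdll.LoadLibrary(name)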
From robert.kern at gmail.com Mon Mar 3 11:29:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 3 Mar 2008 10:29:46 -0600 Subject: [SciPy-user] scipy build with gcc on solaris problems In-Reply-To: References: Message-ID: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com>

On Mon, Mar 3, 2008 at 5:29 AM, John Reid wrote:
> John Reid wrote:
> > Also when I import scipy.optimize I have the following error. Could
> > easy_install be using the solaris CC compiler? Would this cause this
> > problem?
>
> I found out how to change the default compiler with easy_install, but now
> I am told I cannot compile the code on Solaris with gcc:

Using --compiler=gcc doesn't help, actually. That option chooses between different classes of compilers rather than specific executables; 'gcc' is not an option. Look at "python setup.py build_ext --help-compiler" for the available options. Almost certainly, you shouldn't change it. The only real choices are for Windows and old Mac OS. The correct compiler *should* be picked up from the Python Makefile which is stored in your Python installation. Usually, you will need the same compiler that built your Python.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From lists at benair.net Mon Mar 3 11:54:08 2008 From: lists at benair.net (BK) Date: Mon, 03 Mar 2008 17:54:08 +0100 Subject: [SciPy-user] change array index order Message-ID: <1204563248.7503.14.camel@iagpc71.iag.uni-stuttgart.de>

Hi everybody, I am still kind of a newbie, but I've been using scipy for some time now for post-processing of CFD surface ASCII data. Over time I wrote myself a bunch of small tools doing this and that. Lately there was a version change in the CFD code I am using and the output format changed slightly. The old format was I,J-grid data with N variables in the order N,I,J (N being the fastest running index). The new data format uses I,J,N ordering (with I running fastest).

So, is there a way to tell scipy to somehow 'map' one index direction to another one, like: data_new[n,j,i] -> data_old[j,i,n] without copying all the data? This way I could just change my IO routine and keep all the other tools unchanged, and I could easily handle both new and old versions of the output files.

I suspect this is a real newbie question, but so far I haven't been able to figure it out myself. Thanks for your help, and thanks for that great tool scipy!

Best regards, Bene

From wnbell at gmail.com Mon Mar 3 12:15:33 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 3 Mar 2008 11:15:33 -0600 Subject: [SciPy-user] change array index order In-Reply-To: <1204563248.7503.14.camel@iagpc71.iag.uni-stuttgart.de> References: <1204563248.7503.14.camel@iagpc71.iag.uni-stuttgart.de> Message-ID:

On Mon, Mar 3, 2008 at 10:54 AM, BK wrote:
> So, is there a way to tell scipy to somehow 'map' one index direction to
> another one, like:
> data_new[n,j,i] -> data_old[j,i,n]
> without copying all the data?
I've never used it, but I think rollaxis() is what you want:

data_new = rollaxis(data_old, 2, 0)

If that doesn't work, then use two calls to swapaxes():

http://www.scipy.org/Numpy_Example_List#rollaxis
http://www.scipy.org/Numpy_Example_List#swapaxes

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From peridot.faceted at gmail.com Mon Mar 3 12:21:25 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 3 Mar 2008 12:21:25 -0500 Subject: [SciPy-user] change array index order In-Reply-To: References: <1204563248.7503.14.camel@iagpc71.iag.uni-stuttgart.de> Message-ID:

On 03/03/2008, Nathan Bell wrote:
> On Mon, Mar 3, 2008 at 10:54 AM, BK wrote:
> > So, is there a way to tell scipy to somehow 'map' one index direction to
> > another one, like:
> > data_new[n,j,i] -> data_old[j,i,n]
> > without copying all the data?
>
> I've never used it, but I think rollaxis() is what you want:
> data_new = rollaxis(data_old,2,0)
>
> If that doesn't work, then use two calls to swapaxes()

transpose() can do arbitrary permutations of the axes of an array. Depending on how you access the data, you may notice a major change in the speed of your program. Locality of memory reference can make a tremendous difference in runtime on modern architectures. You might think about copying the array into an order that groups the data that's accessed together; this can be achieved with transpose(), copy(), transpose(), since (if I recall correctly) copy() always produces an array in "C order", with the last index changing most rapidly.

Anne
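Both suggestions fit in one short sketch (the array sizes here are made up; data_old is indexed [j, i, n] as in the question):

import numpy as np

data_old = np.arange(24).reshape(3, 4, 2)    # axes (j, i, n)

# rollaxis moves axis 2 (n) to the front; the result is a view, no copy
data_new = np.rollaxis(data_old, 2, 0)       # axes (n, j, i)
assert data_new[1, 2, 3] == data_old[2, 3, 1]

# transpose states the same permutation explicitly
data_new2 = data_old.transpose(2, 0, 1)

# Anne's point: a view only relabels the axes; to actually regroup the
# data in memory (C order, last index fastest), force a contiguous copy
packed = np.ascontiguousarray(data_new)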
From s.mientki at ru.nl Mon Mar 3 14:21:52 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 03 Mar 2008 20:21:52 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <20080303083458.GC14020@phare.normalesup.org> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> Message-ID: <47CC4FD0.9040405@ru.nl>

Gael Varoquaux wrote:
> On Sun, Mar 02, 2008 at 06:24:12PM +0100, Stef Mientki wrote:
>>> The main recommendation I would make to anyone writing data-acquisition
>>> stuff in python is Use Traits/TraitsUI! The ability to auto-generate a
>>> GUI to configure hardware objects based on their Traits definitions is a
>>> *huge* productivity saving.
>> You might be quite right,
>> I've heard this reasoning more than once,
>> but ....
>> ... I'm looking at the wrong documents
>> or
>> ... I'm simply too stupid
>> or
>> ... I'm a completely spoiled windows user
>> but I really really don't understand one bit of Traits :-(
>
> Have you tried looking at:
> http://gael-varoquaux.info/computers/traits_tutorial
> I wrote it specifically targeting someone with no prior knowledge of GUI
> development or even object oriented programming.

thanks Gael, I didn't know of these manuals, but I think I'm still missing the clue completely. What I understand is that traits is a smart replacement of "*args, **kwargs" (which I've never used either). But being a smart replacement, it also makes it a complex / difficult replacement. As far as I understand, your presentation is about how you can easily create a GUI interface with Traits.

Please don't understand me wrong, I don't want to upset you, I think you're a very valuable contributor to both this list and the Python community, but as a follower of the KISS principle, again I don't understand it. Please tell me in 2 lines what's the essential difference between TraitsUI in your presentation, and the lines below (note that with a little effort even "Types" can be removed), so maybe I can add those essentials to my code, or even might switch to traits ;-)

Names = [ 'For All Signals', 'AutoScale', 'Upper Value', 'Lower Value' ]
Values = [ False, True, 200, 20 ]
Types = [ bool, bool ]
OK, Values = MultiLineDialog ( Names, Values, Types,
'Set Border Values',
width = 70 )

cheers, Stef

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: moz-screenshot-5.jpg Type: image/jpeg Size: 6904 bytes Desc: not available URL:

From gael.varoquaux at normalesup.org Mon Mar 3 14:53:43 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 3 Mar 2008 20:53:43 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <47CC4FD0.9040405@ru.nl> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> Message-ID: <20080303195343.GI14020@phare.normalesup.org>

On Mon, Mar 03, 2008 at 08:21:52PM +0100, Stef Mientki wrote:
> What I understand is that traits is a smart replacement of "*args,
> **kwargs"
> (which I've never used either).
> But being a smart replacement, it also makes it a complex / difficult
> replacement.

Well, it's much more than that. Traits can be seen as three things:

* Type validation of attributes: the attributes of an object can be given a type descriptor (more precisely a validation method) and the compliance of the attribute is checked at run-time when this attribute is set. This is very useful for a complex codebase, but probably not for you.

* Reactive/callback programming made easy. When you set the attribute of a HasTraits object, a method of the object can be called to handle this change. This is known as reactive programming and is a great programming pattern that makes your code easier to read and more modular. I couldn't stress this more.

* Visualization made easy: from the two bullet points above, TraitsUI can generate interactive dialogs. The automatic generation of dialogs combined with the reactive programming makes the codebase very light and nice to read.
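All three points fit in a dozen lines. A minimal sketch, with an invented class and invented trait names, using the enthought.traits.api spelling of ETS 2.x (in current releases the import is simply traits.api):

from enthought.traits.api import HasTraits, Bool, Range

class Scope(HasTraits):
    # point 1: type-validated attributes; assigning a string here raises
    autoscale = Bool(True)
    upper = Range(low=-1000.0, high=1000.0, value=200.0)

    # point 2: reactive programming; called automatically on each change
    def _upper_changed(self, old, new):
        print "upper: %s -> %s" % (old, new)

s = Scope()
s.upper = 20.0          # fires _upper_changed
s.configure_traits()    # point 3: an auto-generated dialog for free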
> but as follower of the KISS principle, again I don't understand it.

KISS is great, but remember: 1 month of hard work can save you one weekend of learning. If you start doing interactive GUIs, you won't be able to get around learning a few things, and I think it is easier to learn Traits than the alternatives.

> Please tell me in 2 lines what's the essential difference between TraitsUI
> in your presentation,
> and the lines below (note that with a little effort even "Types" can be
> removed),
> so maybe I can add those essentials to my code,
> or even might switch to traits ;-)
>
> Names = [ 'For All Signals', 'AutoScale', 'Upper Value', 'Lower Value' ]
> Values = [ False, True, 200, 20 ]
> Types = [ bool, bool ]
> OK, Values = MultiLineDialog ( Names, Values, Types,
> 'Set Border Values',
> width = 70 )

No object oriented programming. I don't believe you can do complex codebases without OOP. Yes you can fight to avoid it, but you are doing yourself more harm than good. And besides, it is not that hard.

In addition you don't have any reactive programming: the code above cannot be interactive. This is what I call a "visual script". Going from program-driven logic, where the user is presented sequentially a set of dialogs, to user-driven logic, where the user (or an experiment) keeps interacting with the program, will require a paradigm shift. I believe that Traits will make this paradigm shift easier than any alternative. I also believe your second best bet is PyQt. But Traits has a nice model/view separation in which you can get the reactive programming without the GUI event-loop, and that's really nice.

In short, yes you do have to learn things, but you won't get an interactive program for free, sorry.

Gaël

From j.reid at mail.cryst.bbk.ac.uk Mon Mar 3 15:12:57 2008 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Mon, 03 Mar 2008 20:12:57 +0000 Subject: [SciPy-user] scipy build with gcc on solaris problems In-Reply-To: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> Message-ID:

Robert Kern wrote:
> The correct compiler *should* be picked up from the Python Makefile
> which is stored in your Python installation. Usually, you will need
> the same compiler that built your Python.

Thanks for the info. Now I've compiled LAPACK with f77 rather than gfortran and "-dalign -native -xO5" (which is what was in Make.inc for the ATLAS config as recommended). I've configured ATLAS using:

../configure -C ic cc -F ic -KPIC -F gc -fPIC --with-netlib-lapack=../../lapack-3.1.1/lapack_LINUX.a

so I think the C API to LAPACK should now be compiled with cc, which is what I believe my python was compiled with. I seem to get the exact same core dump though. Any ideas what else I could check?

Thanks, John.

From s.mientki at ru.nl Mon Mar 3 15:29:11 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 03 Mar 2008 21:29:11 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <20080303195343.GI14020@phare.normalesup.org> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> Message-ID: <47CC5F97.1080604@ru.nl>

hi Gael, I might even agree more with you than you or I thought at first sight ...

Gael Varoquaux wrote:
> Traits can be seen as three things:
> * Type validation of attributes:

good point, if you know how to handle the exceptions, instead of "error 1229: contact your distributor"

> * Reactive/callback programming made easy.

very good point, should be done all the time!

> * Visualization made easy:

it's boring, but also a very good point; although I'm not an MS fan at all, all programs should be as good as Excel on this point.

>> but as follower of the KISS principle, again I don't understand it.
> KISS is great, but remember: 1 month of hard work can save you one weekend
> of learning. If you start doing interactive GUIs, you won't be able
> to get around learning a few things, and I think it is easier to learn
> Traits than the alternatives.

I hope the future will change that: simple programming for everyone!
> No object oriented programming. I don't believe you can do complex
> codebases without OOP.

agreed.

> In addition you don't have any reactive programming: the code above
> cannot be interactive. This is what I call a "visual script". Going from
> program-driven logic, where the user is presented sequentially a set of
> dialogs, to user-driven logic, will require a paradigm shift.

More than agree; my adage: "the best programs are written by users". Unfortunately, programming is still too difficult for most domain experts.

> I believe
> that Traits will make this paradigm shift easier than any alternative. I
> also believe your second best bet is PyQt. But Traits has a nice
> model/view separation in which you can get the reactive programming
> without the GUI event-loop, and that's really nice.
>
> In short, yes you do have to learn things, but you won't get an
> interactive program for free, sorry.

And now I get a strange feeling, ... ... that I'm working on some kind of traits, maybe a lot less sophisticated, but on the other hand much easier ;-)

thanks for the explanation, cheers, Stef

From pwang at enthought.com Mon Mar 3 15:47:16 2008 From: pwang at enthought.com (Peter Wang) Date: Mon, 3 Mar 2008 14:47:16 -0600 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <47CC4FD0.9040405@ru.nl> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> Message-ID: <113C8C25-06B6-4878-A248-8047F5F77B2A@enthought.com>

On Mar 3, 2008, at 1:21 PM, Stef Mientki wrote:
> What I understand is that traits is a smart replacement of "*args,
> **kwargs"
> (which I've never used either).

Not at all, not more so than a car is just a smart replacement of horses with circular legs. :) Gael succinctly identifies the heart of the matter:

> Going from
> program-driven logic, where the user is presented sequentially a set of
> dialogs, to user-driven logic, where the user (or an experiment) keeps
> interacting with the program, will require a paradigm shift. I believe
> that Traits will make this paradigm shift easier than any alternative.

Let's look at your example:

> Please tell me in 2 lines what's the essential difference between
> TraitsUI in your presentation, and the lines below
> Names = [ 'For All Signals', 'AutoScale', 'Upper Value', 'Lower Value' ]
> Values = [ False, True, 200, 20 ]
> Types = [ bool, bool ]
> OK, Values = MultiLineDialog ( Names, Values, Types,
> 'Set Border Values',
> width = 70 )

Presumably this dialog is part of a larger application. Can your user write a script to invoke parts of this dialog? Can the user have this dialog open while they have a different dialog open (i.e. a non-modal view)? What if option "Foo" in that other dialog directly affects the AutoScale setting? If this is a plot, perhaps their mouse interaction with the plot turns off AutoScale, since they might be zooming in to a sub-region; how is that going to work? This dialog is presumably a controller for a single piece of your program; how will you embed it inside a larger GUI panel when the user wants to act on several items at the same time?

The more moving parts you have in an application, the more relationships and interdependencies you have to manage.
For most software, the best way to manage what would generally be an N^2 explosion of relationships is by grouping closely-related things into patterns that have well-defined semantics. OOP by itself is not enough.

Traits allows you to do reactive programming in Python, meaning that your components are much less tightly coupled. A very nice side effect is that one tends to write more explicit "model" classes, which are then viewed by view objects, and manipulated by controller objects. A user can easily script in this environment by directly manipulating the model with Python code. A designer can trivially switch out views. Your app suddenly stops being a rigid, procedurally-constructed closed box of UI code tightly coupled with domain logic; instead, it is a live system of objects that communicate via events. You can fire up your application with IPython (or embed a shell like PyCrust into it) and introspect and interact with live parts of your running application. This event-/message-driven approach to software construction not only fits very well with the goal of providing a friendly computation environment for non-programmer domain experts, but it is actually the essence of object oriented programming.

Just my $.02 as a long-time traits user. :) -Peter

From gael.varoquaux at normalesup.org Mon Mar 3 15:57:09 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 3 Mar 2008 21:57:09 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <47CC5F97.1080604@ru.nl> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> Message-ID: <20080303205709.GA7804@phare.normalesup.org>

On Mon, Mar 03, 2008 at 09:29:11PM +0100, Stef Mientki wrote:
> And now I get a strange feeling, ...
> ... that I'm working on some kind of traits,
> maybe a lot less sophisticated,
> but on the other hand much easier ;-)

Beware, you might be reinventing the wheel. As you try to take your ideas further, you will probably end up with the same problems Traits had to face three years ago. I don't believe in the "this is too complicated for me, let me reinvent something similar but different" approach. Anyhow, I wish you good luck. I am interested in what might come out; however, I'll stick with Traits, because I know it has been used for years in real-world large projects and is now at its third major version, with many changes benefiting from experience.

Gaël
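The model/view decoupling that Peter and Gael describe, without any GUI event loop, fits in a dozen lines. A sketch with invented class names (Model, Monitor), using the on_trait_change API in its ETS 2.x spelling:

from enthought.traits.api import Float, HasTraits, Instance

class Model(HasTraits):
    level = Float(0.0)

class Monitor(HasTraits):
    model = Instance(Model)

    def _model_changed(self):
        # wire a listener to the model's 'level' trait; the two classes
        # otherwise know nothing about each other
        self.model.on_trait_change(self.report, 'level')

    def report(self, value):
        print "level is now", value

m = Model()
Monitor(model=m)
m.level = 3.14    # prints: level is now 3.14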
From aisaac at american.edu Mon Mar 3 19:20:05 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 3 Mar 2008 19:20:05 -0500 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <20080303195343.GI14020@phare.normalesup.org> References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> Message-ID:

On Mon, 3 Mar 2008, Gael Varoquaux apparently wrote:
> * Type validation of attributes: the attributes of an object can be given
> a type descriptor (more precisely a validation method) and the
> compliance of the attribute is checked at run-time when this attribute
> is set. This is very useful for a complex codebase, but probably not for
> you.

Can you easily state briefly how this differs from the use of properties (with type checking on the setter)?

Thank you, Alan Isaac

From gael.varoquaux at normalesup.org Mon Mar 3 19:36:23 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 4 Mar 2008 01:36:23 +0100 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: References: <20080303195343.GI14020@phare.normalesup.org> Message-ID: <20080304003623.GA5384@phare.normalesup.org>

On Mon, Mar 03, 2008 at 07:20:05PM -0500, Alan G Isaac wrote:
> Can you easily state briefly how this differs from the use of
> properties (with type checking on the setter)?

Fundamentally, I don't think this differs much. You have a lot of syntactic sugar around it, which makes the code more readable and easier to reuse, as it gives you the commonly used patterns (dynamic initialisation, delegation, as well as a lot of existing validation types). Don't under-estimate the work to get the syntax and the different patterns right. When you use it a lot, Traits simply "feels right". I was present when someone asked R. Kern (I hope you won't mind me quoting you, Robert) "what do you use Traits for?", and his reply was, "As an 'object' replacement".

In addition, the property-calling code is written in C, which makes it orders of magnitude faster than standard Python properties.

Cheers, Gaël
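For comparison, the property version of the validation point, in the Python 2.5 spelling of property. This is a sketch covering only type checking; the change-notification and auto-generated-UI parts have no one-line stdlib equivalent:

class Scope(object):
    def _get_upper(self):
        return self._upper
    def _set_upper(self, value):
        # type checking on the setter, as Alan describes
        if not isinstance(value, (int, float)):
            raise TypeError("upper must be a number")
        self._upper = value
    upper = property(_get_upper, _set_upper)

s = Scope()
s.upper = 20.0        # accepted
try:
    s.upper = "20"    # rejected by the setter
except TypeError:
    pass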
From osman at fuse.net Mon Mar 3 22:13:26 2008 From: osman at fuse.net (osman) Date: Mon, 03 Mar 2008 22:13:26 -0500 Subject: [SciPy-user] gutsy amd64 In-Reply-To: References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org> <1204495164.14422.39.camel@stargate.org> Message-ID: <1204600407.4862.4.camel@stargate.org>

On Sun, 2008-03-02 at 22:59 +0000, Robin wrote:
> P.S.
>
> Posting a build log and the output of python setup.py config might
> also help diagnose the problem.

OK. I have attached numpy's python setup.py config output. When I try scipy it gives an immediate error. Earlier I had forgotten to re-install numpy, so it was using the ubuntu/debian installed package for numpy.

stargate:/home/osman/scipy-bash-> python setup.py build
Traceback (most recent call last):
File "setup.py", line 92, in
setup_package()
File "setup.py", line 63, in setup_package
from numpy.distutils.core import setup
File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 46, in
import linalg
File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in
from linalg import *
File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 28, in
from numpy.linalg import lapack_lite
ImportError: /usr/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string

I followed the wiki exactly (just cut and pasted).

-osman

-------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.config.log Type: text/x-log Size: 6251 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.build.log Type: text/x-log Size: 12821 bytes Desc: not available URL:

From osman at fuse.net Mon Mar 3 22:18:25 2008 From: osman at fuse.net (osman) Date: Mon, 03 Mar 2008 22:18:25 -0500 Subject: [SciPy-user] gutsy amd64 In-Reply-To: <47CBED79.8070704@ar.media.kyoto-u.ac.jp> References: <1204478442.14422.23.camel@stargate.org> <47CBED79.8070704@ar.media.kyoto-u.ac.jp> Message-ID: <1204600705.4862.9.camel@stargate.org>

On Mon, 2008-03-03 at 21:22 +0900, David Cournapeau wrote:
> This is due to having installed umfpack from debian: debian modifies the
> umfpack sources, such that the headers are installed in their own
> directory (/usr/include/umfpack, instead of /usr/include). That makes
> sense from a packaging point of view, because it is bad practice to
> put many headers in /usr/include. But it breaks the scipy build if you do
> not add /usr/include/umfpack in your site.cfg.

I need umfpack for sfe, a python-based FE package which uses scipy/numpy. I'll follow your advice after trying to build with gfortran based on Robin's advice.

> My advice to build scipy from sources on debian/ubuntu:
> - install the atlas package, this one is ok and works well: sudo apt-get
> install atlas-base-dev (you can also install the atlas optimized for
> your cpu, but you can do that later; it will be automatically used by
> debian at runtime).
> - do not use umfpack and co, remove the packages.
> - use g77, and not gfortran. gfortran and g77 are incompatible, and
> debian/ubuntu still use g77 for their current ABI (that will not change
> for the next ubuntu version; maybe for ubuntu 8.10; for debian, they are
> in the middle of the migration).

Thanks for the info. Will let you know what happened.

br, -osman

From aisaac at american.edu Tue Mar 4 01:18:58 2008 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 4 Mar 2008 01:18:58 -0500 Subject: [SciPy-user] Control Systems Toolbox In-Reply-To: <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com> References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com> Message-ID:

On Mon, 3 Mar 2008, Jasper Stolte apparently wrote:
> I'm not quite sure how this licensing stuff works. The
> algorithms are all public domain afaik, Octave made some
> implementation of them in C++. The class structure of this
> toolbox will be totally different, it's even written in
> a whole other language. Is it forbidden for us to look at
> how they implemented it without going to GPL?
Safest is not to look. And ask the author to release under BSD. (Sometimes s/he will.) But this seems an interesting case, if you describe it accurately. When you say the algorithms are in the public domain, does that mean that you know of a public domain implementation in code or in pseudocode?

Cheers, Alan Isaac

From aisaac at american.edu Tue Mar 4 01:19:00 2008 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 4 Mar 2008 01:19:00 -0500 Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets In-Reply-To: <20080304003623.GA5384@phare.normalesup.org> References: <20080303195343.GI14020@phare.normalesup.org> <20080304003623.GA5384@phare.normalesup.org> Message-ID:

>> On Mon, 3 Mar 2008, Gael Varoquaux apparently wrote:
>>> * Type validation of attributes: the attributes of an object can be given
>>> a type descriptor (more precisely a validation method) and the
>>> compliance of the attribute is checked at run-time when this attribute
>>> is set. This is very useful for a complex codebase, but probably not for
>>> you.
> On Mon, Mar 03, 2008 at 07:20:05PM -0500, Alan G Isaac wrote:
>> Can you easily state briefly how this differs from the use of
>> properties (with type checking on the setter)?

On Tue, 4 Mar 2008, Gael Varoquaux apparently wrote:
> Fundamentally, I don't think this differs much. You have a lot of
> syntactic sugar around it, which makes the code more readable and easier
> to reuse, as it gives you the commonly used patterns (dynamic
> initialisation, delegation, as well as a lot of existing validation
> types). Don't under-estimate the work to get the syntax
> and the different patterns right. When you use it a lot,
> Traits simply "feels right". ...
> In addition, the property-calling code is written in C,
> which makes it orders of magnitude faster than standard
> Python properties.

Thanks! Alan

From david at ar.media.kyoto-u.ac.jp Tue Mar 4 01:14:47 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Mar 2008 15:14:47 +0900 Subject: [SciPy-user] gutsy amd64 In-Reply-To: <1204600407.4862.4.camel@stargate.org> References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org> <1204495164.14422.39.camel@stargate.org> <1204600407.4862.4.camel@stargate.org> Message-ID: <47CCE8D7.2060407@ar.media.kyoto-u.ac.jp>

osman wrote:
> stargate:/home/osman/scipy-bash-> python setup.py build
> Traceback (most recent call last):
> File "setup.py", line 92, in
> setup_package()
> File "setup.py", line 63, in setup_package
> from numpy.distutils.core import setup
> File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 46, in
> import linalg
> File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in
> from linalg import *
> File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 28, in
> from numpy.linalg import lapack_lite
> ImportError: /usr/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string
>
> I followed the wiki exactly (just cut and pasted).

This is likely because you mixed g77 and gfortran. You really should not use gfortran on ubuntu, unless you really know what you are doing. It just makes things more complicated, for no additional value.
cheers,

David

From j.reid at mail.cryst.bbk.ac.uk  Tue Mar  4 05:17:38 2008
From: j.reid at mail.cryst.bbk.ac.uk (John Reid)
Date: Tue, 04 Mar 2008 10:17:38 +0000
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com>
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com>
Message-ID:

How do I know which compilers I should be using? Which compilers does scipy use by default on Solaris 10?

My guess is that I need to compile LAPACK and ATLAS with a combination of f77 and cc. The documentation for ATLAS strongly suggests using gcc though. Should I ignore this?

Thanks,
John.

From robince at gmail.com  Tue Mar  4 05:32:44 2008
From: robince at gmail.com (Robin)
Date: Tue, 4 Mar 2008 10:32:44 +0000
Subject: [SciPy-user] gutsy amd64
In-Reply-To: <1204600407.4862.4.camel@stargate.org>
References: <1204478442.14422.23.camel@stargate.org> <1204479876.14422.28.camel@stargate.org> <1204495164.14422.39.camel@stargate.org> <1204600407.4862.4.camel@stargate.org>
Message-ID:

On Tue, Mar 4, 2008 at 3:13 AM, osman wrote:
> OK. I have attached numpy's python setup.py config. When I try scipy
> it gives an immediate error. Before, I had forgotten to re-install numpy:
> it was using the ubuntu/debian installed package for numpy.
>
> stargate:/home/osman/scipy-bash-> python setup.py build
> Traceback (most recent call last):
>   File "setup.py", line 92, in <module>
>     setup_package()
>   File "setup.py", line 63, in setup_package
>     from numpy.distutils.core import setup
>   File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 46, in <module>
>     import linalg
>   File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in <module>
>     from linalg import *
>   File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 28, in <module>
>     from numpy.linalg import lapack_lite
> ImportError: /usr/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string
>
> I followed the wiki exactly (just cut and pasted).
>
> -osman

The only thing I can see that is different from what's worked for me is that you have:

libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas', 'gfortran']

whereas I have:

libraries = ['lapack', 'f77blas', 'cblas', 'atlas']

so I would check your site.cfg file. I only needed to manually add and copy the gfortran library to the umfpack entry...

The other thing I would note, which caught me a few times: when rebuilding you have to manually delete the old numpy/scipy directory (i.e. /usr/lib/python2.5/site-packages/numpy) as well as the 'build' directory where you run python setup.py.

Otherwise perhaps it's best to follow David's advice, and use g77. I think most of my instructions would still hold, except you would put g2c instead of gfortran as the extra UMFPACK library and copy libg2c to scipy_build rather than gfortran. I must admit, after some time trying, this is the only method that I've had success with (and it's worked for me on a number of fresh Ubuntu installs, both 32 and 64 bit). It is a complicated process though, and things can easily get messed up... I'd suggest cleaning/deleting everything and starting from scratch (lapack, atlas, umfpack), making sure you add in all the -fPIC flags for lapack and atlas, point atlas to lapack and make sure it's using gfortran.

Actually most of the problems I was having were with umfpack, but it looks like you're getting stuck with an ATLAS issue... Sorry I can't be more help...
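One more check that may help, since numpy records its build configuration: show_config() prints the BLAS/LAPACK sections that were actually picked up at build time, which makes a stray 'gfortran' entry easy to spot.

import numpy
# Prints the lapack/blas/atlas sections (libraries, library_dirs, language)
# that numpy's setup detected when it was compiled.
numpy.show_config()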
Robin

From robert.kern at gmail.com  Tue Mar  4 05:36:11 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 4 Mar 2008 04:36:11 -0600
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To:
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com>
Message-ID: <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>

On Tue, Mar 4, 2008 at 4:17 AM, John Reid wrote:
> How do I know which compilers I should be using? Which compilers does
> scipy use by default on Solaris 10?

For C, it should (in both the descriptive and normative sense) use whatever compiler built Python. Look in $PREFIX/lib/python2.x/config/Makefile for the CC variable.

For Fortran, the build will probably use f77 if you don't pick anything else using --fcompiler. You may or may not need to change that.

> My guess is that I need to compile LAPACK and ATLAS with a combination
> of f77 and cc. The documentation for ATLAS strongly suggests using gcc
> though. Should I ignore this?

I don't know any specific details about building on Solaris, but using a different compiler for the ATLAS library than your Python is probably not going to work too well.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From j.reid at mail.cryst.bbk.ac.uk  Tue Mar  4 07:30:25 2008
From: j.reid at mail.cryst.bbk.ac.uk (John Reid)
Date: Tue, 04 Mar 2008 12:30:25 +0000
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To: <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
Message-ID:

Robert Kern wrote:
>> My guess is that I need to compile LAPACK and ATLAS with a combination
>> of f77 and cc. The documentation for ATLAS strongly suggests using gcc
>> though. Should I ignore this?
>
> I don't know any specific details about building on Solaris, but using
> a different compiler for the ATLAS library than your Python is
> probably not going to work too well.

ATLAS uses different compilers in 8 or so different ways. Are you suggesting I want to change all of them? Here is some info from ATLAS's install guide:

3.2 Changing the compilers and flags that ATLAS uses for the build

ATLAS defines eight different compiler and associated flag macros in its Make.inc which are used to compile various files during the install process. ATLAS's configure provides flags for changing both the compiler and the flags for each of these macros. In the following list, the macro name is given first, and the configure flag abbreviation is in parentheses:

1. XCC (xc): C compiler used to compile ATLAS's build harness routines (these never appear in any user-callable library)
2. GOODGCC (gc): gcc with any required architectural flags (eg. -m64), which will be used to assemble cpp-enabled assembly and to compile certain multiple implementation routines that specifically request gcc
3. F77 (if): FORTRAN compiler used to compile ATLAS's FORTRAN77 API interface routines.
4. ICC (ic): C compiler used to compile ATLAS's C API interface routines.
5. DMC (dm): C compiler used to compile ATLAS's generated double precision (real and complex) matmul kernels
6. SMC (sm): C compiler used to compile ATLAS's generated single precision (real and complex) matmul kernels
7. DKC (dk): C compiler used to compile all other double precision routines (mainly used for other kernels, thus the K)
8. SKC (sk): C compiler used to compile all other single precision routines (mainly used for other kernels, thus the K)

It is almost never a good idea to change DMC or SMC, and it is only very rarely a good idea to change DKC or SKC. For ATLAS 3.8.0, all architectural defaults are set using gcc 4.2 only (the one exception is MIPS/IRIX, where SGI's compiler is used). In most cases, switching these compilers will get you worse performance and accuracy, even when you are absolutely sure it is a better compiler and flag combination! In particular we tried the Intel compiler icc (called icl on Windows) on Intel x86 platforms, and overall performance was lower than gcc. Even worse, from the documentation icc does not seem to have any firm IEEE floating point compliance unless you want to run so slow that you could compute it by hand faster. This means that whenever icc achieves reasonable performance, I have no idea if the error will be bounded or not. I could not obtain access to icc on the Itaniums, where icc has historically been much faster than gcc, but I note that the performance of gcc 4.2 is much better than gcc3 for most routines, so gcc may be the best compiler there now as well.

There is almost never a need to change XCC, since it doesn't affect the output libraries in any way, and we have seen that changing the kernel compilers is a bad idea. However, if you yourself use a non-gnu compiler, like Intel's icc or ifort, then what you need to do is tell ATLAS to compile its interface routines with your compilers, which is discussed in Section 3.2.1. Another common problem is that your OS has been built with an older gcc whose libraries are incompatible with gcc 4.2. In this case, creating an executable with gcc 4.2 can cause problems, and so what you want to do is keep gcc3 as your default compiler (compiling ATLAS interface routines with it, as well as using it for all linking) but compile the ATLAS kernel routines with gcc4. This case is discussed in Section 3.2.2. For those who insist on monkeying with other compilers, Section 3.2.3 gives some guidance. Finally, installing ATLAS without a FORTRAN compiler is discussed in Section 3.2.4.

From bsouthey at gmail.com  Tue Mar  4 09:36:47 2008
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 04 Mar 2008 08:36:47 -0600
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To:
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com>
Message-ID: <47CD5E7F.3030508@gmail.com>

Hi,
Really this is legal advice that you need from a lawyer (and I'm not one). However, there are resources that can help you. I would suggest first browsing the Software Freedom Law Center's guide "A Legal Issues Primer for Open Source and Free Software Projects", which is available at http://www.softwarefreedom.org/resources/2008/foss-primer.html, and also their viewpoint on using BSD licenses in GPL code: http://www.softwarefreedom.org/resources/2007/gpl-non-gpl-collaboration.html.

If you use someone's code (or pseudocode for that matter) in some way, then you are bound by the terms of the code. Just looking at the code (or pseudocode) is probably sufficient for the license terms to apply (especially with software patents involved).
Here you are not looking at the algorithm, which might be controlled by some license, but at an actual expression of that algorithm, which is controlled by the author's copyrights and consequently the license. Thus the need for clean room design (http://en.wikipedia.org/wiki/Clean_room_design).

Regards
Bruce

Alan G Isaac wrote:
> On Mon, 3 Mar 2008, Jasper Stolte apparently wrote:
>> I'm not quite sure how this licensing stuff works. The
>> algorithms are all public domain afaik, Octave made some
>> implementation of them in C++. The class structure of this
>> toolbox will be totally different, it's even written in
>> a whole other language. Is it forbidden for us to look at
>> how they implemented it without going to GPL?
>
> Safest is not to look. And ask the author to release under BSD.
> (Sometimes s/he will.)
>
> But this seems an interesting case, if you accurately describe it.
>
> When you say the algorithm is in the public domain,
> does that mean that you know of a public domain
> implementation in code or in pseudocode?
>
> Cheers,
> Alan Isaac

From j.reid at mail.cryst.bbk.ac.uk  Tue Mar  4 13:00:43 2008
From: j.reid at mail.cryst.bbk.ac.uk (John Reid)
Date: Tue, 04 Mar 2008 18:00:43 +0000
Subject: [SciPy-user] Newbie broadcasting question
Message-ID:

This works fine:

In [49]: numpy.zeros((3,2))+numpy.array([1,2])
Out[49]:
array([[ 1.,  2.],
       [ 1.,  2.],
       [ 1.,  2.]])

but this doesn't:

In [50]: numpy.zeros((3,2))+numpy.array([1,2,3])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
C:\Dev\MyProjects\Bio\Python\site_dpm\ in ()
ValueError: shape mismatch: objects cannot be broadcast to a single shape
> (1)()

What is the simplest way to get this addition to work? I would like the following:

array([[ 0.,  0.],
       [ 0.,  0.],
       [ 0.,  0.]])
+
array([1,2,3])
=
array([[ 1.,  1.],
       [ 2.,  2.],
       [ 3.,  3.]])

In general my arrays are held in variables and are not constructed on the fly. This works with the '+' operator:

numpy.zeros((3,2)).T + numpy.array([1,2,3])

but not with '+='. This doesn't work:

In [54]: a=zeros((3,2))
In [55]: a.T += numpy.array([1,2,3])
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
C:\Dev\MyProjects\Bio\Python\site_dpm\ in ()
AttributeError: attribute 'T' of 'numpy.ndarray' objects is not writable
> (1)()

My guess is that I should do

a = (a.T + numpy.array([1,2,3])).T

but I wonder if there is a more efficient way. Am I missing something?

Thanks,
John.

From robince at gmail.com  Tue Mar  4 13:13:08 2008
From: robince at gmail.com (Robin)
Date: Tue, 4 Mar 2008 18:13:08 +0000
Subject: [SciPy-user] Newbie broadcasting question
In-Reply-To:
References:
Message-ID:

On Tue, Mar 4, 2008 at 6:00 PM, John Reid wrote:
> This works fine:
>
> In [49]: numpy.zeros((3,2))+numpy.array([1,2])
> Out[49]:
> array([[ 1.,  2.],
>        [ 1.,  2.],
>        [ 1.,  2.]])
>
> but this doesn't:
>
> In [50]: numpy.zeros((3,2))+numpy.array([1,2,3])
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> C:\Dev\MyProjects\Bio\Python\site_dpm\ in ()
> ValueError: shape mismatch: objects cannot be broadcast to a single shape
> > (1)()
>
> What is the simplest way to get this addition to work? I would like the following:
>
> array([[ 0.,  0.],
>        [ 0.,  0.],
>        [ 0.,  0.]])
> +
> array([1,2,3])
> =
> array([[ 1.,  1.],
>        [ 2.,  2.],
>        [ 3.,  3.]])
>
> In general my arrays are held in variables and are not constructed on
> the fly. This works with the '+' operator:
>
> numpy.zeros((3,2)).T + numpy.array([1,2,3])
>
> but not with '+='. This doesn't work:
>
> In [54]: a=zeros((3,2))
> In [55]: a.T += numpy.array([1,2,3])
> ---------------------------------------------------------------------------
> AttributeError                            Traceback (most recent call last)
> C:\Dev\MyProjects\Bio\Python\site_dpm\ in ()
> AttributeError: attribute 'T' of 'numpy.ndarray' objects is not writable
> > (1)()
>
> My guess is that I should do
>
> a = (a.T + numpy.array([1,2,3])).T
>
> but I wonder if there is a more efficient way. Am I missing something?
>
> Thanks,
> John.

You could try:

aT = a.T
aT += numpy.array([1,2,3])

Since aT is a view, the data is the same as a, so a itself will be updated.

From robert.kern at gmail.com  Tue Mar  4 13:19:35 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 4 Mar 2008 12:19:35 -0600
Subject: [SciPy-user] Newbie broadcasting question
In-Reply-To:
References:
Message-ID: <3d375d730803041019s621248d2qf74939a9c7dd21d7@mail.gmail.com>

On Tue, Mar 4, 2008 at 12:00 PM, John Reid wrote:
> This works fine:
>
> In [49]: numpy.zeros((3,2))+numpy.array([1,2])
> Out[49]:
> array([[ 1.,  2.],
>        [ 1.,  2.],
>        [ 1.,  2.]])
>
> but this doesn't:
>
> In [50]: numpy.zeros((3,2))+numpy.array([1,2,3])
> ValueError: shape mismatch: objects cannot be broadcast to a single shape
>
> What is the simplest way to get this addition to work?

Instead of adding a shape-(3,) array, make it a shape-(3,1) array.

In [1]: from numpy import *
In [2]: z = zeros((3,2))
In [3]: a = array([1,2,3])
In [5]: z + a[:,newaxis]
Out[5]:
array([[ 1.,  1.],
       [ 2.,  2.],
       [ 3.,  3.]])

A good overview of broadcasting is here:
http://www.scipy.org/EricsBroadcastingDoc

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com  Tue Mar  4 13:28:14 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 4 Mar 2008 12:28:14 -0600
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To:
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
Message-ID: <3d375d730803041028m76e15bd0je7f45adf3685b1b9@mail.gmail.com>

On Tue, Mar 4, 2008 at 6:30 AM, John Reid wrote:
> Robert Kern wrote:
> >> My guess is that I need to compile LAPACK and ATLAS with a combination
> >> of f77 and cc. The documentation for ATLAS strongly suggests using gcc
> >> though. Should I ignore this?
> >
> > I don't know any specific details about building on Solaris, but using
> > a different compiler for the ATLAS library than your Python is
> > probably not going to work too well.
>
> ATLAS uses different compilers in 8 or so different ways. Are you
> suggesting I want to change all of them?

3-8, probably.
But like I said, I know very few details about building ATLAS or scipy on Solaris. I'm out of my depth.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From stef.mientki at gmail.com  Tue Mar  4 13:46:32 2008
From: stef.mientki at gmail.com (Stef Mientki)
Date: Tue, 04 Mar 2008 19:46:32 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <113C8C25-06B6-4878-A248-8047F5F77B2A@enthought.com>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <113C8C25-06B6-4878-A248-8047F5F77B2A@enthought.com>
Message-ID: <47CD9908.1030506@gmail.com>

Peter Wang wrote:
> On Mar 3, 2008, at 1:21 PM, Stef Mientki wrote:
>
>> What I understand is that traits is a smart replacement of "*arg,
>> **kwargs" (which I've never used either).
>
> Not at all, not more so than a car is just a smart replacement of
> horses with circular legs. :)

yes, just like a "deux Chevaux Vapeur", or "ugly duck" as it is called in our country, but at least I get forward ;-)

thanks,
Stef Mientki

From stef.mientki at gmail.com  Tue Mar  4 14:11:25 2008
From: stef.mientki at gmail.com (Stef Mientki)
Date: Tue, 04 Mar 2008 20:11:25 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <20080303205709.GA7804@phare.normalesup.org>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org>
Message-ID: <47CD9EDD.7000002@gmail.com>

hi Gael,

Gael Varoquaux wrote:
> On Mon, Mar 03, 2008 at 09:29:11PM +0100, Stef Mientki wrote:
>> And now I get a strange feeling, ...
>> ... that I'm working on some kind of traits,
>> maybe a lot less sophisticated,
>> but on the other hand much easier ;-)
>
> Beware, you might be reinventing the wheel.

Don't think so; let's say I'm trying to make an octagon, not as good as a wheel, but better than a square. (Or am I creating a hovercraft ;-)

We started this discussion in November last year. I had the plan to use Vision to develop a LabView-like environment. You were the one who told me that Enthought's product based on TraitsUI would be the best. I agreed with you, but as nobody had seen the Enthought product, and it would take until the end of 2007 before it was made public, I decided that it didn't fit my time schedule. (And by now I still haven't seen anything from Enthought ;-) Going for the best is not always the best choice; if we went for fusion now, you wouldn't be able to read this mail ;-)

> As you try to take your ideas
> further, you will probably end up with the same problems Traits had to
> face three years ago. I don't believe in the "this is too complicated for
> me, let me reinvent something similar but different" approach.
>
> Anyhow, I wish you good luck.
> I am interested in what might come out;
> however, I'll stick with Traits, because I know it has been used for years
> in real-world large projects and is now at its third major version, with
> many changes benefiting from experience.

I agree, Traits might be a very good choice (once you've mastered it), so again I fully agree there's no single argument to leave Traits, but there are a few obstacles to getting started with Traits.

cheers,
Stef

From aisaac at american.edu  Tue Mar  4 14:19:21 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 4 Mar 2008 14:19:21 -0500
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CD9EDD.7000002@gmail.com>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com>
Message-ID:

On Tue, 04 Mar 2008, Stef Mientki apparently wrote:
> there are a few obstacles to getting started with Traits.

I have not really played with Traits yet, but at first approach it seems pretty natural. What are the obstacles?

Cheers,
Alan Isaac

From webb.sprague at gmail.com  Tue Mar  4 14:31:10 2008
From: webb.sprague at gmail.com (Webb Sprague)
Date: Tue, 4 Mar 2008 11:31:10 -0800
Subject: [SciPy-user] minpack.error / fsolve problem
In-Reply-To:
References: <47C9110F.4040807@scipy.org>
Message-ID:

If anybody cares, I resolved the issue by using scipy.optimize.brent() to minimize (f(x) - y)**2. Works great now.

On 3/1/08, Webb Sprague wrote:
> Thanks for the response, dmitrey!
>
> On Sat, Mar 1, 2008 at 12:17 AM, dmitrey wrote:
> > As for me it yields "name LcUtil is not defined" (line 48). Also, there
> > was some indent problems near try-except block, mb due to space-tab
> > different numbers from the attached file.
>
> Sorry, but there is a lot of infrastructure you don't have, so it won't
> run as is. Plus it was cut and paste, etc. But it works 98% of the
> time in its context.
>
> > I would recommend you try using another solver, for example nssolve from
> > scikits.openopt.
>
> Sounds reasonable, but why that one?
>
> > Do you know exactly what do you need? Solve a system of non-linear
> > equations via fsolve or to minimize a function?
> > D.
>
> I need to fit a scalar (kt in the code) such that f(kt) = e_0 (a
> scalar which is given from somewhere else). f() involves a bunch of
> stuff (see the code, but don't bother trying to run it), but it is
> monotonic.
>
> I think of this as finding kt such that f(kt) - e_0 = 0, so I used a
> root solver. But I hardly care so long as it works.
>
> Is there a best function for this?
>
> I am totally unfamiliar with optimization theory, besides one
> assignment on Newton's method in first semester calculus, and some
> handwaving about MLE's.

From jasperstolte at gmail.com  Tue Mar  4 14:39:43 2008
From: jasperstolte at gmail.com (Jasper Stolte)
Date: Tue, 4 Mar 2008 20:39:43 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <47CD5E7F.3030508@gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com> <47CD5E7F.3030508@gmail.com>
Message-ID: <89198da10803041139j396f8dd8s5e6981f901304892@mail.gmail.com>

Oow wow. For my hobbies, I would like to stay as far away from lawyers as possible.. :) If the original creators of the Octave Control Systems Toolbox don't want to specifically allow it, I'll just have to do without..

Greetz,
Jasper

On Tue, Mar 4, 2008 at 3:36 PM, Bruce Southey wrote:
> Hi,
> Really this is legal advice that you need from a lawyer (and I'm not
> one). However, there are resources that can help you. I would suggest
> first browsing the Software Freedom Law Center's guide "A Legal Issues
> Primer for Open Source and Free Software Projects", which is available at
> http://www.softwarefreedom.org/resources/2008/foss-primer.html, and also
> their viewpoint on using BSD licenses in GPL code:
> http://www.softwarefreedom.org/resources/2007/gpl-non-gpl-collaboration.html.
>
> If you use someone's code (or pseudocode for that matter) in some way,
> then you are bound by the terms of the code. Just looking at the code
> (or pseudocode) is probably sufficient for the license terms to apply
> (especially with software patents involved). Here you are not
> looking at the algorithm, which might be controlled by some license, but at an
> actual expression of that algorithm, which is controlled by the author's
> copyrights and consequently the license. Thus the need for clean room
> design (http://en.wikipedia.org/wiki/Clean_room_design).
>
> Regards
> Bruce
>
> Alan G Isaac wrote:
> > On Mon, 3 Mar 2008, Jasper Stolte apparently wrote:
> >> I'm not quite sure how this licensing stuff works. The
> >> algorithms are all public domain afaik, Octave made some
> >> implementation of them in C++. The class structure of this
> >> toolbox will be totally different, it's even written in
> >> a whole other language. Is it forbidden for us to look at
> >> how they implemented it without going to GPL?
> >
> > Safest is not to look. And ask the author to release under BSD.
> > (Sometimes s/he will.)
> >
> > But this seems an interesting case, if you accurately describe it.
> >
> > When you say the algorithm is in the public domain,
> > does that mean that you know of a public domain
> > implementation in code or in pseudocode?
> >
> > Cheers,
> > Alan Isaac
From s.mientki at ru.nl  Tue Mar  4 15:11:17 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Tue, 04 Mar 2008 21:11:17 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To:
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com>
Message-ID: <47CDACE5.4000903@ru.nl>

Alan G Isaac wrote:
> On Tue, 04 Mar 2008, Stef Mientki apparently wrote:
>> there are a few obstacles to getting started with Traits.
>
> I have not really played with Traits yet, but at first approach
> it seems pretty natural. What are the obstacles?

Don't let my remarks stop you! But for a non-programmer like me, it doesn't feel natural at all, so my learning curve is just too steep. Although I'll certainly look at it when I have some more time.

cheers,
Stef

From aisaac at american.edu  Tue Mar  4 15:23:04 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 4 Mar 2008 15:23:04 -0500
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10803041139j396f8dd8s5e6981f901304892@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com> <47CD5E7F.3030508@gmail.com> <89198da10803041139j396f8dd8s5e6981f901304892@mail.gmail.com>
Message-ID:

On Tue, 4 Mar 2008, Jasper Stolte apparently wrote:
> If the original creators of the Octave Control Systems
> Toolbox don't want to specifically allow it, I'll just
> have to do without.

You won't know until you ask them!

Cheers,
Alan Isaac

From aisaac at american.edu  Tue Mar  4 15:23:06 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 4 Mar 2008 15:23:06 -0500
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CDACE5.4000903@ru.nl>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com> <47CDACE5.4000903@ru.nl>
Message-ID:

On Tue, 04 Mar 2008, Stef Mientki apparently wrote:
> But for a non-programmer like me,
> it doesn't feel natural at all,
> so my learning curve is just too steep.

I'm just a user too. Did you try it at all? You can start by just letting Traits handle some simple type checking with predefined traits. This is not mind bending, but it may not meet a need of yours either ...

Cheers,
Alan

From J.Anderson at hull.ac.uk  Tue Mar  4 15:56:50 2008
From: J.Anderson at hull.ac.uk (Joseph Anderson)
Date: Tue, 4 Mar 2008 20:56:50 -0000
Subject: [SciPy-user] Newbie help for installing pysamplerate
References: <47CBEFFF.6060307@ar.media.kyoto-u.ac.jp>
Message-ID:

Hello David,

Ah, ok. Glad to hear it isn't exactly my newbie misconceptions causing things not to work. Thanks for having a look at what is going on.
I hadn't been aware of scikits, but found my way to:

http://scipy.org/scipy/scikits/wiki

Along with trying to get (py)samplerate running, I'm using pyaudiolab. (I think I have version 0.6.7.) From looking around scikits and your comments below, it looks like I should go ahead and reinstall using the scikits versions--these being the current/up-to-date ones?

Thanks for having a look at samplerate. I'll go ahead and have another try when you've got things going.

My best,
Jo

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----Original Message-----
From: scipy-user-bounces at scipy.org on behalf of David Cournapeau
Sent: Mon 03/03/2008 12:33 PM
To: SciPy Users List
Subject: Re: [SciPy-user] Newbie help for installing pysamplerate

Joseph Anderson wrote:
>
> What I see is that libsamplerate.so.0 is missing from the /usr/local/lib directory. Is this a file that should be created by pysamplerate's setup.py?
>
> Anyway, I'm doing something wrong. Surely it is rather simple.
>
Well, pysamplerate should not try to load *.so files, it cannot work on Mac OS X. I would have believed this was a stupid mistake of mine, but I am surprised, because the system does find the libsamplerate.so.0 file, which does not seem to exist on your platform... Anyway, you are not doing anything wrong, that's a mistake of mine. Let me check tonight on my macbook to see what's going on on Mac OS X.

Also, note that the most up-to-date version is available in scikits under the name samplerate (I should have mentioned it on the pysamplerate webpage).

cheers,

David

From fredmfp at gmail.com  Tue Mar  4 16:35:01 2008
From: fredmfp at gmail.com (fred)
Date: Tue, 04 Mar 2008 22:35:01 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CD9EDD.7000002@gmail.com>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com>
Message-ID: <47CDC085.7030308@gmail.com>

Stef Mientki a écrit :

> I agree, Traits might be a very good choice (once you've mastered it),
> so again I fully agree there's no single argument to leave Traits,
> but there are a few obstacles to getting started with Traits.

Stef,

This is only my 2 cents, as a Traits user (but not a Traits programmer).

I can say that building a simple app with Traits (using Chaco2) works like a charm. I do like using Traits and so on. And people at enthought are very nice ;-)

BTW, you have a lot of examples in the source code tree to begin with.
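To give a feel for it, here is a minimal sketch (the class and trait names are invented for illustration, and it assumes the enthought.traits namespace of current releases):

from enthought.traits.api import HasTraits, Float, Int, Str, TraitError

class Filter(HasTraits):
    # Each attribute is validated every time it is assigned.
    name = Str('lowpass')
    order = Int(4)
    cutoff = Float(1000.0)   # Hz

f = Filter()
f.cutoff = 500.0             # fine: passes the Float validator
try:
    f.cutoff = 'oops'        # rejected at run time
except TraitError, e:
    print e
f.configure_traits()         # Traits UI builds a small editing dialog automatically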
http://fredantispam.free.fr/snapshot.png

Cheers,

-- 
http://scipy.org/FredericPetit

From s.mientki at ru.nl  Tue Mar  4 16:41:58 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Tue, 04 Mar 2008 22:41:58 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To:
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com> <47CDACE5.4000903@ru.nl>
Message-ID: <47CDC226.5040108@ru.nl>

hi Alan,

Alan G Isaac wrote:
> On Tue, 04 Mar 2008, Stef Mientki apparently wrote:
>> But for a non-programmer like me,
>> it doesn't feel natural at all,
>> so my learning curve is just too steep.
>
> I'm just a user too.
> Did you try it at all?

no, I only try things when I see their "direct" relevance ...

> You can start by just letting Traits
> handle some simple type checking with predefined traits.
> This is not mind bending, but it may not meet a need of
> yours either ...

indeed I see no connection to the major headlines of my application :-(

nonetheless, thanks for the link,
Stef

From s.mientki at ru.nl  Tue Mar  4 16:47:55 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Tue, 04 Mar 2008 22:47:55 +0100
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CDC085.7030308@gmail.com>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com> <47CDC085.7030308@gmail.com>
Message-ID: <47CDC38B.5070307@ru.nl>

hi Fred,

fred wrote:
> Stef Mientki a écrit :
>
>> I agree, Traits might be a very good choice (once you've mastered it),
>> so again I fully agree there's no single argument to leave Traits,
>> but there are a few obstacles to getting started with Traits.
>
> Stef,
>
> This is only my 2 cents, as a Traits user (but not a Traits programmer).
>
> I can say that building a simple app with Traits (using Chaco2)
> works like a charm.

Yes, Chaco looks very nice, with very good color choices; I should have discovered that sooner! And I'll certainly integrate that into my application, but I've just finished integrating MatPlotLib, PyPlot and ScopePlot :-9

> I do like using Traits and so on.
> And people at enthought are very nice ;-)

Fully agree; from what I understand, they were the guys that developed and promoted SciPy.

cheers,
Stef

> BTW, you have a lot of examples in the source code tree to begin with.
> http://fredantispam.free.fr/snapshot.png
>
> Cheers,

From aisaac at american.edu  Tue Mar  4 16:50:59 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 4 Mar 2008 16:50:59 -0500
Subject: [SciPy-user] [Newbie] High-performance plotting of large datasets
In-Reply-To: <47CDC226.5040108@ru.nl>
References: <1065457655.20080228095700@xs4all.nl> <1204358988.2639.17.camel@pc1.cole.uklinux.net> <47C92C09.2060704@ru.nl> <1204405576.2639.73.camel@pc1.cole.uklinux.net> <47CAE2BC.2040800@ru.nl> <20080303083458.GC14020@phare.normalesup.org> <47CC4FD0.9040405@ru.nl> <20080303195343.GI14020@phare.normalesup.org> <47CC5F97.1080604@ru.nl> <20080303205709.GA7804@phare.normalesup.org> <47CD9EDD.7000002@gmail.com> <47CDACE5.4000903@ru.nl> <47CDC226.5040108@ru.nl>
Message-ID:

> Alan G Isaac wrote:
>> On Tue, 04 Mar 2008, Stef Mientki apparently wrote:

> indeed I see no connection to the major headlines of my application :-(

Just to be clear: this link was only meant to show that Traits is nothing to shy away from. For relevance, see instead

Cheers,
Alan Isaac

From david at ar.media.kyoto-u.ac.jp  Tue Mar  4 22:03:34 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Mar 2008 12:03:34 +0900
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To: <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
Message-ID: <47CE0D86.9030807@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> For C, it should (in both the descriptive and normative sense) use
> whatever compiler built Python. Look in
> $PREFIX/lib/python2.x/config/Makefile for the CC variable.
>
> For Fortran, the build will probably use f77 if you don't pick
> anything else using --fcompiler. You may or may not need to change
> that.

I noticed that atlas and scipy often do not pick the same fortran compiler by default. If you have both gcc and the sun compilers on solaris, I would not be surprised if this were the case. With ATLAS, you can change the fortran compiler with the option -C if fortran_compiler during the configure stage.

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Tue Mar  4 22:06:46 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Mar 2008 12:06:46 +0900
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To:
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com>
Message-ID: <47CE0E46.7070902@ar.media.kyoto-u.ac.jp>

John Reid wrote:
> ATLAS uses different compilers in 8 or so different ways. Are you
> suggesting I want to change all of them?

I would first try building with gcc, and only changing the fortran compiler. But the question is: why are you trying to build atlas on solaris? The sun performance libraries are faster in general, and you do not need to build them.

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Tue Mar  4 22:30:26 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Mar 2008 12:30:26 +0900
Subject: [SciPy-user] Newbie help for installing pysamplerate
In-Reply-To:
References: <47CBEFFF.6060307@ar.media.kyoto-u.ac.jp>
Message-ID: <47CE13D2.6030100@ar.media.kyoto-u.ac.jp>

Joseph Anderson wrote:
> Hello David,
>
> Ah, ok. Glad to hear it isn't exactly my newbie misconceptions causing things not to work.
> Thanks for having a look at what is going on.
>
> I hadn't been aware of scikits, but found my way to:
>
> http://scipy.org/scipy/scikits/wiki
>
> Along with trying to get (py)samplerate running, I'm using pyaudiolab. (I
> think I have version 0.6.7.) From looking around scikits and your comments
> below, it looks like I should go ahead and reinstall using the scikits
> versions--these being the current/up-to-date ones?

Yes. Since their inclusion, just be aware that the py prefix was dropped: pyaudiolab becomes audiolab, and likewise pysamplerate becomes samplerate.

I did a quick hack to solve the problem on mac os X, available in revision 890.

cheers,

David

From ryanlists at gmail.com  Wed Mar  5 00:11:29 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 4 Mar 2008 23:11:29 -0600
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To:
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <1204402937.10354.3.camel@localhost.localdomain> <20080302130544.GE14294@phare.normalesup.org> <89198da10803030146x2ead7d65s67528018a4a200d0@mail.gmail.com> <47CD5E7F.3030508@gmail.com> <89198da10803041139j396f8dd8s5e6981f901304892@mail.gmail.com>
Message-ID:

I have started cleaning up my rough controls toolbox and made it available for my students to download from this page:

http://www.siue.edu/~rkrauss/python_intro.html

basically, it is just the top two links:

http://www.siue.edu/~rkrauss/controls-1.0.win32.exe
and
http://www.siue.edu/~rkrauss/controls-1.0.tar.gz

It is still a bit messy and not as well documented as it should be, but it has one example and some docstrings. It is my own creation, and though I don't think I have a license statement in it yet, I would release it under a BSD license.
I will continue tinkering with it, cleaning it up, and including more examples as my System Dynamics and Feedback Controls classes use it more and more throughout the semester. I would welcome any collaboration, but would very much want to avoid GPL contamination.

Ryan

On Tue, Mar 4, 2008 at 2:23 PM, Alan G Isaac wrote:
> On Tue, 4 Mar 2008, Jasper Stolte apparently wrote:
> > If the original creators of the Octave Control Systems
> > Toolbox don't want to specifically allow it, I'll just
> > have to do without.
>
> You won't know until you ask them!
>
> Cheers,
> Alan Isaac

From j.anderson at hull.ac.uk  Wed Mar  5 05:03:38 2008
From: j.anderson at hull.ac.uk (Joseph Anderson)
Date: Wed, 05 Mar 2008 10:03:38 +0000
Subject: [SciPy-user] Newbie help for installing pysamplerate
In-Reply-To: <47CE13D2.6030100@ar.media.kyoto-u.ac.jp>
Message-ID:

Great. Thanks David. I'll go ahead and try both audiolab and the updated samplerate.

My best,
Jo

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On 5/3/08 03:30, "David Cournapeau" wrote:

> Joseph Anderson wrote:
>> Hello David,
>>
>> Ah, ok. Glad to hear it isn't exactly my newbie misconceptions causing things
>> not to work. Thanks for having a look at what is going on.
>>
>> I hadn't been aware of scikits, but found my way to:
>>
>> http://scipy.org/scipy/scikits/wiki
>>
>> Along with trying to get (py)samplerate running, I'm using pyaudiolab. (I
>> think I have version 0.6.7.) From looking around scikits and your comments
>> below, it looks like I should go ahead and reinstall using the scikits
>> versions--these being the current/up-to-date ones?
>
> Yes. Since their inclusion, just be aware that the py prefix was dropped:
> pyaudiolab becomes audiolab, and likewise pysamplerate becomes samplerate.
>
> I did a quick hack to solve the problem on mac os X, available in
> revision 890.
>
> cheers,
>
> David

From j.reid at mail.cryst.bbk.ac.uk  Wed Mar  5 07:33:26 2008
From: j.reid at mail.cryst.bbk.ac.uk (John Reid)
Date: Wed, 05 Mar 2008 12:33:26 +0000
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To: <47CE0E46.7070902@ar.media.kyoto-u.ac.jp>
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com> <47CE0E46.7070902@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau wrote:
> I would first try building with gcc, and only changing the fortran
> compiler. But the question is: why are you trying to build atlas on
> solaris? The sun performance libraries are faster in general, and you do
> not need to build them.

Thanks.
I had seen some postings that people had trouble using the sun performance libraries with scipy. Perhaps those were old posts? Is that all resolved? Also, I was just trying to follow what documentation there is for building scipy; I could not find any that mentioned Solaris.

From david at ar.media.kyoto-u.ac.jp  Wed Mar  5 08:59:29 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Mar 2008 22:59:29 +0900
Subject: [SciPy-user] scipy build with gcc on solaris problems
In-Reply-To:
References: <3d375d730803030829t2290c9fbub441e5d17d8561cb@mail.gmail.com> <3d375d730803040236y3fe022c6l5efdd4ad67359058@mail.gmail.com> <47CE0E46.7070902@ar.media.kyoto-u.ac.jp>
Message-ID: <47CEA741.9020300@ar.media.kyoto-u.ac.jp>

John Reid wrote:
> Thanks. I had seen some postings that people had trouble using the sun
> performance libraries with scipy. Perhaps those were old posts? Is that
> all resolved? Also, I was just trying to follow what documentation there
> is for building scipy; I could not find any that mentioned Solaris.

You have two solutions:
- you add the libraries and library paths in site.cfg under the sections blas and lapack. To find the options, you should use the command "suncc -xlic_lib=sunperf -#" to see which libraries are linked when sunperf is used (-# is for verbose link with sun studio).
- you can also try numscons, an alternative build system I am currently working on. This one explicitly supports sunperf. Numpy is buildable, and works with sunperf (on indiana with sunstudio 12, at least). scipy is almost done, but there are still some issues, mostly related to recent changes in the sparse module.

Unfortunately, the trick I am using in numscons is not easily 'backportable' to the current default scipy build system, because of what looks like a bug in the sun linker (the -xlic_lib=sunperf flag is ignored by the sun linker when building a shared library, that is, when -G is used).

cheers,

David

From boyle5 at llnl.gov  Wed Mar  5 14:20:08 2008
From: boyle5 at llnl.gov (James Boyle)
Date: Wed, 5 Mar 2008 11:20:08 -0800
Subject: [SciPy-user] pynetcdf errors
Message-ID:

python - Python 2.5.1
numpy '1.0.3.1'
pynetcdf 0.7
OS X 10.4.11

Using pynetcdf I create a file, write to it and then close the file. When the file closes I get a large number of the following messages:

python(22656) malloc: *** Deallocation of a pointer not malloced: 0x239e6b0; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug

Any advice on how to address this problem? I noticed on the scipy user forum (summer 2007) that there was a report of pynetcdf crashing on python 2.5.1, but there was no solution offered.

--Jim

From lopmart at gmail.com  Wed Mar  5 17:00:13 2008
From: lopmart at gmail.com (jose luis Lopez Martinez)
Date: Wed, 5 Mar 2008 14:00:13 -0800
Subject: [SciPy-user] help about a solve sparse matrix
Message-ID: <4eeef9d40803051400x72bbd623j5f2d61cd8758292f@mail.gmail.com>

hi

I am working on my master's thesis and will be working with sparse matrices (i.e. 128^4), so my question is: does scipy have a command that solves sparse systems using the Gauss-Seidel iteration method?

thanks
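(There is no Gauss-Seidel routine in scipy itself -- linsolve.spsolve is a direct solver -- but the sweep is short enough to write by hand. A minimal dense sketch follows; the function name, tolerance and iteration cap are arbitrary choices, and a sparse version would loop over the rows of a CSR matrix in the same way:)

import numpy as np

def gauss_seidel(A, b, tol=1e-8, maxiter=1000):
    """Solve Ax = b by Gauss-Seidel sweeps; A must have a nonzero diagonal."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for sweep in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds the updated values, x[i+1:] the old ones.
            s = np.dot(A[i, :i], x[:i]) + np.dot(A[i, i+1:], x[i+1:])
            x[i] = (b[i] - s) / A[i, i]
        if np.abs(x - x_old).max() < tol:
            break
    return x

(Note that convergence is only guaranteed for suitable matrices, e.g. strictly diagonally dominant or symmetric positive definite ones.)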
From lopmart at gmail.com  Wed Mar  5 19:14:26 2008
From: lopmart at gmail.com (jose luis Lopez Martinez)
Date: Wed, 5 Mar 2008 16:14:26 -0800
Subject: [SciPy-user] help about imshow
Message-ID: <4eeef9d40803051614p51e0348bv88855822e46c8d3b@mail.gmail.com>

Greetings,

How do I display an image with gray levels from 0 to 1? If I give it the following data, it displays the 0.5 as if it were one, and not as the middle of the gray scale.

from pylab import *
from scipy import *

H=array([[0.0,0.0,0.5,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0]])

print H
figure()
imshow(H,cmap=cm.gray)

I assume that imshow does the following:

C=H.copy()
C=C-C.min()
C=C/C.max()

figure()
imshow(C,cmap=cm.gray)

which gives me the same image as the first; that is, the 0.5 is taken as if it were the maximum value of the gray scale. But it should really display an intermediate gray value, not white (white should only be shown when the value is 1).

thanks

From wizzard028wise at gmail.com  Wed Mar  5 20:11:14 2008
From: wizzard028wise at gmail.com (Dorian)
Date: Thu, 6 Mar 2008 02:11:14 +0100
Subject: [SciPy-user] help about imshow
In-Reply-To: <4eeef9d40803051614p51e0348bv88855822e46c8d3b@mail.gmail.com>
References: <4eeef9d40803051614p51e0348bv88855822e46c8d3b@mail.gmail.com>
Message-ID: <674a602a0803051711l26837fbdsca3c5f19fdf7b895@mail.gmail.com>

Greetings,

I can't understand what you are talking about! Try translating it into English; maybe then it will be easier to help.

Thanks

On 06/03/2008, jose luis Lopez Martinez wrote:
> Greetings,
>
> How do I display an image with gray levels from 0 to 1? If I give it the
> following data, it displays the 0.5 as if it were one, and not as the
> middle of the gray scale.
>
> from pylab import *
> from scipy import *
>
> H=array([[0.0,0.0,0.5,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0]])
>
> print H
> figure()
> imshow(H,cmap=cm.gray)
>
> I assume that imshow does the following:
>
> C=H.copy()
> C=C-C.min()
> C=C/C.max()
>
> figure()
> imshow(C,cmap=cm.gray)
>
> which gives me the same image as the first; that is, the 0.5 is taken as
> if it were the maximum value of the gray scale. But it should really
> display an intermediate gray value, not white (white should only be shown
> when the value is 1).
>
> thanks
From lopmart at gmail.com  Wed Mar  5 20:36:52 2008
From: lopmart at gmail.com (jose luis Lopez Martinez)
Date: Wed, 5 Mar 2008 17:36:52 -0800
Subject: [SciPy-user] help about imshow
Message-ID: <4eeef9d40803051736u438494c3yef2c72aa9470e296@mail.gmail.com>

My apologies for the previous mail in Spanish.

Well, my question is: how do I show an image in a gray scale from 0 to 1? My program is:

from pylab import *
from scipy import *

H=array([[0.0,0.0,0.5,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0],[0.0,0.0,0.0,0.0,0.0]])

print H
figure()
imshow(H,cmap=cm.gray)

The 0.5 shows as white, and that's not correct, because 0.5 is the middle of the gray scale.

thanks

From pgmdevlist at gmail.com  Wed Mar  5 21:00:51 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 5 Mar 2008 21:00:51 -0500
Subject: [SciPy-user] help about imshow
In-Reply-To: <4eeef9d40803051736u438494c3yef2c72aa9470e296@mail.gmail.com>
References: <4eeef9d40803051736u438494c3yef2c72aa9470e296@mail.gmail.com>
Message-ID: <200803052100.52478.pgmdevlist@gmail.com>

Jose,
1. Your question would be better addressed on the matplotlib mailing list.
2. Try adding the vmin=0 and vmax=1 optional parameters to imshow:

>>> imshow(H,cmap=cm.gray,vmin=0,vmax=1)

That way, the limits are properly taken into account.

From e.howick at irl.cri.nz  Wed Mar  5 22:10:56 2008
From: e.howick at irl.cri.nz (eleanor)
Date: Thu, 6 Mar 2008 03:10:56 +0000 (UTC)
Subject: [SciPy-user] scipy.stats.scoreatpercentile BUG ? (n, 1) shaped arrays versus (n, )
Message-ID:

I've just tracked problems in my code down to this unexpected behaviour:

>>> import scipy
>>> d = scipy.arange(1,100)
>>> scipy.stats.scoreatpercentile(d,50)
50

That's correct. Then shuffle the data:

>>> scipy.random.shuffle(d)
>>> scipy.stats.scoreatpercentile(d,50)
50

Everything's still fine. Now try it with an array where d.shape = (99,1):

>>> d2 = d.reshape(-1,1)
>>> d2.shape
(99, 1)
>>> scipy.stats.scoreatpercentile(d2,50)
array([82])

What the!!!

>>> d2[49]
array([82])

The problem seems to be that the sort in scoreatpercentile,

values = np.sort(a)

doesn't sort an (n, 1) array. I've had many problems with scipy and numpy functions not dealing well with (n,1) arrays when they expect (n,), or is it the other way round? Which shape should I use with 1D arrays, to lessen my temper tantrums?

Eleanor

From nmelgarejodiaz at gmail.com  Thu Mar  6 04:23:47 2008
From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz)
Date: Thu, 6 Mar 2008 10:23:47 +0100
Subject: [SciPy-user] Sparse Matrix
Message-ID:

Good morning everyone,

I have a problem with linsolve.spsolve. I am using it as a translation of the backslash operator "\" in Matlab for division. Can you tell me if it's the same as \ in Matlab, or if there is another way? Normally, the parameters are going to be:

A = sparse.lil_matrix(N,N)
b = (K0**2 -KZ**2)*(-1j)

then we add some values to A...

Efd = linsolve.spsolve(A,b) + Ei

And the values aren't the same as in Matlab: linsolve.spsolve(A,b) gives only zeros, and when I plot I have a "singular matrix" error.

Thanks in advance for your help!

--
********Natali********

From emanuele at relativita.com  Thu Mar  6 04:38:38 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Thu, 06 Mar 2008 10:38:38 +0100
Subject: [SciPy-user] unexpected NLP.solve() output: bug?
From nmelgarejodiaz at gmail.com  Thu Mar  6 04:23:47 2008
From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz)
Date: Thu, 6 Mar 2008 10:23:47 +0100
Subject: [SciPy-user] Sparse Matrix
Message-ID: 

Good morning everyone,

I have a problem with linsolve.spsolve. I am using it as a translation of
the operator "\" (backslash), which Matlab uses for division. Can you tell
me if it is the same as \ in Matlab, or if there is another way? Normally,
the parameters are going to be:

A = sparse.lil_matrix(N,N)
b = (K0**2 -KZ**2)*(-1j)

then we add some values to A...

Efd = linsolve.spsolve(A,b) + Ei

And the values aren't the same as in Matlab: linsolve.spsolve(A,b) gets
only zeros, and when I plot I have a "singular matrix" error.

Thanks in advance for your help!

--
********Natali********
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emanuele at relativita.com  Thu Mar  6 04:38:38 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Thu, 06 Mar 2008 10:38:38 +0100
Subject: [SciPy-user] unexpected NLP.solve() output: bug? [scikits-openopt]
Message-ID: <47CFBB9E.5060109@relativita.com>

Dear Dmitrey,

It happens sometimes that the object returned by NLP.solve() has the .ff
field of numpy.ndarray type instead of a float64 type. The array has just
one element, which is the current minimum of the function, as expected.

Example:
----
p=NLP(....)
r = p.solve('ralg')
print type(r.ff)
----
The output is usually "float64" and sometimes "numpy.ndarray".

Since this is pretty unexpected and caused me some troubles (easily
solved) could you change to one of the two behaviors (or motivate)?

Details:
openopt 0.15
numpy 1.04
scipy 0.5.2
ubuntu linux x86_64

In any case, many thanks for your optimization library.

Emanuele

From dmitrey.kroshko at scipy.org  Thu Mar  6 06:41:27 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 06 Mar 2008 13:41:27 +0200
Subject: [SciPy-user] unexpected NLP.solve() output: bug? [scikits-openopt]
Message-ID: <47CFD867.3090908@scipy.org>

Ok, I have committed the changes. However, first of all you should check
r.isFeasible==True and/or r.istop>=0. If r.isFeasible is True, r.ff was
always a single number (like it was before); if it's False, r.ff can be
anything - inf, nan, float or ndarray etc.

So I have made some changes and committed them to svn (and updated the
tar.gz file on the OO install page), so try now (I had not much time, so
maybe this bug still remains somewhere and/or a new one has appeared) and
inform me if any other bugs are encountered.

Also, for more safety you could use by yourself

f_opt = asfarray(r.ff).flatten()[0]

Regards, D.

> Dear Dmitrey,
> [Emanuele's report above: r.ff is sometimes a one-element numpy.ndarray
> instead of a float64.]

From fperez.net at gmail.com  Thu Mar  6 13:15:51 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 6 Mar 2008 10:15:51 -0800
Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea
Message-ID: 

Hi all,

after the Scipy/Sage Days 8 meeting, we were all very impressed by the
progress made by Cython. For those not familiar with it, Cython:

http://www.cython.org/

is an evolved version of Pyrex (which is used by numpy and scipy) with
lots of improvements. We'd like to position Cython as the preferred way of
writing most, if not all, new extension code written for numpy and scipy,
as it is easier to write, get right, debug (when you still get it wrong)
and maintain than writing to the raw Python-C API.

A specific project along these lines, that would be very beneficial for
numpy could be:

- Creating new matrix types in cython that match the cvxopt matrices. The
creation of new numpy array types with efficient code would be very
useful.

- Rewriting the existing ndarray subclasses that ship with numpy, such as
record arrays, in cython. In doing this, benchmarks of the relative
performance of the new code should be obtained.

Another possible project would be the addition to Cython of syntactic
support for array expressions, multidimensional indexing, and other
features of numpy.
This is probably more difficult than the above, as it would require fairly
detailed knowledge of both the numpy C API and the Cython internals, but
would ultimately be extremely useful.

Any student interested in this should quickly respond on the list; such a
project would likely be co-mentored by people on the Numpy and Cython
teams, since it is likely to require expertise from both ends.

Cheers,

f

From dineshbvadhia at hotmail.com  Thu Mar  6 13:51:23 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Thu, 6 Mar 2008 10:51:23 -0800
Subject: [SciPy-user] empty()
Message-ID: 

I have to ask but in

> A= scipy.asmatrix(scipy.empty((I, J), int))

does the empty() function really 'empty' out the array or does it just
allocate space for the array leaving spurious content in the array?

I want to make sure before starting to use empty() with wild abandon!

Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robince at gmail.com  Thu Mar  6 14:01:49 2008
From: robince at gmail.com (Robin)
Date: Thu, 6 Mar 2008 19:01:49 +0000
Subject: [SciPy-user] empty()
In-Reply-To: 
References: 
Message-ID: 

On Thu, Mar 6, 2008 at 6:51 PM, Dinesh B Vadhia wrote:
> I have to ask but in
>
> A= scipy.asmatrix(scipy.empty((I, J), int))
>
> does the empty() function really 'empty' out the array or does it just
> allocate space for the array leaving spurious content in the array?

It just allocates space, leaving spurious content in the array. This is OK
if you are later going to assign each element. If you want the entries
initialised you can use zeros() or ones().

From the docstring:
"""
Return a new array of shape (d1,...,dn) and given type with all its
entries uninitialized. This can be faster than zeros.
"""

Robin

From peridot.faceted at gmail.com  Thu Mar  6 14:05:50 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 6 Mar 2008 20:05:50 +0100
Subject: [SciPy-user] empty()
In-Reply-To: 
References: 
Message-ID: 

On 06/03/2008, Dinesh B Vadhia wrote:
> I have to ask but in
>
> A= scipy.asmatrix(scipy.empty((I, J), int))
>
> does the empty() function really 'empty' out the array or does it just
> allocate space for the array leaving spurious content in the array?
>
> I want to make sure before starting to use empty() with wild abandon!

empty() creates a fresh new array but does not touch the contents. Thus
they can be anything; for freshly-allocated very large arrays they may be
all zeros on some systems, but otherwise they usually contain malloc
bookkeeping information or old data. Thus the array is packed with
spurious content.

It is probably a good idea to either use A = numpy.zeros() or, if you want
to be sure to notice if you fail to set some element of the array,
A = numpy.zeros()/0.0 (this latter ensures that the array is filled with
"Not a Number", a special floating-point value that indicates invalid
data). Only if you find your program has no bugs and is too slow is
empty() a good idea.

It's usually also worth trying to construct the array directly with the
correct values - for example using arange() or eye() - rather than filling
in values after the fact.

I should also point out that both empty() and asmatrix() are *numpy*
functions, not scipy. That they are also available in scipy is a
historical quirk; scipy simply reexports some of the numpy functions in
its own namespace (complicating the process of finding the real scipy
functions). It's a better idea to access these functions through the numpy
namespace.

Anne
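A small sketch of the difference described above. The NaN-fill trick uses
the 0/0 division suggested in the reply; numpy will emit a warning for
that division, which is harmless here.

import numpy as np

A = np.empty((3, 3))          # allocated but NOT cleared: contents arbitrary
B = np.zeros((3, 3), int)     # every entry initialized to zero
C = np.zeros((3, 3)) / 0.0    # 0/0 gives NaN, so unset entries stand out

print(A)                      # whatever happened to be in that memory
print(B)
print(C)                      # all nan until real values are assigned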
From lopmart at gmail.com  Thu Mar  6 14:08:18 2008
From: lopmart at gmail.com (jose luis Lopez Martinez)
Date: Thu, 6 Mar 2008 11:08:18 -0800
Subject: [SciPy-user] help about sparse matrix
Message-ID: <4eeef9d40803061108o50a59e49w7ecfb356372504c4@mail.gmail.com>

I am working on my master's thesis and I will be working with sparse
matrices, solving the linear algebra equation

A . x = B

The size of the matrix is 128^4 (268,435,456) rows by 128^4 (268,435,456)
columns. The matrix is diagonally dominant, which is a sufficient
condition for convergence of Jacobi iteration, Gauss-Seidel and SOR (the
relaxation method), but convergence is slow.

Is there a command in scipy that solves sparse systems using the
Gauss-Seidel iteration method or Krylov iterative methods?

thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doutriaux1 at llnl.gov  Thu Mar  6 14:56:56 2008
From: doutriaux1 at llnl.gov (Charles Doutriaux)
Date: Thu, 06 Mar 2008 11:56:56 -0800
Subject: [SciPy-user] blas....
In-Reply-To: 
References: 
Message-ID: <47D04C88.70301@llnl.gov>

Hi,

I have a problem installing scipy on my system (Linux RedHat Enterprise
4)... I run a classic configure; it finds blas on my system with the
routine srotmg_ in it. Great! No need to build my own blas, I thought!

So I go ahead and run scipy. And then BAM!

/usr/local/cdat/experimental/lib/python2.5/site-packages/scipy/linalg/fblas.so:
undefined symbol: srotmg_

Now what happened? Is it possible the libblas that worked great with the
configure is actually corrupted? Now if I build my own blas (gfortran)...

Any ideas?

Thx,

From peridot.faceted at gmail.com  Thu Mar  6 15:27:27 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 6 Mar 2008 21:27:27 +0100
Subject: [SciPy-user] help about sparse matrix
In-Reply-To: <4eeef9d40803061108o50a59e49w7ecfb356372504c4@mail.gmail.com>
References: <4eeef9d40803061108o50a59e49w7ecfb356372504c4@mail.gmail.com>
Message-ID: 

On 06/03/2008, jose luis Lopez Martinez wrote:
> [the question above: iterative solvers for a very large, diagonally
> dominant sparse system]

Scipy does have iterative solvers for sparse matrices, but your matrix is
enormous. You will need to use a 64-bit machine at the least, and the one
I tried it on ran out of memory with even a simple array that large. It
may be possible though, if you are already able to manipulate this matrix:

In [85]: n=128**3; A = scipy.sparse.speye(n,n) +
0.01*scipy.sparse.speye(n,n,1) - 0.01*scipy.sparse.speye(n,n,-1);
scipy.linsolve.spsolve(A,numpy.random.randn(n))
Out[85]:
array([ 0.98303453,  0.41739294, -0.94134509, ..., -0.44176429,
       -0.72973676, -1.39742764])

If your matrix is actually band-diagonal, as the one I construct here is,
there is a special-purpose solver available which will probably save you
some pain.

Good luck,
Anne
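On the Gauss-Seidel/Krylov question: scipy itself doesn't expose a
Gauss-Seidel sweep as such, but its Krylov solvers cover the diagonally
dominant case. A hedged, deliberately toy-sized sketch; the solver import
path is the one used by current scipy.sparse.linalg, and older releases
kept these routines elsewhere.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 1000
# symmetric, strictly diagonally dominant tridiagonal matrix, hence SPD,
# so conjugate gradients is applicable
A = (sp.eye(n, n) + 0.01 * sp.eye(n, n, 1) + 0.01 * sp.eye(n, n, -1)).tocsr()
b = np.random.randn(n)

x, info = cg(A, b)                        # info == 0 means it converged
print(info, np.abs(A.dot(x) - b).max())   # residual check

For a nonsymmetric but diagonally dominant matrix, gmres or bicgstab from
the same module are the usual substitutes.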
From robert.kern at gmail.com  Thu Mar  6 15:46:51 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 6 Mar 2008 14:46:51 -0600
Subject: [SciPy-user] blas....
In-Reply-To: <47D04C88.70301@llnl.gov>
References: <47D04C88.70301@llnl.gov>
Message-ID: <3d375d730803061246o5fa7dd95g46acc469eec0cd99@mail.gmail.com>

On Thu, Mar 6, 2008 at 1:56 PM, Charles Doutriaux wrote:
> Hi,
>
> I have a problem installing scipy on my system (Linux RedHat Enterprise
> 4)... I run a classic configure; it finds blas on my system with the
> routine srotmg_ in it. Great! No need to build my own blas, I thought!
>
> So I go ahead and run scipy.
>
> And then BAM!
> /usr/local/cdat/experimental/lib/python2.5/site-packages/scipy/linalg/fblas.so:
> undefined symbol: srotmg_
>
> Now what happened? Is it possible the libblas that worked great with the
> configure is actually corrupted?

Can you double-check that fblas.so got correctly linked to libblas?

$ ldd /usr/local/cdat/experimental/lib/python2.5/site-packages/scipy/linalg/fblas.so

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From doutriaux1 at llnl.gov  Thu Mar  6 16:25:55 2008
From: doutriaux1 at llnl.gov (Charles Doutriaux)
Date: Thu, 06 Mar 2008 13:25:55 -0800
Subject: [SciPy-user] blas....
In-Reply-To: <3d375d730803061246o5fa7dd95g46acc469eec0cd99@mail.gmail.com>
References: <47D04C88.70301@llnl.gov>
	<3d375d730803061246o5fa7dd95g46acc469eec0cd99@mail.gmail.com>
Message-ID: <47D06163.90509@llnl.gov>

Scipy says it found blas in /usr/lib. It's bizarre... If I build my own
blas and put it in site.cfg, it works...

C.

Robert Kern wrote:
> [Robert's suggestion above: check with ldd that fblas.so got correctly
> linked to libblas.]

From alan.mcintyre at gmail.com  Thu Mar  6 18:18:08 2008
From: alan.mcintyre at gmail.com (Alan McIntyre)
Date: Thu, 6 Mar 2008 18:18:08 -0500
Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea
In-Reply-To: 
References: 
Message-ID: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com>

> A specific project along these lines, that would be very beneficial
> for numpy could be:
>
> - Creating new matrix types in cython that match the cvxopt matrices.
> The creation of new numpy array types with efficient code would be
> very useful.
>
> - Rewriting the existing ndarray subclasses that ship with numpy, such
> as record arrays, in cython. In doing this, benchmarks of the
> relative performance of the new code should be obtained.

What level of experience do you think would be necessary for the student
for this? I've got a fair amount of Python & C experience, and I've used
numpy and Pyrex (but not Cython) in the past. I wouldn't mind putting in
some time to become familiar with the particulars before submitting a
project proposal.
From fperez.net at gmail.com Fri Mar 7 03:57:08 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Mar 2008 00:57:08 -0800 Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea In-Reply-To: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> Message-ID: Hi Alan, On Thu, Mar 6, 2008 at 3:18 PM, Alan McIntyre wrote: > > A specific project along these lines, that would be very beneficial > > for numpy could be: > > > > - Creating new matrix types in cython that match the cvxopt matrices. > > The creation of new numpy array types with efficient code would be > > very useful. > > > > - Rewriting the existing ndarray subclasses that ship with numpy, such > > as record arrays, in cython. In doing this, benchmarks of the > > relative performance of the new code should be obtained. > > What level of experience do you think would be necessary for the > student for this? I've got a fair amount of Python & C experience, and > I've used numpy and Pyrex (but not Cython) in the past. I wouldn't > mind putting in some time to become familiar with the particulars > before submitting a project proposal. I don't want to put words in others' mouths, since I wouldn't be the one mentoring such a project. I suspect that you'd need to be reasonably familiar with some C programming, because at this point in the game this project might require debugging generated C code and perhaps diving into Cython itself. In the long term we'd like Cython to be so python-like and friendly that those who are *not* C experts can use it effectively for Numpy programming, but that isn't currently the case. As far as Pyrex/cython, if you know pyrex, you'll be OK with cython. It's only better than pyrex, but is as far as I know mostly, if not fully compatible with pyrex. Cheers f From bnuttall at uky.edu Fri Mar 7 14:00:44 2008 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Fri, 7 Mar 2008 14:00:44 -0500 Subject: [SciPy-user] C compilers and Python wrappers In-Reply-To: References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> Message-ID: Folks, I may be asked to help port an application written in C that runs on Macs to run on a Windows-based PC. I have two questions: 1) Does anyone have recommendations for a free, open source C compiler for Windows? 2) Is there a step-by-step guide or tutorial for writing a Python module to act as a wrapper for the C code? Thanks. Brandon Nuttall -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Mar 7 14:08:13 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 7 Mar 2008 20:08:13 +0100 Subject: [SciPy-user] C compilers and Python wrappers In-Reply-To: References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> Message-ID: Hi, You can check on mingw, but also the Visual C++ Express edition (although it is not open source, but I don't see why it should be). To write a module, you can check ctypes (and there is a tutorial in the scipy's cookbook ;)) Matthieu 2008/3/7, Nuttall, Brandon C : > > Folks, > > > > I may be asked to help port an application written in C that runs on Macs > to run on a Windows-based PC. I have two questions: > > > > 1) Does anyone have recommendations for a free, open source C > compiler for Windows? > > 2) Is there a step-by-step guide or tutorial for writing a Python > module to act as a wrapper for the C code? > > > > Thanks. 
>
> Brandon Nuttall
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From osman at fuse.net  Fri Mar  7 14:24:29 2008
From: osman at fuse.net (osman)
Date: Fri, 07 Mar 2008 14:24:29 -0500
Subject: [SciPy-user] gutsy amd64
In-Reply-To: 
References: <1204478442.14422.23.camel@stargate.org>
	<1204479876.14422.28.camel@stargate.org>
	<1204495164.14422.39.camel@stargate.org>
	<1204600407.4862.4.camel@stargate.org>
Message-ID: <1204917869.21034.5.camel@stargate.org>

Hi Robin,

I found that my problems were coming from having a complicated PATH,
LD_LIBRARY_PATH etc. So I cleaned up my .profile and .bashrc, deleted all
the numpy and scipy files that had been created, and started building from
numpy. It worked. I do have a few failures in the tests for scipy, but
they seem mostly to be small numerical differences. Some others seem to be
due to the newness of scipy: it looks like some return values were
changed. Now I am trying to build the sfe package.

Thanks for the pointers.

-osman

From ggellner at uoguelph.ca  Fri Mar  7 14:39:32 2008
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Fri, 07 Mar 2008 12:39:32 -0700
Subject: [SciPy-user] C compilers and Python wrappers
In-Reply-To: 
References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com>
Message-ID: <20080307193932.GA6936@giton>

Also check out http://pymag.phparch.com

The December issue (which you can purchase separately) has a thorough
introduction for ctypes, with a science library bent! I have been wrapping
libraries like mad since!

Gabriel

On Fri, Mar 07, 2008 at 08:08:13PM +0100, Matthieu Brucher wrote:
> [Matthieu's reply above, recommending mingw or Visual C++ Express for
> the compiler and the ctypes tutorial in the scipy cookbook for the
> wrapping, with the original question and his signature links quoted.]
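To make the ctypes suggestion concrete, a hedged sketch. The library and
function names here are purely illustrative, not an existing project:
suppose a C file defines double add(double a, double b) and has been
compiled to libmylib.so (or mylib.dll with MinGW on Windows).

import ctypes

lib = ctypes.CDLL("./libmylib.so")    # hypothetical library name

# declare the C signature so ctypes converts arguments correctly
lib.add.argtypes = [ctypes.c_double, ctypes.c_double]
lib.add.restype = ctypes.c_double

print(lib.add(1.5, 2.25))             # -> 3.75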
From dineshbvadhia at hotmail.com  Fri Mar  7 17:39:18 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Fri, 7 Mar 2008 14:39:18 -0800
Subject: [SciPy-user] empty()
Message-ID: 

Thanks! What I need to do is 1) create a very large sparse matrix A (with
integer 1's and A is very sparse), 2) pickle the matrix and 3) unpickle it
when needed in various modules.

Performance of unpickling (ie. step 3) is important as the matrix is very
large. Hence, in step 1, I didn't really want to initialize the matrix A
with zeros() because it is adding unnecessary 'weight' to the
pickling/unpickling and hence performance.

Appreciate your thoughts. Cheers!

Dinesh
...............................................................
Message: 4
Date: Thu, 6 Mar 2008 20:05:50 +0100
From: "Anne Archibald"
Subject: Re: [SciPy-user] empty()
To: "SciPy Users List"
Message-ID: 
Content-Type: text/plain; charset=UTF-8

[digest copy of Anne Archibald's empty() answer, quoted in full above]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peridot.faceted at gmail.com  Fri Mar  7 19:14:58 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 7 Mar 2008 19:14:58 -0500
Subject: [SciPy-user] empty()
In-Reply-To: 
References: 
Message-ID: 

On 07/03/2008, Dinesh B Vadhia wrote:
> Thanks! What I need to do is 1) create a very large sparse matrix A
> (with integer 1's and A is very sparse), 2) pickle the matrix and 3)
> unpickle it when needed in various modules.
>
> Performance of unpickling (ie. step 3) is important as the matrix is
> very large. Hence, in step 1, I didn't really want to initialize the
> matrix A with zeros() because it is adding unnecessary 'weight' to the
> pickling/unpickling and hence performance.
>
> Appreciate your thoughts. Cheers!

Sparse matrices are somewhat special.
All numpy functions act on normal, dense, matrices (in which there is a value for every position even if it is zero). empty(), for example, produces a dense matrix with uninitialized values. If you want to work with sparse matrices, take a look at the functions in scipy.sparse(). These use one of several special representations of sparse matrices. Unfortunately they require entries to be floating-point, but you should be aware that floating-point computations on values that happen to be integers are exact, up to the point when they become large enough that the mantissa "overflows". For 32-bit floats, this happens at 2**24, while for 64-bit floats it happens at 2**53. Calculations will probably be slightly slower (though a separate part of the chip does these calculations from the part that does index manipulations and whatnot, so try it first), but don't worry too much about using floats instead of integers. scipy provides functions for manipulating sparse matrices, but do be careful, because sparse matrices can be silently converted to dense matrices if you try to apply an operation that isn't implemented specially. Nevertheless sparse linear algebra is available. Take a look at: http://www.scipy.org/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 Pickling and unpickling of sparse matrices should work fine, and be reasonably rapid (be sure to use cPickle). From david at ar.media.kyoto-u.ac.jp Fri Mar 7 22:31:54 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 08 Mar 2008 12:31:54 +0900 Subject: [SciPy-user] C compilers and Python wrappers In-Reply-To: References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> Message-ID: <47D208AA.9090405@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > Hi, > > You can check on mingw, but also the Visual C++ Express edition > (although it is not open source, but I don't see why it should be). > To write a module, you can check ctypes (and there is a tutorial in > the scipy's cookbook ;)) Note that you won't be able to easily build extensions with Visual C++ express edition (or any VS which is not VS 2003 for that matter), at least for the official binaries for python 2.5 (that will change for python 2.6, which is in alpha stage right now). mingw32 is directly supported by distutils. And since it is gcc, it should be more familiar if you are coming from mac os X. cheers, David From aisaac at american.edu Fri Mar 7 23:26:20 2008 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 7 Mar 2008 23:26:20 -0500 Subject: [SciPy-user] C compilers and Python wrappers In-Reply-To: <47D208AA.9090405@ar.media.kyoto-u.ac.jp> References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> <47D208AA.9090405@ar.media.kyoto-u.ac.jp> Message-ID: On Sat, 08 Mar 2008, David Cournapeau apparently wrote: > Note that you won't be able to easily build extensions > with Visual C++ express edition (or any VS which is not VS > 2003 for that matter), at least for the official binaries > for python 2.5 (that will change for python 2.6, which is > in alpha stage right now). How will this change? (Aside from changing to VS 2008, I mean.) Can you point to a discussion? 
Thank you,
Alan Isaac

From david at ar.media.kyoto-u.ac.jp  Fri Mar  7 23:41:08 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 08 Mar 2008 13:41:08 +0900
Subject: [SciPy-user] C compilers and Python wrappers
In-Reply-To: 
References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com>
	<47D208AA.9090405@ar.media.kyoto-u.ac.jp>
Message-ID: <47D218E4.6070604@ar.media.kyoto-u.ac.jp>

Alan G Isaac wrote:
>
> How will this change? (Aside from changing to VS 2008, I mean.) Can you
> point to a discussion?
>

That's the change: you will be able to use the compiler used for official
binaries for free. For python 3k (or later), there may be a bigger change:
since posix functions are hopelessly broken in the VS runtime (which is at
least one of the reasons why you cannot safely change compilers on
windows), it is being rewritten with the win32 api, which is guaranteed to
be ABI compatible across compilers by Microsoft, I guess. But this will
take time, and is certainly no fun.

http://mail.python.org/pipermail/python-3000/2008-February/012201.html

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Sat Mar  8 06:26:41 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 08 Mar 2008 20:26:41 +0900
Subject: [SciPy-user] [ANN] numscons 0.5.1: building scipy
Message-ID: <47D277F1.4030302@ar.media.kyoto-u.ac.jp>

Hi,

Numscons 0.5.1 is available through pypi (eggs and tarballs). This is the
first version which can build the whole scipy source tree. To build scipy
with numscons, you should first get the code in the branch:

svn co http://svn.scipy.org/svn/scipy/branches/build_with_scons

And then build it like numpy:

python setupscons.py install

Technically speaking, you can build scipy with numscons on top of a numpy
built the standard way, but that's not a good idea (because of potential
libraries and compilers mismatches between distutils and numscons). See

http://projects.scipy.org/scipy/numpy/wiki/NumScons

for more details. The only tested platforms for now are:

- linux + gcc; other compilers on linux should work as well.
- solaris + sunstudio with sunperf.

On both those platforms, only a few tests do not pass. I don't expect
windows or mac OS X to work yet, but I can not test those platforms ATM. I
am releasing the current state of numscons because I won't have much time
to work on numscons in the next few weeks, unfortunately.

PLEASE DO NOT USE IT FOR PRODUCTION USE ! There are still some serious
issues:

- I painfully discovered that at least g77 is extremely sensitive to
different orders of linker flags (can cause crashes). I don't have any
problem anymore on my workstation (Ubuntu 32 bits, atlas + gcc/g77), but
this needs more testing.

- there are some race conditions with f2py which I do not fully understand
yet, and which prevent parallel builds from working (so do not use the
scons command --jobs option)

- optimization flags of proprietary compilers: they are a PITA. They often
break IEEE conformance in quite a hard way, and this causes crashes or
wrong results (for example, the -fast option of sun compilers breaks the
argsort function of numpy).

So again, this is really just a release for people to test things if they
want, but nothing else.
cheers, David From adam.ginsburg at colorado.edu Sat Mar 8 17:27:36 2008 From: adam.ginsburg at colorado.edu (Adam Ginsburg) Date: Sat, 8 Mar 2008 15:27:36 -0700 Subject: [SciPy-user] Problem with scipy.optimize.leastsq: Improper input parameters In-Reply-To: References: Message-ID: I'm not sure if my previous e-mail (below) went out earlier, but I've spent a while trying to understand what the problem really meant, and I think it's that I'm trying to use leastsq for something it was not intended for. However, the Levenberg-Marquardt algorithm should be capable of this task. Specifically, I'm trying to minimize a function of 5 variables, while leastsq expects the number of functions m to be larger than the number of variables n. I figured out that my arg() variable(s?) determine the size of m, which seems like a bad idea to me but I can deal with it. My new error, then, is a claim that my 2d array somehow doesn't work (traceback below). Am I wrong in thinking that the function should be able to minimize over a 2D space, or am I just not specifying that properly in my function call? Thanks, and sorry to be flooding the list a little, Adam --------------------------------------------------------------------------- Traceback (most recent call last) : object too deep for desired array Error in sys.excepthook: Traceback (most recent call last): File "/sw/lib/python2.5/site-packages/IPython/iplib.py", line 1714, in excepthook self.showtraceback((etype,value,tb),tb_offset=0) File "/sw/lib/python2.5/site-packages/IPython/iplib.py", line 1514, in showtraceback self.InteractiveTB(etype,value,tb,tb_offset=tb_offset) File "/sw/lib/python2.5/site-packages/IPython/ultraTB.py", line 872, in __call__ self.debugger() File "/sw/lib/python2.5/site-packages/IPython/ultraTB.py", line 729, in debugger while self.tb.tb_next is not None: AttributeError: 'NoneType' object has no attribute 'tb_next' Original exception was: ValueError: object too deep for desired array --------------------------------------------------------------------------- Traceback (most recent call last) /Users/adam/classes/probstat/hw7.py in () 52 #a,b,x0,y0,s = fmin(chi2gaussnoglob,x0in,args=[im,RON]) 53 #print "Parameters a: %f b: %f x0: %f y0: %f sigma: %f chi2: %f" % (a,b,x0,y0,s,chi2gaussnoglob([a,b,x0,y0,s],im,RON)) ---> 54 leastsq(chi2gaussnoglob,x0in,args=(im),Dfun=None) 55 #leastsq(chi2gauss,x0in,maxfev=1000) 56 /sw/lib/python2.5/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag, warning) 267 if (maxfev == 0): 268 maxfev = 200*(n+1) --> 269 retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) 270 else: 271 if col_deriv: : Result from function call is not a proper array of floats. > /sw/lib/python2.5/site-packages/scipy/optimize/minpack.py(269)leastsq() 268 maxfev = 200*(n+1) --> 269 retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) 270 else: On Sat, Mar 8, 2008 at 1:39 PM, Adam Ginsburg wrote: > Hi, I've been trying to get the Levenberg-Marquardt minimization routine > leastsq to work and have received the error "Improper input parameters" when > using my own non-trivial function. 
>
> The function I'm trying to minimize:
>
> def chi2gauss(ARR):
>     a,b,x0,y0,s = ARR
>     noisemap = sqrt(im+RON**2)
>     mychi2 = (( ( im - gi(a,b,x0,y0,s,im) ) / noisemap )**2).sum()
>     return mychi2
>
> def gi(a,b,x0,y0,s,im):
>     myx,myy = indices(im.shape)
>     gi = b + a * exp ( - ( (myx-x0)**2 + (myy-y0)**2)/(2*s**2) )
>     return gi
>
> where in this case im and RON are global variables, though I have also
> tested with im and RON specified as input parameters using the args=()
> input for leastsq.
>
> example:
>
> In [123]: leastsq(chi2gauss,[1,1,1,1,1],full_output=1)
> 5 1 1 0.000000 0.000000 0.000000 1200 100.000000Out[123]:
> (array([ 1.,  1.,  1.,  1.,  1.]),
>  None,
>  {'fjac': array([[ 0.],
>        [ 0.],
>        [ 0.],
>        [ 0.],
>        [ 0.]]),
>   'fvec': 24222.5789746,
>   'ipvt': array([0, 0, 0, 0, 0]),
>   'nfev': 0,
>   'qtf': array([ 0.,  0.,  0.,  0.,  0.])},
>  'Improper input parameters.',
>  0)
>
> Can anyone help me out? What about my input is improper?
>
> Thanks,
> Adam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ggellner at uoguelph.ca  Sun Mar  9 01:38:22 2008
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Sat, 08 Mar 2008 23:38:22 -0700
Subject: [SciPy-user] C compilers and Python wrappers
In-Reply-To: <20080307193932.GA6936@giton>
References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com>
	<20080307193932.GA6936@giton>
Message-ID: <20080309063822.GA6341@giton>

I mean the January issue . . .

Gabriel

On Fri, Mar 07, 2008 at 12:39:32PM -0700, Gabriel Gellner wrote:
> [Gabriel's earlier message above, recommending the pymag ctypes
> introduction, with Matthieu's reply and the original question quoted.]
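Regarding the leastsq "Improper input parameters" thread above, a hedged
sketch of the calling convention that trips people up: the callback passed
to leastsq must return the residual vector, one entry per data point (so m
>= n parameters), not a summed chi-square scalar; leastsq squares and sums
it internally. For a 2-D image fit, the residual image must also be
flattened with .ravel(), since a 2-D return value is what provokes the
"object too deep for desired array" error.

import numpy as np
from scipy.optimize import leastsq

x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 1.0 + 0.1 * np.random.randn(50)   # synthetic noisy data

def residuals(p):
    a, b = p
    return y - (a * x + b)      # length-50 residual vector, NOT a scalar

p_fit, ier = leastsq(residuals, [1.0, 0.0])
print(p_fit)                    # approximately [3.0, 1.0]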
From dave.hirschfeld at gmail.com  Mon Mar 10 06:59:26 2008
From: dave.hirschfeld at gmail.com (Dave)
Date: Mon, 10 Mar 2008 10:59:26 +0000 (UTC)
Subject: [SciPy-user] Problem with scipy.optimize.leastsq: Improper input
	parameters
References: 
Message-ID: 

I may be on the wrong track here, but what happens if you remove the
squaring and summing from the chi2gauss - i.e.

def chi2gauss(ARR):
    a,b,x0,y0,s = ARR
    noisemap = sqrt(im+RON**2)
    return (im - gi(a,b,x0,y0,s,im))/noisemap

-Dave

From markbak at gmail.com  Mon Mar 10 13:07:01 2008
From: markbak at gmail.com (Mark Bakker)
Date: Mon, 10 Mar 2008 18:07:01 +0100
Subject: [SciPy-user] Problem with scipy.optimize.leastsq
Message-ID: <6946b9500803101007x1d3b3919r8d2d080160a1d946@mail.gmail.com>

I think Dave is right. Whenever I use leastsq, I have a function that
gives an array of errors. Leastsq does the squaring and summing for you,

Mark

> I may be on the wrong track here, but what happens if you remove the
> squaring and summing from the chi2gauss - i.e.
>
> def chi2gauss(ARR):
>     a,b,x0,y0,s = ARR
>     noisemap = sqrt(im+RON**2)
>     return (im - gi(a,b,x0,y0,s,im))/noisemap
>
> -Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berthe.loic at gmail.com  Mon Mar 10 14:24:36 2008
From: berthe.loic at gmail.com (LB)
Date: Mon, 10 Mar 2008 11:24:36 -0700 (PDT)
Subject: [SciPy-user] fitting a parametric spline from a set of 2D points.
Message-ID: 

Hi,

I've got a set of 2D points (~ 200 points) and I would like to fit a
parametric spline with a limited number of control points (~ 10). These
points can be seen as measurements and contain an intrinsic error, so I
would like to minimize the distance between these points and the spline.

Is there any tool in scipy which could help me with this task? Otherwise,
does someone know about an external library which could help?

Thanks,

--
LB

From peridot.faceted at gmail.com  Mon Mar 10 14:37:14 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 10 Mar 2008 19:37:14 +0100
Subject: [SciPy-user] fitting a parametric spline from a set of 2D points.
In-Reply-To: 
References: 
Message-ID: 

On 10/03/2008, LB wrote:
> I've got a set of 2D points (~ 200 points) and I would like to fit a
> parametric spline with a limited number of control points (~ 10). These
> points can be seen as measurements and contain an intrinsic error, so I
> would like to minimize the distance between these points and the spline.
>
> Is there any tool in scipy which could help me with this task?
> Otherwise, does someone know about an external library which could help?

scipy.interpolate.splprep is designed to do exactly this. It is based on
FORTRAN code from FITPACK and should be very robust and efficient.

Anne
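A minimal sketch of splprep on synthetic data of about the size described.
The smoothing value s is illustrative only; FITPACK chooses the number of
knots (and hence control points) needed to satisfy it, with larger s
giving fewer knots.

import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.cos(t) + 0.05 * np.random.randn(200)   # ~200 noisy measured points
y = np.sin(t) + 0.05 * np.random.randn(200)

# s > 0 requests a smoothing (approximating) spline instead of strict
# interpolation through every noisy point
tck, u = splprep([x, y], s=200 * 0.05 ** 2)

xs, ys = splev(np.linspace(0.0, 1.0, 500), tck)   # evaluate fitted curve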
From nmelgarejodiaz at gmail.com  Mon Mar 10 16:31:04 2008
From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz)
Date: Mon, 10 Mar 2008 21:31:04 +0100
Subject: [SciPy-user] Problem with plot
Message-ID: 

Hello everyone,

How can I plot a sparse matrix with complex numbers? It seems to be a
problem for a sparse matrix to plot the values with a simple 'plot'.

Thanks in advance.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gael.varoquaux at normalesup.org  Mon Mar 10 16:38:11 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 10 Mar 2008 21:38:11 +0100
Subject: [SciPy-user] Problem with plot
In-Reply-To: 
References: 
Message-ID: <20080310203811.GA10221@phare.normalesup.org>

On Mon, Mar 10, 2008 at 09:31:04PM +0100, Natali Melgarejo Diaz wrote:
> How can I plot a sparse matrix with complex numbers? It seems to be a
> problem for a sparse matrix to plot the values with a simple 'plot'.

Could you give us a bit more precision on your problem? How do you want to
plot this matrix (or array?)? What is the plot package you are using?

Cheers,

Gaël

From mnandris at btinternet.com  Tue Mar 11 04:50:51 2008
From: mnandris at btinternet.com (Michael Nandris)
Date: Tue, 11 Mar 2008 08:50:51 +0000 (GMT)
Subject: [SciPy-user] Problem with plot
In-Reply-To: <20080310203811.GA10221@phare.normalesup.org>
Message-ID: <872493.2413.qm@web86508.mail.ird.yahoo.com>

from numpy import matrix
from scipy.linalg import inv, det, eig
from pylab import plot, show

A = matrix([[1,1,1],[4,4,3],[7,8,5]])   # 3 rows, 3 columns
b = matrix([1,2,1]).transpose()         # 3 rows, 1 column

print 'det(A) '
print det(A); print
# We can check whether the matrix is regular

print 'inv(A)*b '
print inv(A)*b; print
# print the solution of the Ax=b linear equation system

print 'eig(A) '
print eig(A)

plot(A)
show()
plot(b)
show()

Gael Varoquaux wrote:
> [Gael's questions above.]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From roger.herikstad at gmail.com  Tue Mar 11 05:02:49 2008
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Tue, 11 Mar 2008 17:02:49 +0800
Subject: [SciPy-user] Incremental histogram?
Message-ID: 

Hi list,

I need to histogram an array of long ints, but the array itself is too big
to keep in memory. I was thinking of using an incremental approach, i.e.
assign each sample in the array to the appropriate bin, sample by sample.
Right now, I have the array (well, list really) constructed as a
generator, and I was wondering if anyone has an efficient algorithm for
doing a histogram count on such a generator object?

~ Roger
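One way to do this without ever materialising the whole sequence, sketched
under the assumption that the bin edges are known in advance: consume the
generator in fixed-size blocks and accumulate np.histogram counts per
block.

import numpy as np
from itertools import islice

def chunked_histogram(samples, edges, chunk=100000):
    # consume the generator in blocks so the full sequence never
    # has to sit in memory at once
    counts = np.zeros(len(edges) - 1, dtype=np.int64)
    it = iter(samples)
    while True:
        block = np.fromiter(islice(it, chunk), dtype=np.int64)
        if block.size == 0:
            break
        c, _ = np.histogram(block, bins=edges)
        counts += c
    return counts

edges = np.linspace(0, 1000, 11)
gen = (i % 1000 for i in range(10 ** 6))   # stand-in for the real generator
print(chunked_histogram(gen, edges))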
From emanuele at relativita.com  Tue Mar 11 06:43:37 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Tue, 11 Mar 2008 11:43:37 +0100
Subject: [SciPy-user] citing numpy, scipy openopt
Message-ID: <47D66259.80305@relativita.com>

Hi Numpy, Scipy and OpenOpt people,

I'd like to cite these three projects in a scientific paper. According to
what I've read on the lists, these are the main references for Numpy:

1) Travis E. Oliphant, "Python for Scientific Computing," Computing in
Science & Engineering, vol. 9, no. 3, May/June 2007, pp. 10-20.

2) David Ascher, Paul F. Dubois, Konrad Hinsen, Jim Hugunin, Travis
Oliphant, "Numerical Python", tech. report UCRL-MA-128569, Lawrence
Livermore National Laboratory, 2001; http://numpy.scipy.org.

3) Travis E. Oliphant (2006) Guide to NumPy, Trelgol Publishing, USA

I guess the most appropriate reference would be (1) since it introduces
the reader to both Python and Numpy.

What do you suggest for Scipy?

About OpenOpt: I'm using it for non-linear problems (NLP), mainly with
"scipy_cg" and "ralg" algorithms. Then I believe I should cite both
OpenOpt and Naum Z. Shor's work (ralg). Dmitrey, what do you suggest?

Best Regards,

Emanuele

From dmitrey.kroshko at scipy.org  Tue Mar 11 06:59:11 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 11 Mar 2008 12:59:11 +0200
Subject: [SciPy-user] citing numpy, scipy openopt
In-Reply-To: <47D66259.80305@relativita.com>
References: <47D66259.80305@relativita.com>
Message-ID: <47D665FF.30206@scipy.org>

Emanuele Olivetti wrote:
> [the question above: which references to cite for Numpy, Scipy and
> OpenOpt.]

I'm not familiar with all those cite rules. I'll be happy enough to see OO
users comment(s) on my guestbook, it would increase my chances for
obtaining GSoC and/or other finance support for further openopt
development, - currently I can't convince even my dept that openopt is
something essential.

As for ralg, current Python implementation (written by me during several
months) is just a mere shadow of the Fortran ralg version, that had been
written dozens of years by dozens of people from our dept. Maybe I'll try
to connect some Fortran-written soft by our dept (especially if I'll be
lucky to become GSoC member once again), but currently I'm busy with my
1st f2py experience with bvls.

Regards, D.

From david at ar.media.kyoto-u.ac.jp  Tue Mar 11 07:02:44 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 11 Mar 2008 20:02:44 +0900
Subject: [SciPy-user] citing numpy, scipy openopt
In-Reply-To: <47D665FF.30206@scipy.org>
References: <47D66259.80305@relativita.com> <47D665FF.30206@scipy.org>
Message-ID: <47D666D4.50408@ar.media.kyoto-u.ac.jp>

dmitrey wrote:
> I'm not familiar with all those cite rules.
I'll be happy enough to see > OO users comment(s) on my guestbook, it would increase my chances for > obtaining GSoC and/or other finance support for further openopt > development, - currently I can't convince even my dept that openopt is > something essential. I don't know your exact situation, but having your work cited by peer-reviewed papers is certainly one of the best way to get official recognition, wrt grants and co. cheers, David From emanuele at relativita.com Tue Mar 11 07:35:21 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Tue, 11 Mar 2008 12:35:21 +0100 Subject: [SciPy-user] citing numpy, scipy openopt In-Reply-To: <47D665FF.30206@scipy.org> References: <47D66259.80305@relativita.com> <47D665FF.30206@scipy.org> Message-ID: <47D66E79.8040303@relativita.com> dmitrey wrote: > I'm not familiar with all those cite rules. I'll be happy enough to see > OO users comment(s) on my guestbook, it would increase my chances for > obtaining GSoC and/or other finance support for further openopt > development, - currently I can't convince even my dept that openopt is > something essential. > I'll definitely sign your guest book and hope you'll find finance support for further development. We are using your code with more satisfaction than original scipy optimization package (which is nice, indeed). I do believe that, if you get citations through OpenOpt, they would appreciate it more. > As for ralg, current Python implementation (written by me during several > months) is just a mere shadow of the Fortran ralg version, that had been > written dozens of years by dozens of people from our dept. Maybe I'll > try to connect some Fortran-written soft by our dept (especially if I'll > be lucky to become GSoC member once again), but currently I'm busy with > my 1st f2py experience with bvls. > Regards, D. In the meanwhile could you send a basic reference about the ralg algorithm? You could ask to your teacher, Shor, which is the most appropriate in this case (i.e., people from different scientific community). Thanks, Emanuele From robince at gmail.com Tue Mar 11 08:03:53 2008 From: robince at gmail.com (Robin) Date: Tue, 11 Mar 2008 12:03:53 +0000 Subject: [SciPy-user] citing numpy, scipy openopt In-Reply-To: <47D66259.80305@relativita.com> References: <47D66259.80305@relativita.com> Message-ID: On Tue, Mar 11, 2008 at 10:43 AM, Emanuele Olivetti wrote: > Hi Numpy, Scipy and OpenOpt people, > > I'd like to cite these three projects in a scientific paper. According > from what I've read in the lists these are the main references for > Numpy: Did you see this page: http://scipy.org/Citing_SciPy Robin From dmitrey.kroshko at scipy.org Tue Mar 11 09:00:11 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 11 Mar 2008 15:00:11 +0200 Subject: [SciPy-user] citing numpy, scipy openopt In-Reply-To: <47D66E79.8040303@relativita.com> References: <47D66259.80305@relativita.com> <47D665FF.30206@scipy.org> <47D66E79.8040303@relativita.com> Message-ID: <47D6825B.9000907@scipy.org> Emanuele Olivetti wrote: > dmitrey wrote: > >> I'm not familiar with all those cite rules. I'll be happy enough to see >> OO users comment(s) on my guestbook, it would increase my chances for >> obtaining GSoC and/or other finance support for further openopt >> development, - currently I can't convince even my dept that openopt is >> something essential. >> >> > > I'll definitely sign your guest book and hope you'll find finance > support for further development. 
We are using your code with
> more satisfaction than original scipy optimization package
> (which is nice, indeed). I do believe that, if you get citations
> through OpenOpt, they would appreciate it more.

it would be nice to know where openopt is used, both for me and other OO
users. IIRC TOMOPT informed his users that their TOMLAB is used at
international space station. Mb someday OO will be used so commonly as
well?:)

>> As for ralg, current Python implementation (written by me during several
>> months) is just a mere shadow of the Fortran ralg version, that had been
>> written dozens of years by dozens of people from our dept. Maybe I'll
>> try to connect some Fortran-written soft by our dept (especially if I'll
>> be lucky to become GSoC member once again), but currently I'm busy with
>> my 1st f2py experience with bvls.
>> Regards, D.
>
> In the meanwhile could you send a basic reference about the ralg
> algorithm?

There are lots of ralg modifications: ralg-5 (initial version, that is
implemented currently in OO), ralg-4 (more advanced implementation, it
requires only 4n^2 multiplications vs 5n^2 in ralg-5). As for Fortran
version, it is capable of handling nVars ~= 10^4...10^5, while ordinary
ralg only ~1000. Also, it consumes much less memory: m*nVars vs nVars^2
(m<<nVars). Current implementation is not pure r-alg, there are some
modifications of course, as well as different ways of handling constraints
(pure r-alg is unconstrained).

You could try google "N.Z.Shor" or "Naum Z. Shor" for references.

> You could ask to your teacher, Shor, which is the most appropriate in
> this case (i.e., people from different scientific community).

Unfortunately, he had died.

> Thanks,
>
> Emanuele

Regards, D.

From emanuele at relativita.com  Tue Mar 11 09:19:52 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Tue, 11 Mar 2008 14:19:52 +0100
Subject: [SciPy-user] citing numpy, scipy openopt
In-Reply-To: 
References: <47D66259.80305@relativita.com>
Message-ID: <47D686F8.7050206@relativita.com>

Robin wrote:
>
> Did you see this page:
> http://scipy.org/Citing_SciPy
>

Thanks for the pointer.

E.

From jdh2358 at gmail.com  Tue Mar 11 09:37:57 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 11 Mar 2008 08:37:57 -0500
Subject: [SciPy-user] Problem with plot
In-Reply-To: <872493.2413.qm@web86508.mail.ird.yahoo.com>
References: <20080310203811.GA10221@phare.normalesup.org>
	<872493.2413.qm@web86508.mail.ird.yahoo.com>
Message-ID: <88e473830803110637q1b07a22fn4073fef449b017c8@mail.gmail.com>

On Tue, Mar 11, 2008 at 3:50 AM, Michael Nandris wrote:
> from numpy import matrix
> from scipy.linalg import inv, det, eig
> from pylab import plot, show

The plotting function spy should help (plot is designed to plot line
data). See also imshow for 2D array plotting.

    def spy(self, Z, precision=None, marker=None, markersize=None,
            aspect='equal', **kwargs):
        """
        spy(Z) plots the sparsity pattern of the 2-D array Z

        If precision is None, any non-zero value will be plotted; else,
        values of absolute(Z)>precision will be plotted.

        The array will be plotted as it would be printed, with the first
        index (row) increasing down and the second index (column)
        increasing to the right.

        By default aspect is 'equal' so that each array element occupies a
        square space; set the aspect kwarg to 'auto' to allow the plot to
        fill the plot box, or to any scalar number to specify the aspect
        ratio of an array element directly.

        Two plotting styles are available: image or marker. Both are
        available for full arrays, but only the marker style works for
        scipy.sparse.spmatrix instances.
        If marker and markersize are None, an image will be returned and
        any remaining kwargs are passed to imshow; else, a Line2D object
        will be returned with the value of marker determining the marker
        type, and any remaining kwargs passed to the axes plot method.

        If marker and markersize are None, useful kwargs include:
        cmap alpha
        See documentation for imshow() for details.

        For controlling colors, e.g. cyan background and red marks, use:
        cmap = mcolors.ListedColormap(['c','r'])

        If marker or markersize is not None, useful kwargs include:
        marker markersize color
        See documentation for plot() for details.

        Useful values for marker include:
        's' square (default)
        'o' circle
        '.' point
        ',' pixel
        """

From emanuele at relativita.com  Tue Mar 11 09:55:41 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Tue, 11 Mar 2008 14:55:41 +0100
Subject: [SciPy-user] citing numpy, scipy openopt
In-Reply-To: <47D6825B.9000907@scipy.org>
References: <47D66259.80305@relativita.com> <47D665FF.30206@scipy.org>
	<47D66E79.8040303@relativita.com> <47D6825B.9000907@scipy.org>
Message-ID: <47D68F5D.4090700@relativita.com>

dmitrey wrote:
> ...
> it would be nice to know where openopt is used, both for me and other OO
> users. IIRC TOMOPT informed his users that their TOMLAB is used at
> international space station. Mb someday OO will be used so commonly as
> well?:)
>

Surely you noticed that there is the need of having a free software /
opensource alternative to tomopt/tomlab. Since python+numpy+scipy are
gaining popularity you can only expect a larger user base, as time goes
on.

>> In the meanwhile could you send a basic reference about the ralg
>> algorithm?
>
> There are lots of ralg modifications: ralg-5 (initial version, that is
> implemented currently in OO), ralg-4 (more advanced implementation, it
> requires only 4n^2 multiplications vs 5n^2 in ralg-5). As for Fortran
> version, it is capable of handling nVars ~= 10^4...10^5, while ordinary
> ralg only ~1000. Also, it consumes much less memory: m*nVars vs nVars^2
> (m<<nVars). Current implementation is not pure r-alg, there are some
> modifications of course, as well as different ways of handling
> constraints (pure r-alg is unconstrained).
>
> You could try google "N.Z.Shor" or "Naum Z. Shor" for references.

From Wikipedia I've found this document "Congratulations to Naum Shor on
his 65th birthday" from the Journal of Global Optimization. It has a nice
list of references which is presented in the text:

http://www.springerlink.com/content/j263467v12w40172/fulltext.pdf

(I can access it, but I don't know if it is free access.)

I do believe you should put similar references, contextually, in the
documentation of your OpenOpt library. As David said, it is common
practice to mention such references to algorithms (and possibly software)
when writing scientific articles. It should be definitely relevant for
your department (so directly or indirectly for you).

Emanuele

P.S.: I'm sorry for the inappropriate question about asking Shor.
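Returning to the sparse-plotting thread above, a tiny sketch of spy() on a
scipy sparse matrix (a random test matrix here; for complex-valued entries
one can spy abs(M), or imshow abs(M.todense()), since only structure or
real scalars can be colour-mapped):

import matplotlib.pyplot as plt
import scipy.sparse as sp

M = sp.rand(50, 50, density=0.05, format='csr')   # random sparse matrix

# giving markersize selects the marker style, the one that works for
# scipy.sparse matrices per the docstring above
plt.spy(M, markersize=4)
plt.show()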
From lev at columbia.edu Tue Mar 11 11:58:13 2008 From: lev at columbia.edu (Lev Givon) Date: Tue, 11 Mar 2008 11:58:13 -0400 Subject: [SciPy-user] illegal instruction error in scipy.linalg.decomp.svd In-Reply-To: <20080104164410.GA11663@localhost.cc.columbia.edu> References: <20071218003816.GE15380@localhost.cc.columbia.edu> <20080104164410.GA11663@localhost.cc.columbia.edu> Message-ID: <20080311155812.GC25692@localhost.cc.columbia.edu>

Received from Lev Givon on Fri, Jan 04, 2008 at 11:44:11AM EST:
> Received from Lev Givon on Mon, Dec 17, 2007 at 07:38:18PM EST:
> > On a Pentium 4 Linux box running python 2.5.1, scipy 0.6.0, numpy 1.0.4, and lapack 3.0, I recently noticed that scipy.linalg.decomp.svd() fails (and causes python to crash) with an "illegal instruction" error. A bit of debugging revealed that the error occurs in the line
> >
> > lwork = calc_lwork.gesdd(gesdd.prefix,m,n,compute_uv)[1]
> >
> > in scipy/linalg/decomp.py. Interestingly, scipy 0.5.2.1 on the same box (with all of the other software packages unchanged) does not exhibit this problem. Moreover, when I install scipy 0.6.0 along with all of the other packages on a Linux machine containing a Pentium D CPU, I do not observe the problem either.
> >
> > Being that I am running Mandriva Linux, closed scipy bug 540 caught my eye. I'm not sure how it could be related to the above problem, though (and I also do not know what the lapack patch mentioned in the ticket could have been - even though I have been maintaining Mandriva's lapack lately :-).
> >
> > Thoughts?
>
> This problem is also present in the latest svn release of scipy (3781).
>
> L.G.

This problem seems to have been resolved; numpy revision 4863 / scipy revision 4010 (built against the same python and lapack libraries described earlier) do not exhibit the above behavior. Some other illegal instruction events I had observed on a Pentium 4 system involving routines in cephes also appear to have been resolved.

L.G.

From t.charrett at Cranfield.ac.uk Tue Mar 11 12:58:02 2008 From: t.charrett at Cranfield.ac.uk (Charrett, Thomas) Date: Tue, 11 Mar 2008 16:58:02 -0000 Subject: [SciPy-user] FFTW use Message-ID: <17BBF49306835748938EF9728B418E4301FA9C6A@ccexchange-3.cns.cranfield.ac.uk>

Hello,

I'm fairly new to using python, having converted from matlab, and I have some questions concerning the FFT functions in scipy.fftpack.

I have some matlab code which calculates the FFT of an image (2d array) before carrying out some processing and then doing the inverse transform. However, the same code converted to Python gives a different result. I've been trying to track down the problem and it appears that it occurs when calculating the FFT, with matlab calculating slightly different values than scipy.

I have tried to build scipy to use the FFTW library that Matlab uses, and I think I have succeeded (scipy.show_config() says fftw3 is available). Do fft2 and fftn automatically use FFTW when it is installed? If not, how would I set this? Are the functions in scipy.fftpack the correct ones? I have seen posts suggesting scipy.fftw, which doesn't appear to exist.

Also, as I said, I've tried to build scipy with FFTW, but the instructions for Windows are unclear and it took me a while to get it working - could someone post some instructions on the correct way to compile scipy with FFTW?

I used MinGW and MSYS to compile FFTW using the configure and make files, and copied libfftw3.a and fftw3.h to a new folder.
I downloaded the prebuilt atlas version from the scipy webpage and used the following in my site.cfg file (the bracketed placeholders stand for the folders holding the FFTW library and headers respectively):

    [atlas]
    library_dirs = C:\build\atlas3.6.0_WinNT_P3
    atlas_libs = lapack, f77blas, cblas, atlas
    [fftw3]
    library_dirs = [path to the folder containing libfftw3.a]
    include_dirs = [path to the folder containing fftw3.h]
    libraries = fftw3

I then ran and installed the result using

    python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst

When I import scipy, scipy.test() crashes, but fftpack.test() works fine. However, my FFT results look exactly the same; am I actually using FFTW?

Any help greatly appreciated, I'd really like to ditch matlab completely.

Tom

From robert.kern at gmail.com Tue Mar 11 13:30:31 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Mar 2008 12:30:31 -0500 Subject: [SciPy-user] FFTW use In-Reply-To: <17BBF49306835748938EF9728B418E4301FA9C6A@ccexchange-3.cns.cranfield.ac.uk> References: <17BBF49306835748938EF9728B418E4301FA9C6A@ccexchange-3.cns.cranfield.ac.uk> Message-ID: <3d375d730803111030y1b7a5b2ah2ed5516dcd722efb@mail.gmail.com>

On Tue, Mar 11, 2008 at 11:58 AM, Charrett, Thomas wrote:
> Hello,
>
> I'm fairly new to using python, having converted from matlab, and I have some questions concerning the FFT functions in scipy.fftpack.
>
> I have some matlab code which calculates the FFT of an image (2d array) before carrying out some processing and then doing the inverse transform. However, the same code converted to Python gives a different result. I've been trying to track down the problem and it appears that it occurs when calculating the FFT, with matlab calculating slightly different values than scipy.
>
> I have tried to build scipy to use the FFTW library that Matlab uses, and I think I have succeeded (scipy.show_config() says fftw3 is available). Do fft2 and fftn automatically use FFTW when it is installed?

Yes.

> If not, how would I set this? Are the functions in scipy.fftpack the correct ones? I have seen posts suggesting scipy.fftw, which doesn't appear to exist.

The scipy.fftpack module is misleadingly named. The functions in there will use whatever FFT backend it was built with, not just FFTPACK.

> Also, as I said, I've tried to build scipy with FFTW, but the instructions for Windows are unclear and it took me a while to get it working - could someone post some instructions on the correct way to compile scipy with FFTW?
>
> I used MinGW and MSYS to compile FFTW using the configure and make files, and copied libfftw3.a and fftw3.h to a new folder. I downloaded the prebuilt atlas version from the scipy webpage and used the following in my site.cfg file:
>
> [atlas]
> library_dirs = C:\build\atlas3.6.0_WinNT_P3
> atlas_libs = lapack, f77blas, cblas, atlas
> [fftw3]
> library_dirs = [path to the folder containing libfftw3.a]
> include_dirs = [path to the folder containing fftw3.h]
> libraries = fftw3
>
> I then ran and installed the result using
>
> python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst
>
> When I import scipy, scipy.test() crashes

Can you run scipy.test(verbosity=2)? The name of the test will be printed before it is executed. Please show us this name so we can figure out which test is crashing.

> but fftpack.test() works fine. However, my FFT results look exactly the same; am I actually using FFTW?

If you attach the build output, we could answer that definitively. The results *should* look the same.
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Tue Mar 11 13:47:40 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Mar 2008 12:47:40 -0500 Subject: [SciPy-user] Incremental histogram? In-Reply-To: References: Message-ID: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com>

On Tue, Mar 11, 2008 at 4:02 AM, Roger Herikstad wrote:
> Hi list,
> I need to histogram an array of long ints, but the array itself is too big to keep in memory. I was thinking of using an incremental approach, i.e. assign each sample in the array to the appropriate bin, sample by sample. Right now, I have the array (well, list really) constructed as a generator, and I was wondering if anyone has an efficient algorithm for doing histogram count on such a generator object?

Where does this array usually live? Is it constructed algorithmically, or is it on disk?

Anyways, I would batch up the elements into largish but comfortably-sized arrays, use numpy.histogram() on each, and add together the histograms. If the arrays live on disk in memory-mappable form, I recommend Roberto De Almeida's arrayterator to do the batching for you:

http://pypi.python.org/pypi/arrayterator/0.2.8

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From berthe.loic at gmail.com Tue Mar 11 14:06:13 2008 From: berthe.loic at gmail.com (LB) Date: Tue, 11 Mar 2008 11:06:13 -0700 (PDT) Subject: [SciPy-user] Incremental histogram? In-Reply-To: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> Message-ID: <2c177637-58a8-4a24-80a1-4e3ab0acd306@i12g2000prf.googlegroups.com>

I would also divide this large array into medium-sized arrays and use numpy.histogram on each. But be careful: you have to define the bins of your histogram first, and numpy.histogram has a weird behaviour: it discards all values lower than the bins and does not allocate them to the lowest bin, as described in its docstring (see http://scipy.org/scipy/numpy/ticket/605 ).

--
LB

From Adam.Ginsburg at Colorado.EDU Tue Mar 11 17:33:56 2008 From: Adam.Ginsburg at Colorado.EDU (Adam Ginsburg) Date: Tue, 11 Mar 2008 15:33:56 -0600 Subject: [SciPy-user] Problem with scipy.optimize.leastsq: Improper input parameters Message-ID:

Thanks Dave, Mark. I eventually figured out the input parameters and got my minimization to work out. The biggest problem was that you can't simply output a 2D array if you're fitting image data; you have to do something like the process outlined at http://www.scipy.org/Cookbook/FittingData#head-11870c917b101bb0d4b34900a0da1b7deb613bf7 (which I had a surprisingly difficult time finding - maybe links to the cookbooks could be put in the documentation?). Basically, my issue wasn't putting in the errors vs. the function, but that I didn't ravel() my data.

I have come across a new problem in my leastsq fitting when trying to pass in my own analytic derivative/gradient function, which I finally discovered after a few days of hunting is because a patch has not been applied to the latest version of scipy, specifically this one: http://article.gmane.org/gmane.comp.python.scientific.devel/5848 .
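To illustrate Adam's ravel() point: scipy.optimize.leastsq expects a 1-D vector of residuals, so when fitting 2-D image data the residual array has to be flattened. A minimal sketch, with a made-up Gaussian-blob model standing in for real data (every name and value here is hypothetical):

    import numpy as np
    from scipy.optimize import leastsq

    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx]
    data = np.exp(-((x - 30.0)**2 + (y - 25.0)**2) / 50.0)   # fake image

    def model(p):
        x0, y0, w = p
        return np.exp(-((x - x0)**2 + (y - y0)**2) / w)

    def residuals(p):
        # leastsq needs a 1-D array, hence the ravel()
        return (model(p) - data).ravel()

    pbest, ier = leastsq(residuals, [28.0, 28.0, 40.0])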
I'm in the process of applying the patch and recompiling all of scipy with it. Is there any reason the patch wasn't applied / should I be asking the scipy-dev list this question?

Thanks for the help,
Adam

From adam.ginsburg at colorado.edu Tue Mar 11 17:56:50 2008 From: adam.ginsburg at colorado.edu (Adam Ginsburg) Date: Tue, 11 Mar 2008 15:56:50 -0600 Subject: [SciPy-user] Problem with scipy.optimize.leastsq: Improper input parameters Message-ID:

OK... follow-up... can anyone help me re-compile __minpack.h with the patch in place? I have no idea how to do it, and I can't get scipy itself to compile, I think because it's failing a lot of dependencies. I installed scipy from a .deb package originally, not from source, and I'm really not sure whether it would be worse to try to resolve all those dependencies or just work around the problem.

Thanks,
Adam
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From roger.herikstad at gmail.com Tue Mar 11 20:10:39 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 12 Mar 2008 08:10:39 +0800 Subject: [SciPy-user] Incremental histogram? In-Reply-To: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> Message-ID:

Hi,
Thanks, I'll definitely look into the arrayterator. The array is constructed algorithmically, and is actually a pairwise difference between data points belonging to different clusters. I need to histogram these differences to look for points closer than a certain threshold, and also look at the distribution of the differences. Each cluster can contain as many as a few hundred thousand points, and since the data points are long ints, I quickly run out of memory. What I was thinking of was to use an iterator that would allow me to iterate over chunks of an iterator, doing a histogram on each chunk separately. However, I couldn't find any such iterator in the itertools module. Maybe the arrayterator does that? I'll look into it. Thanks!

~ Roger

On Wed, Mar 12, 2008 at 1:47 AM, Robert Kern wrote:
> On Tue, Mar 11, 2008 at 4:02 AM, Roger Herikstad wrote:
> > Hi list,
> > I need to histogram an array of long ints, but the array itself is too big to keep in memory. I was thinking of using an incremental approach, i.e. assign each sample in the array to the appropriate bin, sample by sample. Right now, I have the array (well, list really) constructed as a generator, and I was wondering if anyone has an efficient algorithm for doing histogram count on such a generator object?
>
> Where does this array usually live? Is it constructed algorithmically, or is it on disk?
>
> Anyways, I would batch up the elements into largish but comfortably-sized arrays, use numpy.histogram() on each, and add together the histograms. If the arrays live on disk in memory-mappable form, I recommend Roberto De Almeida's arrayterator to do the batching for you:
>
> http://pypi.python.org/pypi/arrayterator/0.2.8
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Tue Mar 11 20:22:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Mar 2008 19:22:04 -0500 Subject: [SciPy-user] Incremental histogram? In-Reply-To: References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> Message-ID: <3d375d730803111722o1869d73fw2747724ac05a30b9@mail.gmail.com>

On Tue, Mar 11, 2008 at 7:10 PM, Roger Herikstad wrote:
> Hi,
> Thanks, I'll definitely look into the arrayterator. The array is constructed algorithmically, and is actually a pairwise difference between data points belonging to different clusters. I need to histogram these differences to look for points closer than a certain threshold, and also look at the distribution of the differences. Each cluster can contain as many as a few hundred thousand points, and since the data points are long ints, I quickly run out of memory. What I was thinking of was to use an iterator that would allow me to iterate over chunks of an iterator, doing a histogram on each chunk separately. However, I couldn't find any such iterator in the itertools module.

It's easiest to do manually.

> Maybe the arrayterator does that?

No, it works particularly on arrays. Essentially, it generates slice indices for each of the chunks and yields the slices of the base array. Thus, it works well on memory-mapped arrays. It won't work for you.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Wed Mar 12 00:17:45 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 12 Mar 2008 13:17:45 +0900 Subject: [SciPy-user] RHEL 5 and CENTOS 5 rpms for blas/lapack/numpy/scipy available on ashigabou Message-ID: <47D75969.2040204@ar.media.kyoto-u.ac.jp>

Hi,

Since some people had problems with RHEL/CENTOS, and since the opensuse build system has provided facilities to build rpms for RHEL and CENTOS for some time, I quickly updated the ashigabou repository to handle those distributions. I also added opensuse 10.3 and FC 8, but those did not require any changes:

http://download.opensuse.org/repositories/home:/ashigabou/

(note that it may take time for the rpms to appear there from the time they successfully build on the compiler farm, which they just did).

cheers,

David

From t.charrett at Cranfield.ac.uk Wed Mar 12 06:34:08 2008 From: t.charrett at Cranfield.ac.uk (Charrett, Thomas) Date: Wed, 12 Mar 2008 10:34:08 -0000 Subject: [SciPy-user] FFTW use Message-ID: <17BBF49306835748938EF9728B418E4301FA9C6B@ccexchange-3.cns.cranfield.ac.uk>

Thanks for the prompt reply. I've worked it out now: the problem was not with the FFT (although the numbers did differ slightly from Matlab - rounding errors?) but in my processing, where I cut out a window from the FFT and placed it into a new array but forgot to set the type of that array to complex128, hence the strange results when computing the inverse FFT. I'll probably go back to using the precompiled version of scipy for now as it seems to work fine.
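Returning to the incremental-histogram thread: a minimal sketch of the manual batching Robert describes, assuming bin edges fixed up front and the edges-based semantics of newer numpy.histogram (mind LB's caveat about how out-of-range values are treated in older releases; pair_diffs() is a hypothetical stand-in for the generator producing the differences):

    import numpy as np
    from itertools import islice

    def batches(iterable, size):
        # yield successive lists of at most `size` items
        it = iter(iterable)
        while True:
            chunk = list(islice(it, size))
            if not chunk:
                break
            yield chunk

    edges = np.linspace(0, 1e6, 1001)            # fixed bin edges
    counts = np.zeros(len(edges) - 1, dtype=np.int64)

    for chunk in batches(pair_diffs(), 100000):  # pair_diffs(): your generator
        c, _ = np.histogram(np.asarray(chunk), bins=edges)
        counts += c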
Tom Charrett

From lo.maximo73 at gmail.com Wed Mar 12 10:20:43 2008 From: lo.maximo73 at gmail.com (luis cota) Date: Wed, 12 Mar 2008 10:20:43 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language Message-ID: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com>

Has anyone seen a Python spreadsheet with Python as the core macro language?

I'm aware of IronPython and this is about as close as I have seen to what I am looking for, except that it runs on the wrong OS and can't use the standard python libraries. :)

I've run into a few other options that look promising, just not.quite.there, such as GNU Numeric, OpenOffice, and Picalo. All I'm looking for is a basic grid that I can use to dynamically test functions and to print results for evaluation.

Perhaps what is needed is a new project using a wxPython grid?

- Luis
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gael.varoquaux at normalesup.org Wed Mar 12 10:30:30 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 12 Mar 2008 15:30:30 +0100 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID: <20080312143030.GA27424@phare.normalesup.org>

On Wed, Mar 12, 2008 at 10:20:43AM -0400, luis cota wrote:
> Has anyone seen a Python spreadsheet with Python as the core macro language?

Resolver (http://resolversystems.com/), but it is based on ironpython.

Gaël

From wjdandreta at att.net Wed Mar 12 10:54:44 2008 From: wjdandreta at att.net (Bill Dandreta) Date: Wed, 12 Mar 2008 10:54:44 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID: <47D7EEB4.9020101@att.net>

luis cota wrote:
> Has anyone seen a Python spreadsheet with Python as the core macro language?

http://www.koffice.org/kspread/

From travis at enthought.com Wed Mar 12 11:36:20 2008 From: travis at enthought.com (Travis Vaught) Date: Wed, 12 Mar 2008 10:36:20 -0500 Subject: [SciPy-user] ANN: EuroSciPy 2008 Conference - Leipzig, Germany Message-ID: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com>

Greetings,

We're pleased to announce the EuroSciPy 2008 Conference to be held in Leipzig, Germany on July 26-27, 2008.

http://www.scipy.org/EuroSciPy2008

We are very excited to create a venue for the European community of users of the Python programming language in science. This conference will bring the presentations and collaboration that we've enjoyed at Caltech each year closer to home for many users of SciPy, NumPy and Python generally, with a similar focus and schedule.

Call for Participation:
----------------------
If you are a scientist using Python for your computational work, we'd love to have you formally present your results, methods or experiences. To apply to present a talk at this year's EuroSciPy, please submit an abstract of your talk as a PDF, MS Word or plain text file to euroabstracts at scipy.org. The deadline for abstract submission is April 30, 2008. Papers and/or presentation slides are acceptable and are due by June 15, 2008. Presentations will be allotted 30 minutes.

Registration:
------------
Registration will open April 1, 2008. The registration fee will be 100.00 €
for early registrants and will increase to 150.00 € for late registration. Registration will include breakfast, snacks and lunch for Saturday and Sunday.

Volunteers Welcome:
------------------
If you're interested in volunteering to help organize things, please email us at info at scipy.org.

From bnuttall at uky.edu Wed Mar 12 11:37:18 2008 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Wed, 12 Mar 2008 11:37:18 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID:

I haven't taken advantage of the feature, but I believe Python is the scripting language used across all components of the OpenOffice integrated software.

Brandon

________________________________
From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of luis cota
Sent: Wednesday, March 12, 2008 10:21 AM
To: SciPy-user at scipy.org; enthought-dev at mail.enthought.com
Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language

Has anyone seen a Python spreadsheet with Python as the core macro language? I'm aware of IronPython and this is about as close as I have seen to what I am looking for, except that it runs on the wrong OS and can't use the standard python libraries. :) I've run into a few other options that look promising, just not.quite.there, such as GNU Numeric, OpenOffice, and Picalo. All I'm looking for is a basic grid that I can use to dynamically test functions and to print results for evaluation. Perhaps what is needed is a new project using a wxPython grid?

- Luis
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robince at gmail.com Wed Mar 12 11:56:29 2008 From: robince at gmail.com (Robin) Date: Wed, 12 Mar 2008 15:56:29 +0000 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <20080312143030.GA27424@phare.normalesup.org> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> <20080312143030.GA27424@phare.normalesup.org> Message-ID:

On Wed, Mar 12, 2008 at 2:30 PM, Gael Varoquaux wrote:
> On Wed, Mar 12, 2008 at 10:20:43AM -0400, luis cota wrote:
> > Has anyone seen a Python spreadsheet with Python as the core macro language?
>
> Resolver (http://resolversystems.com/), but it is based on ironpython.

I'm not affiliated in any way, but I think Resolver One looks like a great product. It's not open source but has a sensible license (free for academic/non-commercial use provided any created sheet is under an open source license). They are also working on getting CPython extensions (numpy and maybe one day scipy) working with IronPython: http://code.google.com/p/ironclad/

Robin

From bhendrix at enthought.com Wed Mar 12 11:56:07 2008 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 12 Mar 2008 10:56:07 -0500 Subject: [SciPy-user] [Enthought-dev] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID: <47D7FD17.3020808@enthought.com>

luis cota wrote:
> Has anyone seen a Python spreadsheet with Python as the core macro language?
>
> I'm aware of IronPython and this is about as close as I have seen to what I am looking for, except that it runs on the wrong OS and can't use the standard python libraries.
:)

Do you mean Resolver One (www.resolversystems.com)? There is no technical reason IronPython can't run on Mono instead of the MS CLR, but I have no idea if any particular app is portable.

> I've run into a few other options that look promising, just not.quite.there, such as GNU Numeric, OpenOffice, and Picalo. All I'm looking for is a basic grid that I can use to dynamically test functions and to print results for evaluation.

A quick search shows that PyQt4 and KSpread can be used together via Kross (search for "kspread scripting"). I would be surprised if there isn't a Gnome app with similar functionality.

> Perhaps what is needed is a new project using a wxPython grid?

I guess this really depends on what you're aiming for. If there is another spreadsheet which offers most of the features you want, it's probably more advisable to try to add the python functionality to it rather than writing your own.

Bryce

From aisaac at american.edu Wed Mar 12 13:45:47 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 12 Mar 2008 13:45:47 -0400 Subject: [SciPy-user] [Enthought-dev] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <47D7FD17.3020808@enthought.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> <47D7FD17.3020808@enthought.com> Message-ID:

Gnumeric provides some support for Python scripting:
http://www.gnome.org/projects/gnumeric/download.shtml
(I have not tried this functionality.)

Cheers,
Alan Isaac

From hoytak at gmail.com Wed Mar 12 16:08:01 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Wed, 12 Mar 2008 13:08:01 -0700 Subject: [SciPy-user] scipy installation without root access In-Reply-To: <47C791EB.8090103@ar.media.kyoto-u.ac.jp> References: <47C791EB.8090103@ar.media.kyoto-u.ac.jp> Message-ID: <4db580fd0803121308u297f103mdd6fa10d290ff1be@mail.gmail.com>

I know this thread is old, but for the sake of completeness and future reference I'll share a new find. It seems important on some systems (e.g. my school system, running suse 10.1) and with some of the packages (e.g. lapack compiling from source) to also set the environment variable F77=gfortran during the build process. Otherwise it seems to build some with g77 and some with gfortran, which then causes weird issues that can take a while to track down.

--Hoyt

On Thu, Feb 28, 2008 at 10:02 PM, David Cournapeau wrote:
> Alastair Basden wrote:
> > Hi Hoyt/David,
> > thanks for the replies... the problem is that it installs fine without errors, but the special.kv function doesn't work, and since my code needs this, it's a problem!
> >
> > Does anyone have any idea how I could try to correct this? From what I can gather, the function is in scipy/special/amos/zbesk.f
>
> Give us the build log. It is possible that something went wrong during the build, even if successful.
>
> cheers,
>
> David
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From R.Springuel at umit.maine.edu Wed Mar 12 16:23:08 2008 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 12 Mar 2008 16:23:08 -0400 Subject: [SciPy-user] "in" vs. looped "==" Message-ID: <47D83BAC.4010504@umit.maine.edu>

I have some code that uses a "1. in x" statement early on, where x is a rank-1 ndarray with dtype=float. This statement returns False and the program continues accordingly.
However, later on in the same code I loop over the elements of x, and the statement "x[i] == 1" is evaluated and comes back as True for some elements of x. To my mind, that shouldn't be happening. Is there a difference in how "1. in x" (i.e. the __contains__ method of ndarray) and "x[i] == 1." (i.e. the __eq__ method of float) behave that I'm not aware of?

--
R. Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By appointment only

From ggellner at uoguelph.ca Wed Mar 12 16:32:09 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 12 Mar 2008 14:32:09 -0600 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID: <20080312203209.GA6774@giton>

I find Picalo a very interesting 'spreadsheet'. Not very traditional, but similar in functionality, with a serious python scripting interface.

www.picalo.org

Gabriel

On Wed, Mar 12, 2008 at 10:20:43AM -0400, luis cota wrote:
> Has anyone seen a Python spreadsheet with Python as the core macro language?
>
> I'm aware of IronPython and this is about as close as I have seen to what I am looking for, except that it runs on the wrong OS and can't use the standard python libraries. :)
>
> I've run into a few other options that look promising, just not.quite.there, such as GNU Numeric, OpenOffice, and Picalo. All I'm looking for is a basic grid that I can use to dynamically test functions and to print results for evaluation.
>
> Perhaps what is needed is a new project using a wxPython grid?
>
> - Luis
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From haase at msg.ucsf.edu Wed Mar 12 16:40:09 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 12 Mar 2008 21:40:09 +0100 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID:

Hi Luis,

Could you clarify what you mean by "Python Spreadsheet"? If you want something entirely implemented in Python, then most of OpenOffice, KDE-office, Gnumeric and so on would be excluded...

Also: maybe it would help to say something about what this is for?! Do you need cross-platform support for Linux, OS-X and Windows?

Since the GUI part is practically given with some simple wxPython code, one could instead ask for a "spreadsheet engine"! This would be a (small) set of functions which can analyse the dependencies of a given spreadsheet and evaluate the cells in a (non-circular) sequence taking the found dependencies into account. Might already exist...!?

Regards,
Sebastian

On Wed, Mar 12, 2008 at 3:20 PM, luis cota wrote:
> Has anyone seen a Python spreadsheet with Python as the core macro language?
>
> I'm aware of IronPython and this is about as close as I have seen to what I am looking for, except that it runs on the wrong OS and can't use the standard python libraries. :)
>
> I've run into a few other options that look promising, just not.quite.there, such as GNU Numeric, OpenOffice, and Picalo. All I'm looking for is a basic grid that I can use to dynamically test functions and to print results for evaluation.
>
> Perhaps what is needed is a new project using a wxPython grid?
>
> - Luis
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From aisaac at american.edu Wed Mar 12 17:16:06 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 12 Mar 2008 17:16:06 -0400 Subject: [SciPy-user] "in" vs. looped "==" In-Reply-To: <47D83BAC.4010504@umit.maine.edu> References: <47D83BAC.4010504@umit.maine.edu> Message-ID:

On Wed, 12 Mar 2008, "R. Padraic Springuel" apparently wrote:
> I have some code that uses a "1. in x" statement early on, where x is a rank-1 ndarray with dtype=float. This statement returns False and the program continues accordingly. However, later on in the same code I loop over the elements of x, and the statement "x[i] == 1" is evaluated and comes back as True for some elements of x. To my mind, that shouldn't be happening. Is there a difference in how "1. in x" (i.e. the __contains__ method of ndarray) and "x[i] == 1." (i.e. the __eq__ method of float) behave that I'm not aware of?

There is a difference, if I recall correctly, but it will not cause this. The difference is that ``in`` will first check an item for ``is`` and then check it for ``__eq__``. But since this would not affect your results, the most likely possibility seems to be that you are changing the array somehow. A small example would help.

Cheers,
Alan Isaac

From pkienzle at nist.gov Wed Mar 12 17:59:32 2008 From: pkienzle at nist.gov (Paul Kienzle) Date: Wed, 12 Mar 2008 17:59:32 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: ; from haase@msg.ucsf.edu on Wed, Mar 12, 2008 at 09:40:09PM +0100 References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> Message-ID: <20080312175932.E1388113@jazz.ncnr.nist.gov>

On Wed, Mar 12, 2008 at 09:40:09PM +0100, Sebastian Haase wrote:
> Since the GUI part is practically given with some simple wxPython code, one could instead ask for a "spreadsheet engine"! This would be a (small) set of functions which can analyse the dependencies of a given spreadsheet and evaluate the cells in a (non-circular) sequence taking the found dependencies into account. Might already exist...

Indeed. See attached. Not the highest performance code, but it should be enough to get one started. All you need is to parse the expressions to identify cell references for building the dependency graph.

- Paul

-------------- next part --------------
# This program is public domain
# Author: Paul Kienzle
"""
Helper classes for implementing constraint dependency ordering.

Constraints must be evaluated in the correct order.  If parameter A
depends on parameter B, then parameter B must be evaluated first.
This ConstraintDAG class allows you to specify the links between the
constraints, and from there extract an order in which to evaluate them.
"""

# TODO: the current algorithm is recursive, and is subject to max recursion
# depth errors if too many constraints are used.  Need to construct a
# non-recursive algorithm for ordering constraints.

class Error(Exception):
    """Exception raised for errors in the constraint module."""
    pass

class CyclicError(Exception):
    """The constraint expressions have a cycle."""
    pass

class ParsingError(Exception):
    """A constraint expression is invalid."""
    pass

def _order_children(depends_on, target, visited, stack):
    """
    Recursion for dependency ordering algorithm.
    """
    #print target,visited,stack
    if target in stack:
        raise CyclicError, "Cyclic constraint containing parameter %s"%(target)
    if target not in depends_on or target in visited:
        return []
    stack += [target]
    visited.add(target)
    order = []
    for child in depends_on[target]:
        #print "checking",target,"->",child
        order += _order_children(depends_on, child, visited, stack)
    return order+[target]

class ConstraintDAG(object):
    """
    Structure to store constraint dependencies.
    """
    def __init__(self, pairs=[]):
        self.depends_on = {}
        for a,b in pairs:
            self.link(a,b)

    def link(self,a,b):
        """
        Make a depend on b.
        """
        if a not in self.depends_on:
            self.depends_on[a] = []
        self.depends_on[a].append(b)

    def links(self,a,b):
        """
        Set all dependencies of a.
        """
        if b == []:
            if a in self.depends_on:
                del self.depends_on[a]
        else:
            self.depends_on[a] = b

    def order(self):
        """
        Return the set of nodes A in the order they need to be evaluated,
        raising CyclicError if there are circular dependencies.
        """
        visited = set()
        order = []
        for target in self.depends_on.iterkeys():
            #print "checking tree",target
            order += _order_children(self.depends_on, target, visited, [])
        return order

# ========= Test code ========
def check(msg,n,items,partialorder):
    """
    Verify that the list n contains the given items, and that the list
    satisfies the partial ordering given by the pairs in partial order.
    """
    success = True  # optimism
    if set(n) != set(items) or len(n) != len(items):
        success = False
        print msg,"expect",n,"to contain only",items
    for lo,hi in partialorder:
        if lo in n and hi in n and n.index(hi) <= n.index(lo):
            print msg,"expect",lo,"before",hi
            success = False
    if success:
        print msg,"passes"

def test():
    import numpy
    DAG = ConstraintDAG([(7,2),(5,1),(4,1),(1,2),(1,3),(6,5)])
    n = DAG.order()
    check("test1",n,[1,4,5,6,7],[(1,3),(1,4),(1,5),(5,6)])
    DAG = ConstraintDAG([(1,6),(3,7),(4,7),(7,6),(7,5),(2,3)])
    n = DAG.order()
    check("test1-renumbered",n,[1,2,3,4,7],[(7,5),(7,4),(7,3),(3,2)])
    DAG = ConstraintDAG(numpy.array([(7,2),(5,1),(4,1),(1,2),(1,3),(6,5)]))
    n = DAG.order()
    check("test1-numpy",n,[1,4,5,6,7],[(1,3),(1,4),(1,5),(5,6)])
    DAG = ConstraintDAG([(1,4),(2,3),(8,4)])
    n = DAG.order()
    check("test2",n,[1,2,8],[])
    DAG = ConstraintDAG([(1,4),(4,3),(4,5),(5,1)])
    try:
        n = DAG.order()
        print "test3 expected CyclicError but got",n
    except CyclicError,msg:
        print "test3 correctly fails - error message:",msg
    # large test for gross speed check
    A = numpy.random.randint(4000,size=(10000,2))
    A[:,1] += 4000  # Avoid cycles
    DAG = ConstraintDAG(A)
    n = DAG.order()
    # depth test
    k = 100
    A = numpy.array([range(0,k),range(1,k+1)]).T
    DAG = ConstraintDAG(A)
    n = DAG.order()
    A = numpy.array([range(1,k+1),range(0,k)]).T
    DAG = ConstraintDAG(A)
    n = DAG.order()

if __name__ == "__main__":
    test()

From lo.maximo73 at gmail.com Wed Mar 12 22:55:16 2008 From: lo.maximo73 at gmail.com (luis cota) Date: Wed, 12 Mar 2008 22:55:16 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <20080312175932.E1388113@jazz.ncnr.nist.gov> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> <20080312175932.E1388113@jazz.ncnr.nist.gov> Message-ID: <2e598d7f0803121955s5609307yc2e5eba249056dc3@mail.gmail.com>

Thanks for the responses - I've used ResolverOne quite a bit and am fond of its capabilities. My primary needs are to have a "spreadsheet interface" that allows me to call functions using "=myfunc(A1, A2)" which are defined in python code.
Most of my work makes use of standard python libraries, so using ResolverOne had me rewriting a lot of the same code that I don't want to rewrite (math/science manipulation stuff).

Also, I'd like to use it as a viewer of intermediate states of data, e.g. seeing what the data looks like at each step in the overall analytic system.

I've checked out a lot of those tools - picalo looks very cool, though not quite what I'm looking for at the moment.

OpenOffice looked so promising, until I began inspecting the PyUNO bridge - that stuff looks very heavy when compared to Resolver, which would be exactly right if I could use C Python libs....

I've also thought about embedding Python into MS Excel through XLW - this currently seems like a great option, though I'm trying to avoid windows code altogether. Hoping someone has some options... plz!

- Luis

On Wed, Mar 12, 2008 at 5:59 PM, Paul Kienzle wrote:
> On Wed, Mar 12, 2008 at 09:40:09PM +0100, Sebastian Haase wrote:
> > Since the GUI part is practically given with some simple wxPython code, one could instead ask for a "spreadsheet engine"! This would be a (small) set of functions which can analyse the dependencies of a given spreadsheet and evaluate the cells in a (non-circular) sequence taking the found dependencies into account. Might already exist...
>
> Indeed. See attached. Not the highest performance code, but it should be enough to get one started. All you need is to parse the expressions to identify cell references for building the dependency graph.
>
> - Paul
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zane at ideotrope.org Wed Mar 12 23:07:27 2008 From: zane at ideotrope.org (Zane Selvans) Date: Wed, 12 Mar 2008 20:07:27 -0700 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? In-Reply-To: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> Message-ID: <47D89A6F.8040807@ideotrope.org>

I'm looking for a Python Runge-Kutta diffeq integrator, and I've found conflicting information online as to whether this is included within SciPy (the SciPy to Matlab comparison says it's included in Matlab, and not in SciPy, but some mailing list commentary says it's part of the odepack).

Does anyone know for sure? I don't actually know a whole lot about how numerical ode solvers work (and truthfully, I don't want to learn, which is why I'm hoping I can just find an open source solver...)

The reason I'm asking instead of just playing with scipy.integrate.ode is for some reason my SciPy installation seems to be having problems loading the ode module. I'm using Chris Fonnesbeck's "SciPy Superpack" installation.
I get this:

    In [170]: from scipy.integrate import ode
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)

    /Users/zane/svn/stress/pySatStress/ in ()

    /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/integrate/__init__.py in ()
          7 from info import __doc__
          8
    ----> 9 from quadrature import *
         10 from odepack import *
         11 from quadpack import *

    /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/integrate/quadrature.py in ()
          3 'cumtrapz','newton_cotes','composite']
          4
    ----> 5 from scipy.special.orthogonal import p_roots
          6 from scipy.special import gammaln
          7 from numpy import sum, ones, add, diff, isinf, isscalar, \

    /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/__init__.py in ()
          6 #from special_version import special_version as __version__
          7
    ----> 8 from basic import *
          9 import specfun
         10 import orthogonal

    /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/basic.py in ()
          6
          7 from numpy import *
    ----> 8 from _cephes import *
          9 import types
         10 import specfun

    ImportError: dlopen(/Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so, 2): Library not loaded: /usr/local/lib/libgfortran.2.dylib
      Referenced from: /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so
      Reason: image not found

And lo, it turns out I don't have libgfortran.2.dylib on my system. I do have libgfortran.3.dylib though. If I create a symbolic link from 3 to 2 and then try the import, I get a different error:

    ImportError: dlopen(/Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so, 2): Symbol not found: __gfortran_pow_r4_i4
      Referenced from: /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so
      Expected in: /usr/local/lib/libgfortran.2.dylib

It turns out there was a libgfortran.2.0.0.dylib included with the superpack, but it's for the Mach-O architecture, not Intel, which is what I've got (MacBook Pro).

Any insight appreciated,
Thanks!

Zane

--
Zane Selvans
Amateur Human
zane at ideotrope.org
303/815-6866
PGP Key: 55E0815F
-------------- next part -------------- A non-text attachment was scrubbed... Name: zane.vcf Type: text/x-vcard Size: 254 bytes Desc: not available URL:

From peridot.faceted at gmail.com Wed Mar 12 23:25:57 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 12 Mar 2008 23:25:57 -0400 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? In-Reply-To: <47D89A6F.8040807@ideotrope.org> References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> Message-ID:

On 12/03/2008, Zane Selvans wrote:
> I'm looking for a Python Runge-Kutta diffeq integrator, and I've found conflicting information online as to whether this is included within SciPy (the SciPy to Matlab comparison says it's included in Matlab, and not in SciPy, but some mailing list commentary says it's part of the odepack).
>
> Does anyone know for sure? I don't actually know a whole lot about how numerical ode solvers work (and truthfully, I don't want to learn, which is why I'm hoping I can just find an open source solver...)

Scipy has (at least) two good ODE solvers, scipy.integrate.odeint and scipy.integrate.vode.
I don't think either is Runge-Kutta; they're both based on established FORTRAN codes that implement adaptive solvers that automatically switch between stiff and non-stiff solvers as needed.

Anne

From s.mientki at ru.nl Thu Mar 13 05:03:33 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 13 Mar 2008 10:03:33 +0100 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803121955s5609307yc2e5eba249056dc3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> <20080312175932.E1388113@jazz.ncnr.nist.gov> <2e598d7f0803121955s5609307yc2e5eba249056dc3@mail.gmail.com> Message-ID: <47D8EDE5.6080406@ru.nl>

hi Luis,

are you looking for something like this:
http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_swb_akto.html
http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_signal_workbench.html

cheers,
Stef

luis cota wrote:
> Thanks for the responses - I've used ResolverOne quite a bit and am fond of its capabilities. My primary needs are to have a "spreadsheet interface" that allows me to call functions using "=myfunc(A1, A2)" which are defined in python code.
>
> Most of my work makes use of standard python libraries, so using ResolverOne had me rewriting a lot of the same code that I don't want to rewrite (math/science manipulation stuff).
>
> Also, I'd like to use it as a viewer of intermediate states of data, e.g. seeing what the data looks like at each step in the overall analytic system.
>
> I've checked out a lot of those tools - picalo looks very cool, though not quite what I'm looking for at the moment.
>
> OpenOffice looked so promising, until I began inspecting the PyUNO bridge - that stuff looks very heavy when compared to Resolver, which would be exactly right if I could use C Python libs....
>
> I've also thought about embedding Python into MS Excel through XLW - this currently seems like a great option, though I'm trying to avoid windows code altogether. Hoping someone has some options... plz!
>
> - Luis
>
> On Wed, Mar 12, 2008 at 5:59 PM, Paul Kienzle wrote:
> > On Wed, Mar 12, 2008 at 09:40:09PM +0100, Sebastian Haase wrote:
> > > Since the GUI part is practically given with some simple wxPython code, one could instead ask for a "spreadsheet engine"! This would be a (small) set of functions which can analyse the dependencies of a given spreadsheet and evaluate the cells in a (non-circular) sequence taking the found dependencies into account. Might already exist ...
> >
> > Indeed. See attached. Not the highest performance code, but it should be enough to get one started. All you need is to parse the expressions to identify cell references for building the dependency graph.
> >
> > - Paul
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
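A minimal sketch of the odeint route Anne mentions, using a lightly damped oscillator as a stand-in problem (odeint integrates y' = f(y, t) for a 1-D state vector):

    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t):
        # y = [position, velocity]
        return [y[1], -0.1 * y[1] - y[0]]

    t = np.linspace(0.0, 20.0, 201)
    y = odeint(rhs, [1.0, 0.0], t)   # y.shape == (201, 2)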
From grh at mur.at Thu Mar 13 05:52:05 2008 From: grh at mur.at (Georg Holzmann) Date: Thu, 13 Mar 2008 10:52:05 +0100 Subject: [SciPy-user] [ann] aureservoir0.1 - library for analog reservoir computing neural networks (Echo State Networks) Message-ID: <47D8F945.2060109@mur.at>

Hallo!

This is a release of aureservoir, an open-source C++ library for analog reservoir computing neural networks (mainly Echo State Networks) with bindings to python/numpy. The goals of this library are low memory usage, efficiency, extensive testing, and easy extension with new algorithms. Networks can be used with double or single precision floating point and various different activation functions; simulation, training and adaptation algorithms are implemented and can be exchanged at runtime.

For an introduction to reservoir computing neural networks, a feature overview, installation instructions, examples and documentation see:

http://aureservoir.sourceforge.net/

Any feedback is welcome, and I am happy to help out with installation/usage problems!

LG
Georg

From aisaac at american.edu Thu Mar 13 09:49:48 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 13 Mar 2008 09:49:48 -0400 Subject: [SciPy-user] Python Spreadsheet with Python as Core Macro Language In-Reply-To: <2e598d7f0803121955s5609307yc2e5eba249056dc3@mail.gmail.com> References: <2e598d7f0803120720s21304423xf7f1a5a192d447f3@mail.gmail.com> <20080312175932.E1388113@jazz.ncnr.nist.gov> <2e598d7f0803121955s5609307yc2e5eba249056dc3@mail.gmail.com> Message-ID:

On Wed, 12 Mar 2008, luis cota apparently wrote:
> Resolver, which would be exactly right if I could use C Python libs.

Ask them about this? My guess is that they will be very interested.

> I've also thought about embedding Python into MS Excel through XLW - this currently seems like a great option, though I'm trying to avoid windows code altogether.

Might be easier with Gnumeric?

fwiw,
Alan Isaac

From 302302 at centrum.cz Thu Mar 13 08:49:27 2008 From: 302302 at centrum.cz (302302) Date: Thu, 13 Mar 2008 13:49:27 +0100 Subject: [SciPy-user] How to forbid a standard output of OpenOpt solvers Message-ID: <200803131349.1651@centrum.cz>

Hi,
I've used the linear solver (cvxopt_glpk) from OpenOpt. Can I somehow forbid the printing of a description of the solving process to standard output? I supposed there should be some parameter in the running command, but I can't find any useful documentation on this topic.
Example of the standard output of the solver:

    starting solver cvxopt_glpk (license: GPL v.2) with problem unnamed
    lpx_simplex: original LP has 135 rows, 51 columns, 755 non-zeros
    lpx_simplex: presolved LP has 80 rows, 51 columns, 700 non-zeros
    lpx_adv_basis: size of triangular part = 80
        0:   objval = 0.000000000e+00   infeas = 1.000000000e+00 (0)
       56:   objval = 6.888030023e+01   infeas = 0.000000000e+00 (0)
    *  56:   objval = 6.888030023e+01   infeas = 0.000000000e+00 (0)
    *  86:   objval = 4.104563153e+01   infeas = 0.000000000e+00 (0)
    OPTIMAL SOLUTION FOUND
    solver cvxopt_glpk has finished solving the problem unnamed
    istop: 1000 (optimal)
    Solver: Time Elapsed = 0.01  CPU Time Elapsed = 0.01
    objFunValue: 41.0456315344 (feasible, max constraint = 8.88178e-16)

The Python commands executed:

    p = LP(f, A=A, b=b, lb=lb, ub=ub)
    r = p.solve('cvxopt_glpk')

Thanks Cz

From ggellner at uoguelph.ca Thu Mar 13 12:27:05 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 13 Mar 2008 10:27:05 -0600 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? In-Reply-To: <47D89A6F.8040807@ideotrope.org> References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> Message-ID: <20080313162705.GA6473@giton>

On Wed, Mar 12, 2008 at 08:07:27PM -0700, Zane Selvans wrote:
> I'm looking for a Python Runge-Kutta diffeq integrator, and I've found conflicting information online as to whether this is included within SciPy (the SciPy to Matlab comparison says it's included in Matlab, and not in SciPy, but some mailing list commentary says it's part of the odepack).

Odepack is the name of the Fortran code that the scipy odeint wraps. It is an adaptive linear multistep algorithm and not in the Runge-Kutta family. It works well, and depending on your problem can be very efficient. Is there any reason you need Runge-Kutta specifically? I imagine you want a Dormand-Prince pair since you mention Matlab. In that case your only option at this stage is to use PyDSTool, which has its own model syntax (not normal python functions). Or you could try out Sage, which has an experimental wrapper for the GSL codes (which include a range of Runge-Kutta algorithms).

> Does anyone know for sure? I don't actually know a whole lot about how numerical ode solvers work (and truthfully, I don't want to learn, which is why I'm hoping I can just find an open source solver...)

If you don't have specific needs, I would just use odeint (scipy.integrate.odeint); check out my example in the cookbook:

http://www.scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell

I use a fortran callback for speed, but using a normal python function is the same. If it is still confusing I can send you an easier example.

I don't have a mac . . . but can you do:

    from scipy.integrate import odeint

What does scipy.test() do? It looks like a broken installation.

Gabriel

> The reason I'm asking instead of just playing with scipy.integrate.ode is for some reason my SciPy installation seems to be having problems loading the ode module. I'm using Chris Fonnesbeck's "SciPy Superpack" installation.
> I get this:
>
>     In [170]: from scipy.integrate import ode
>     ---------------------------------------------------------------------------
>     ImportError                               Traceback (most recent call last)
>
>     /Users/zane/svn/stress/pySatStress/ in ()
>
>     /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/integrate/__init__.py in ()
>           7 from info import __doc__
>           8
>     ----> 9 from quadrature import *
>          10 from odepack import *
>          11 from quadpack import *
>
>     /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/integrate/quadrature.py in ()
>           3 'cumtrapz','newton_cotes','composite']
>           4
>     ----> 5 from scipy.special.orthogonal import p_roots
>           6 from scipy.special import gammaln
>           7 from numpy import sum, ones, add, diff, isinf, isscalar, \
>
>     /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/__init__.py in ()
>           6 #from special_version import special_version as __version__
>           7
>     ----> 8 from basic import *
>           9 import specfun
>          10 import orthogonal
>
>     /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/basic.py in ()
>           6
>           7 from numpy import *
>     ----> 8 from _cephes import *
>           9 import types
>          10 import specfun
>
>     ImportError: dlopen(/Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so, 2): Library not loaded: /usr/local/lib/libgfortran.2.dylib
>       Referenced from: /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so
>       Reason: image not found
>
> And lo, it turns out I don't have libgfortran.2.dylib on my system. I do have libgfortran.3.dylib though. If I create a symbolic link from 3 to 2 and then try the import, I get a different error:
>
>     ImportError: dlopen(/Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so, 2): Symbol not found: __gfortran_pow_r4_i4
>       Referenced from: /Library/Python/2.5/site-packages/scipy-0.7.0.dev3998-py2.5-macosx-10.5-i386.egg/scipy/special/_cephes.so
>       Expected in: /usr/local/lib/libgfortran.2.dylib
>
> It turns out there was a libgfortran.2.0.0.dylib included with the superpack, but it's for the Mach-O architecture, not Intel, which is what I've got (MacBook Pro).
>
> Any insight appreciated,
> Thanks!
>
> Zane
>
> --
> Zane Selvans
> Amateur Human
> zane at ideotrope.org
> 303/815-6866
> PGP Key: 55E0815F
> begin:vcard
> fn:Zane Selvans
> n:Selvans;Zane
> org:Earthlings
> adr:;;200 S. Parkwood Ave.;Pasadena;CA;91107;USA
> email;internet:zane at ideotrope.org
> title:Amateur Human
> tel;cell:(303) 815-6866
> x-mozilla-html:TRUE
> url:https://ideotrope.org
> version:2.1
> end:vcard
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From mcleane at math.ubc.ca Thu Mar 13 12:49:51 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Thu, 13 Mar 2008 09:49:51 -0700 (PDT) Subject: [SciPy-user] How to forbid a standard output of OpenOpt solvers In-Reply-To: <200803131349.1651@centrum.cz> References: <200803131349.1651@centrum.cz> Message-ID:

On Thu, 13 Mar 2008, 302302 wrote:
> Hi,
> I've used the linear solver (cvxopt_glpk) from OpenOpt. Can I somehow forbid the printing of a description of the solving process to standard output?
> I supposed there should be some parameter in the running command, but I can't find any useful documentation with this topic.
>

My clunky solution to this problem in general is by defining a class:

class StandardOutputEater:
    def write(self, string):
        pass

Then when I need to disable standard output:

# turn off standard output
saveout = sys.stdout
sys.stdout = StandardOutputEater()

When you want standard output again, call:

sys.stdout = saveout

This gets the job done in all such cases. This is also useful to quickly turn off some self-reporting messages every once in a while. (Such as when you demo some code still under development to an interested party.)

Cheers,

Mclean

From dmitrey.kroshko at scipy.org Thu Mar 13 12:54:11 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 13 Mar 2008 18:54:11 +0200
Subject: [SciPy-user] How to forbid a standard output of OpenOpt solvers
In-Reply-To: <200803131349.1651@centrum.cz>
References: <200803131349.1651@centrum.cz>
Message-ID: <47D95C33.9040701@scipy.org>

hi,
try using p.iprint=0 (final output only) and, moreover, p.iprint < 0 (no output)

Regards, D.

302302 wrote:
> Hi,
> I've used linear solver (cvxopt_glpk) from OpenOpt. Can I somehow forbid a printing description of solving problem to the standard output?
> I supposed there should be some parameter in the running command, but I can't find any useful documentation with this topic.
>
> example of standard output of the solver:
>
> starting solver cvxopt_glpk (license: GPL v.2) with problem unnamed
> lpx_simplex: original LP has 135 rows, 51 columns, 755 non-zeros
> lpx_simplex: presolved LP has 80 rows, 51 columns, 700 non-zeros
> lpx_adv_basis: size of triangular part = 80
> 0: objval = 0.000000000e+00 infeas = 1.000000000e+00 (0)
> 56: objval = 6.888030023e+01 infeas = 0.000000000e+00 (0)
> * 56: objval = 6.888030023e+01 infeas = 0.000000000e+00 (0)
> * 86: objval = 4.104563153e+01 infeas = 0.000000000e+00 (0)
> OPTIMAL SOLUTION FOUND
> solver cvxopt_glpk has finished solving the problem unnamed
> istop: 1000 (optimal)
> Solver: Time Elapsed = 0.01 CPU Time Elapsed = 0.01
> objFunValue: 41.0456315344 (feasible, max constraint = 8.88178e-16)
>
> the Python commands executed:
>
> p = LP(f, A=A, b=b, lb=lb, ub=ub)
> r = p.solve('cvxopt_glpk')
>
> Thanks Cz

From lou_boog2000 at yahoo.com Thu Mar 13 13:17:18 2008
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Thu, 13 Mar 2008 10:17:18 -0700 (PDT)
Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX?
In-Reply-To: <20080313162705.GA6473@giton>
Message-ID: <383327.54699.qm@web34401.mail.mud.yahoo.com>

Gabriel,

When you use a callback that's an extension (in this case a Fortran one for you), do you wrap it in Python code or just import it from a shared library and give it directly to the integrator (as the vector field in this case, I would guess)? What I am thinking is that it would be better to avoid the Python overhead and call the extension directly from within odeint. Is that easy to do? I will be trying to do something similar for numerical quadrature and speed will be important. Can you show me a simple example of what you do? Thanks.

--- Gabriel Gellner wrote:

> I use a Fortran callback for speed, but using a
> normal Python function is the
> same. If it's still confusing I can send you an easier
> example.

-- Lou Pecora, my views are my own.
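A minimal plain-Python callback of the sort Gabriel describes would look something like this (a sketch only; the damped-oscillator right-hand side is an illustration, not his cookbook example):

from numpy import linspace
from scipy.integrate import odeint

def rhs(y, t):
    # y[0] is position, y[1] is velocity: a damped harmonic oscillator
    return [y[1], -0.1*y[1] - y[0]]

t = linspace(0.0, 50.0, 501)
y = odeint(rhs, [1.0, 0.0], t)    # y has shape (501, 2), one row per time point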
From 302302 at centrum.cz Thu Mar 13 14:39:35 2008
From: 302302 at centrum.cz (302302)
Date: Thu, 13 Mar 2008 19:39:35 +0100
Subject: [SciPy-user] How to forbid a standard output of OpenOpt solvers
In-Reply-To: <47D95C33.9040701@scipy.org>
References: <200803131349.1651@centrum.cz> <200803131349.1651@centrum.cz> <47D95C33.9040701@scipy.org>
Message-ID: <200803131939.30595@centrum.cz>

Great! Thanks a lot. It works perfectly.

Cz
______________________________________________________________

>
>hi,
>try using p.iprint=0 (final output only) and, moreover, p.iprint < 0 (no output)
>
>Regards, D.
>
>302302 wrote:
>> Hi,
>> I've used linear solver (cvxopt_glpk) from OpenOpt. Can I somehow forbid a printing description of solving problem to the standard output?
>> I supposed there should be some parameter in the running command, but I can't find any useful documentation with this topic.
>>
>> example of standard output of the solver:
>>
>> starting solver cvxopt_glpk (license: GPL v.2) with problem unnamed
>> lpx_simplex: original LP has 135 rows, 51 columns, 755 non-zeros
>> lpx_simplex: presolved LP has 80 rows, 51 columns, 700 non-zeros
>> lpx_adv_basis: size of triangular part = 80
>> 0: objval = 0.000000000e+00 infeas = 1.000000000e+00 (0)
>> 56: objval = 6.888030023e+01 infeas = 0.000000000e+00 (0)
>> * 56: objval = 6.888030023e+01 infeas = 0.000000000e+00 (0)
>> * 86: objval = 4.104563153e+01 infeas = 0.000000000e+00 (0)
>> OPTIMAL SOLUTION FOUND
>> solver cvxopt_glpk has finished solving the problem unnamed
>> istop: 1000 (optimal)
>> Solver: Time Elapsed = 0.01 CPU Time Elapsed = 0.01
>> objFunValue: 41.0456315344 (feasible, max constraint = 8.88178e-16)
>>
>> the Python commands executed:
>>
>> p = LP(f, A=A, b=b, lb=lb, ub=ub)
>> r = p.solve('cvxopt_glpk')
>>
>> Thanks Cz
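Putting the thread together, the quiet version of the original run looks like this (a sketch: the objective vector and constraints are made-up stand-ins for Cz's data, and the import path assumes the scikits layout from the OpenOpt announcements):

import numpy as np
from scikits.openopt import LP

f = np.array([1.0, 2.0])                            # objective coefficients
A = np.array([[1.0, 1.0]]); b = np.array([10.0])    # A x <= b
lb = np.zeros(2); ub = 5.0*np.ones(2)               # box bounds

p = LP(f, A=A, b=b, lb=lb, ub=ub)
p.iprint = -1    # 0 would print the final summary only; < 0 silences the solver
r = p.solve('cvxopt_glpk')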
From justus.schwabedal at gmx.de Thu Mar 13 18:57:09 2008
From: justus.schwabedal at gmx.de (Justus Schwabedal)
Date: Thu, 13 Mar 2008 23:57:09 +0100
Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX?
In-Reply-To: <47D89A6F.8040807@ideotrope.org>
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org>
Message-ID: <951660A7-3AB2-45F4-8917-3FC13B4D0401@gmx.de>

> The reason I'm asking instead of just playing with
> scipy.integrate.ode is for some reason my SciPy installation seems
> to be having problems loading the ode module. I'm using Chris
> Fonnesbeck's "SciPy Superpack" installation.

Usually something like

from scipy.integrate import ode

should work. However, the installation might not be so widely tested. Try using Fink to install Python and SciPy. It will most likely work.

Cheers, js

From zane at ideotrope.org Thu Mar 13 19:07:37 2008
From: zane at ideotrope.org (Zane Selvans)
Date: Thu, 13 Mar 2008 16:07:37 -0700
Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX?
In-Reply-To: <20080313162705.GA6473@giton>
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton>
Message-ID: <47D9B3B9.9030603@ideotrope.org>

Gabriel Gellner wrote:
> On Wed, Mar 12, 2008 at 08:07:27PM -0700, Zane Selvans wrote:
>> I'm looking for a Python Runge-Kutta diffeq integrator, and I've found
>> conflicting information online as to whether this is included within SciPy
>> (the SciPy to Matlab comparison says it's included in Matlab, and not in
>> SciPy, but some mailing list commentary says it's part of the odepack).
>>
> Odepack is the name of the Fortran code that the scipy odeint wraps. It is an
> adaptive linear multistep algorithm and not in the Runge-Kutta family. It
> works well, and depending on your problem can be very efficient. Is there any
> reason you need Runge-Kutta specifically?

The only reason I ask about Runge-Kutta specifically is I know two people who have the solution to my problem coded up already, one in Fortran, and one in C, and they both used a Runge-Kutta integrator. I want to open-source my model code, but it depends on their codes, and if I can't get them to let me publicize their work, I'm going to have to re-write it from scratch... unless someone else has already done it.

I don't know if it will help, but the code is calculating displacements within a radially symmetric viscoelastic body due to a time-variable perturbation to the gravitational potential, so the equations that are coupled are the equations of motion in a continuous viscoelastic medium, and the equations that determine the gravitational potential.

I don't know why they would have both chosen to write their own numerical solutions from scratch if something publicly available would have worked... but I guess it's possible. A lot of people don't seem to like to build on the work of others.

> I don't have a Mac . . . but can you do:
> from scipy.integrate import odeint
> What does scipy.test() do? It looks like a broken installation.

No, I get the same errors when I try that import. It's a problem with the gfortran library that was installed (v3 instead of v2), so yeah, broken install. scipy.test() also fails miserably - it can't import the "nose" package.

I went with the binary "superpack" (.egg) because every time I've previously tried to get all of the various prerequisites installed and playing nice, it's turned into some kind of nightmare and I've given up altogether... and this seemed to "just work" (graphical interface and all). Alas, it's bleeding edge SVN nightly builds, and so probably pretty unreliable. Installing under Fink it's impossible to get graphics working so far as I can tell, and the main thing I want to do is use Matplotlib and basemap for model visualization.

I really (really) wish there was some semi-official, and well tested, OS X package that just included everything I could possibly want for doing analysis and visualization with Python. :(

Zane

--
Zane Selvans
Amateur Human
zane at ideotrope.org
303/815-6866
PGP Key: 55E0815F

From dwarnold45 at suddenlink.net Fri Mar 14 02:32:07 2008
From: dwarnold45 at suddenlink.net (David Arnold)
Date: Thu, 13 Mar 2008 23:32:07 -0700
Subject: [SciPy-user] Matplotlib improper rendering of latex
Message-ID:

All,

I have:

#! /usr/local/bin/python

from pylab import *

x=arange(0,2,0.01)
y=2*sin(2*pi*(x-pi/4))

plot(x,y)
xlabel('x-axis')
ylabel('y-axis')
title(r'$y=2\sin (2\pi(x-\pi/4))$')

show()

Running gives me this error:

code $ python simple.py --verbose-helpful
matplotlib data path /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data
$HOME=/Users/darnold
CONFIGDIR=/Users/darnold/.matplotlib
loaded rc file /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data/matplotlibrc
matplotlib version 0.90.1
verbose.level helpful
interactive is False
units is True
platform is darwin
numerix numpy 1.0.4
font search path ['/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data/fonts/ttf', '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data/fonts/afm']
loaded ttfcache file /Users/darnold/.matplotlib/ttffont.cache
backend TkAgg version 8.4
Could not match Bitstream Vera Serif, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif, normal, normal.
Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf
Could not match Bitstream Vera Serif, New Century Schoolbook, Century Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times New Roman, Times, Palatino, Charter, serif, normal, normal.
Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf
Exception in Tkinter callback
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-tk/Tkinter.py", line 1403, in __call__
    return self.func(*args)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line 151, in resize
    self.show()
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line 154, in draw
    FigureCanvasAgg.draw(self)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/backends/backend_agg.py", line 392, in draw
    self.figure.draw(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/figure.py", line 601, in draw
    for a in self.axes: a.draw(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/axes.py", line 1286, in draw
    a.draw(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/text.py", line 410, in draw
    bbox, info = self._get_layout(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/text.py", line 255, in _get_layout
    line, self._fontproperties, ismath=self.is_math_text())
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/backends/backend_agg.py", line 246, in get_text_width_height
    s, self.dpi.get(), prop.get_size_in_points())
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mathtext.py", line 1587, in __call__
    handler.expr.set_size_info(fontsize, dpi)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mathtext.py", line 1203, in set_size_info
self.elements[0].set_size_info(self._scale*fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1114, in set_size_info Element.set_size_info(self, fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1027, in set_size_info element.set_size_info(self.fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1114, in set_size_info Element.set_size_info(self, fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1027, in set_size_info element.set_size_info(self.fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1114, in set_size_info Element.set_size_info(self, fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1027, in set_size_info element.set_size_info(self.fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 1116, in set_size_info self.font, self.sym, self.fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 597, in get_metrics self._get_info(font, sym, fontsize, dpi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/matplotlib/mathtext.py", line 616, in _get_info raise ValueError('unrecognized symbol "%s"' % sym) ValueError: unrecognized symbol "\sin" From mjakubik at ta3.sk Fri Mar 14 02:44:12 2008 From: mjakubik at ta3.sk (Marian Jakubik) Date: Fri, 14 Mar 2008 07:44:12 +0100 Subject: [SciPy-user] Matplotlib improper rendering of latex In-Reply-To: References: Message-ID: <20080314074412.3f2eb96d@jakubik.ta3.sk> Hello, the title of your figure will be correct using this: ... title(r'$y=2 sin(2\pi(x-\pi/4))$') ... Best, Marian D?a Thu, 13 Mar 2008 23:32:07 -0700 David Arnold nap?sal: > All, > > I have: > > #! /usr/local/bin/python > > from pylab import * > > x=arange(0,2,0.01) > y=2*sin(2*pi*(x-pi/4)) > > plot(x,y) > xlabel('x-axis') > ylabel('y-axis') > title(r'$y=2\sin (2\pi(x-\pi/4))$') > > show() > > Running gives me this error: > > code $ python simple.py --verbose-helpful > matplotlib data path /Library/Frameworks/Python.framework/Versions/ > 2.5/lib/python2.5/site-packages/matplotlib/mpl-data > $HOME=/Users/darnold > CONFIGDIR=/Users/darnold/.matplotlib > loaded rc file /Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mpl-data/matplotlibrc > matplotlib version 0.90.1 > verbose.level helpful > interactive is False > units is True > platform is darwin > numerix numpy 1.0.4 > font search path ['/Library/Frameworks/Python.framework/Versions/2.5/ > lib/python2.5/site-packages/matplotlib/mpl-data/fonts/ttf', '/Library/ > Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/ > matplotlib/mpl-data/fonts/afm'] > loaded ttfcache file /Users/darnold/.matplotlib/ttffont.cache > backend TkAgg version 8.4 > Could not match Bitstream Vera Serif, New Century Schoolbook, Century > Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times > New Roman, Times, Palatino, Charter, serif, normal, normal. 
> Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf > Could not match Bitstream Vera Serif, New Century Schoolbook, Century > Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times > New Roman, Times, Palatino, Charter, serif, normal, normal. > Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf > Exception in Tkinter callback > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/lib-tk/Tkinter.py", line 1403, in __call__ > return self.func(*args) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line > 151, in resize > self.show() > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line > 154, in draw > FigureCanvasAgg.draw(self) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/backends/backend_agg.py", line > 392, in draw > self.figure.draw(renderer) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/figure.py", line 601, in draw > for a in self.axes: a.draw(renderer) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/axes.py", line 1286, in draw > a.draw(renderer) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/text.py", line 410, in draw > bbox, info = self._get_layout(renderer) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/text.py", line 255, in _get_layout > line, self._fontproperties, ismath=self.is_math_text()) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/backends/backend_agg.py", line > 246, in get_text_width_height > s, self.dpi.get(), prop.get_size_in_points()) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1587, in __call__ > handler.expr.set_size_info(fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1203, in > set_size_info > self.elements[0].set_size_info(self._scale*fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1114, in > set_size_info > Element.set_size_info(self, fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1027, in > set_size_info > element.set_size_info(self.fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1114, in > set_size_info > Element.set_size_info(self, fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1027, in > set_size_info > element.set_size_info(self.fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1114, in > set_size_info > Element.set_size_info(self, fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1027, in > 
set_size_info > element.set_size_info(self.fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 1116, in > set_size_info > self.font, self.sym, self.fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 597, in > get_metrics > self._get_info(font, sym, fontsize, dpi) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/matplotlib/mathtext.py", line 616, in _get_info > raise ValueError('unrecognized symbol "%s"' % sym) > ValueError: unrecognized symbol "\sin" > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Fri Mar 14 02:56:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Mar 2008 01:56:44 -0500 Subject: [SciPy-user] Matplotlib improper rendering of latex In-Reply-To: References: Message-ID: <3d375d730803132356g18760c76t64d20ca988a792dc@mail.gmail.com> Please ask on the matplotlib mailing list. https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwarnold45 at suddenlink.net Fri Mar 14 03:00:31 2008 From: dwarnold45 at suddenlink.net (David Arnold) Date: Fri, 14 Mar 2008 00:00:31 -0700 Subject: [SciPy-user] Matplotlib improper rendering of latex In-Reply-To: <20080314074412.3f2eb96d@jakubik.ta3.sk> References: <20080314074412.3f2eb96d@jakubik.ta3.sk> Message-ID: <24224053-1E47-41F2-ADCA-8EA8CE4A7D1B@suddenlink.net> True, but the tutorial at: http://matplotlib.sourceforge.net/tutorial.html Indicates that \sin should work. D. On Mar 13, 2008, at 11:44 PM, Marian Jakubik wrote: > Hello, > > the title of your figure will be correct using this: > > ... > title(r'$y=2 sin(2\pi(x-\pi/4))$') > ... > > Best, > Marian > > > D?a Thu, 13 Mar 2008 23:32:07 -0700 > David Arnold nap?sal: > >> All, >> >> I have: >> >> #! /usr/local/bin/python >> >> from pylab import * >> >> x=arange(0,2,0.01) >> y=2*sin(2*pi*(x-pi/4)) >> >> plot(x,y) >> xlabel('x-axis') >> ylabel('y-axis') >> title(r'$y=2\sin (2\pi(x-\pi/4))$') >> >> show() >> >> Running gives me this error: >> >> code $ python simple.py --verbose-helpful >> matplotlib data path /Library/Frameworks/Python.framework/Versions/ >> 2.5/lib/python2.5/site-packages/matplotlib/mpl-data >> $HOME=/Users/darnold >> CONFIGDIR=/Users/darnold/.matplotlib >> loaded rc file /Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mpl-data/matplotlibrc >> matplotlib version 0.90.1 >> verbose.level helpful >> interactive is False >> units is True >> platform is darwin >> numerix numpy 1.0.4 >> font search path ['/Library/Frameworks/Python.framework/Versions/2.5/ >> lib/python2.5/site-packages/matplotlib/mpl-data/fonts/ttf', '/ >> Library/ >> Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/ >> matplotlib/mpl-data/fonts/afm'] >> loaded ttfcache file /Users/darnold/.matplotlib/ttffont.cache >> backend TkAgg version 8.4 >> Could not match Bitstream Vera Serif, New Century Schoolbook, Century >> Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times >> New Roman, Times, Palatino, Charter, serif, normal, normal. 
>> Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf >> Could not match Bitstream Vera Serif, New Century Schoolbook, Century >> Schoolbook L, Utopia, ITC Bookman, Bookman, Nimbus Roman No9 L, Times >> New Roman, Times, Palatino, Charter, serif, normal, normal. >> Returning /Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf >> Exception in Tkinter callback >> Traceback (most recent call last): >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/lib-tk/Tkinter.py", line 1403, in __call__ >> return self.func(*args) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line >> 151, in resize >> self.show() >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line >> 154, in draw >> FigureCanvasAgg.draw(self) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/backends/backend_agg.py", line >> 392, in draw >> self.figure.draw(renderer) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/figure.py", line 601, in draw >> for a in self.axes: a.draw(renderer) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/axes.py", line 1286, in draw >> a.draw(renderer) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/text.py", line 410, in draw >> bbox, info = self._get_layout(renderer) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/text.py", line 255, in _get_layout >> line, self._fontproperties, ismath=self.is_math_text()) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/backends/backend_agg.py", line >> 246, in get_text_width_height >> s, self.dpi.get(), prop.get_size_in_points()) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1587, in >> __call__ >> handler.expr.set_size_info(fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1203, in >> set_size_info >> self.elements[0].set_size_info(self._scale*fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1114, in >> set_size_info >> Element.set_size_info(self, fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1027, in >> set_size_info >> element.set_size_info(self.fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1114, in >> set_size_info >> Element.set_size_info(self, fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1027, in >> set_size_info >> element.set_size_info(self.fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/site-packages/matplotlib/mathtext.py", line 1114, in >> set_size_info >> Element.set_size_info(self, fontsize, dpi) >> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> 
python2.5/site-packages/matplotlib/mathtext.py", line 1027, in
>> set_size_info
>> element.set_size_info(self.fontsize, dpi)
>> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mathtext.py", line 1116, in set_size_info
>> self.font, self.sym, self.fontsize, dpi)
>> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mathtext.py", line 597, in get_metrics
>> self._get_info(font, sym, fontsize, dpi)
>> File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/mathtext.py", line 616, in _get_info
>> raise ValueError('unrecognized symbol "%s"' % sym)
>> ValueError: unrecognized symbol "\sin"

From nmelgarejodiaz at gmail.com Sat Mar 15 13:30:20 2008
From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz)
Date: Sat, 15 Mar 2008 18:30:20 +0100
Subject: [SciPy-user] Zeros
Message-ID:

Good afternoon to everyone!!

I want to make a column vector using numpy, like in Matlab when you do zeros(N,1). I've been doing it with zeros(N) and I already get an array. It may be the same, but the results I get are different with the formulas involved.

Thanks ;))

********Natali********

From stefan at sun.ac.za Sat Mar 15 13:57:06 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sat, 15 Mar 2008 10:57:06 -0700
Subject: [SciPy-user] Zeros
In-Reply-To:
References:
Message-ID: <9457e7c80803151057n335aaeeam59b004a9bffbb39d@mail.gmail.com>

Hey Natali

On Sat, Mar 15, 2008 at 10:30 AM, Natali Melgarejo Diaz wrote:
> I want to make a column vector using numpy, like in Matlab when you do
> zeros(N,1). I've been doing it with zeros(N) and I already get an array. It
> may be the same, but the results I get are different with the formulas
> involved.

You can either create the array with the extra dimension, i.e.

zeros([N,1])

or you can use the matrix class, which always returns arrays with a minimum of two dimensions. I recommend the first approach.

Regards
Stéfan

From hoytak at gmail.com Sat Mar 15 13:58:54 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Sat, 15 Mar 2008 10:58:54 -0700
Subject: [SciPy-user] Zeros
In-Reply-To:
References:
Message-ID: <4db580fd0803151058j63d7032dhc34de1194c64d982@mail.gmail.com>

Don't know if this helps, but it might:

In Matlab, all the typical operations assume the variables are matrices and try to do matrix multiplication on them. Array-type operations are done using .*, .^, etc. In numpy and scipy, however, there are two types, arrays and matrices, which you can easily convert back and forth from using mat(X) and M.A or array(M). Array operations are, roughly speaking, element wise, and matrix operations do what you'd expect. It takes a little getting used to, but I find that distinguishing the types has a lot of advantages.

So to get the behavior you're expecting, you probably need to declare a column matrix, so try mat(zeros(N)).T (the .T to transpose it from a row to a column).

--Hoyt
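To make the shapes concrete, here is a quick sketch of both suggestions (the shapes in the comments are what numpy itself reports):

from numpy import zeros, mat

N = 4
a = zeros(N)          # shape (4,)   -- 1-d, neither a row nor a column
b = zeros([N, 1])     # shape (4, 1) -- a true column, like Matlab's zeros(N,1)
c = mat(zeros(N)).T   # shape (4, 1) -- the matrix-class version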
On Sat, Mar 15, 2008 at 10:30 AM, Natali Melgarejo Diaz wrote:
> Good afternoon to everyone!!
>
> I want to make a column vector using numpy, like in Matlab when you do
> zeros(N,1). I've been doing it with zeros(N) and I already get an array. It
> may be the same, but the results I get are different with the formulas
> involved.
> Thanks ;))
>
> ********Natali********

From dmitrey.kroshko at scipy.org Sat Mar 15 15:26:07 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 15 Mar 2008 21:26:07 +0200
Subject: [SciPy-user] [optimization] OpenOpt 0.17
Message-ID: <47DC22CF.5080000@scipy.org>

Greetings,
We're pleased to announce:
OpenOpt 0.17 (release), free (license: BSD) optimization framework for Python language programmers, is available for download.

Changes since previous release 0.15 (December 15):

* new classes: GLP (global problem), MMP (mini-max problem)
* several new solvers written: goldenSection, nsmm
* some more solvers connected: scipy_slsqp, bvls, galileo
* possibility to change default solver parameters
* user-defined callback functions
* changes in auto derivatives check
* "noise" parameter for noisy functions
* some changes to NLP/NSP solver ralg
* some changes in graphical output, initial estimations xlim, ylim
* scaling
* some bugfixes

Newsline:
http://openopt.blogspot.com/

Homepage:
http://scipy.org/scipy/scikits/wiki/OpenOpt

From matthieu.brucher at gmail.com Sat Mar 15 15:33:32 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 15 Mar 2008 20:33:32 +0100
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To: <47DC22CF.5080000@scipy.org>
References: <47DC22CF.5080000@scipy.org>
Message-ID:

Hi,

I have some questions:
- why was the golden section reimplemented? OpenOpt has had a generic framework that provides a golden section line search for... several months
- were the setup.py problems solved? that is, is the sys.path still changed when importing the scikit?

Matthieu

2008/3/15, dmitrey :
> Greetings,
> We're pleased to announce:
> OpenOpt 0.17 (release), free (license: BSD) optimization framework for
> Python language programmers, is available for download.
>
> Changes since previous release 0.15 (December 15):
>
> * new classes: GLP (global problem), MMP (mini-max problem)
> * several new solvers written: goldenSection, nsmm
> * some more solvers connected: scipy_slsqp, bvls, galileo
> * possibility to change default solver parameters
> * user-defined callback functions
> * changes in auto derivatives check
> * "noise" parameter for noisy functions
> * some changes to NLP/NSP solver ralg
> * some changes in graphical output, initial estimations xlim, ylim
> * scaling
> * some bugfixes
>
> Newsline:
> http://openopt.blogspot.com/
>
> Homepage:
> http://scipy.org/scipy/scikits/wiki/OpenOpt

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
From dmitrey.kroshko at scipy.org Sat Mar 15 15:52:13 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 15 Mar 2008 21:52:13 +0200
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To:
References: <47DC22CF.5080000@scipy.org>
Message-ID: <47DC28ED.8060807@scipy.org>

Matthieu Brucher wrote:
> Hi,
>
> I have some questions:
> - why was the golden section reimplemented? OpenOpt has had a generic
> framework that provides a golden section line search for... several months

I had noticed it too late. But the solver is just several lines of code, so not much effort was wasted.

> - were the setup.py problems solved? that is, is the sys.path still
> changed when importing the scikit?

Yes, it still is.

Regards, D.

From zunzun at zunzun.com Sat Mar 15 16:03:05 2008
From: zunzun at zunzun.com (James Phillips)
Date: Sat, 15 Mar 2008 15:03:05 -0500
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To: <47DC22CF.5080000@scipy.org>
References: <47DC22CF.5080000@scipy.org>
Message-ID: <268756d30803151303w22a37c18k9c1d2cbcf5e6c3d@mail.gmail.com>

Does this have any parallel computation code? I was recently given access to a 4-core server, so I'm curious to know if it might speed up the OpenOpt calculations if multiple SMP CPU cores are available for parallel processing.

James Phillips
http://zunzun.com

On 3/15/08, dmitrey wrote:
>
> Greetings,
> We're pleased to announce:
> OpenOpt 0.17 (release), free (license: BSD) optimization framework for
> Python language programmers, is available for download.
>

From dmitrey.kroshko at scipy.org Sat Mar 15 16:22:49 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 15 Mar 2008 22:22:49 +0200
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To: <268756d30803151303w22a37c18k9c1d2cbcf5e6c3d@mail.gmail.com>
References: <47DC22CF.5080000@scipy.org> <268756d30803151303w22a37c18k9c1d2cbcf5e6c3d@mail.gmail.com>
Message-ID: <47DC3019.4020108@scipy.org>

Unfortunately, AFAIK no (however, maybe some C or Fortran written solvers that are connected to OO can somehow be turned to use parallel CPUs). The MATLAB OpenOpt version could calculate the 1st derivative df numerically via a parallel loop (parfor). However, it can benefit costly funcs only. I intended to implement something like that in Python, but I don't know which library is better, and those that I had seen have (as for me) inconvenient syntax & complicated documentation. If someone is familiar with a Python library for parallel calculation, he could easily use it to calculate df, dc, dh numerically ((func(x + dx[i]) - func(x)) / dx[i]).

Regards, D.

James Phillips wrote:
> Does this have any parallel computation code? I was
> recently given access to a 4-core server, so I'm
> curious to know if it might speed up the OpenOpt
> calculations if multiple SMP CPU cores are available
> for parallel processing.
>
> James Phillips
> http://zunzun.com
>
> On 3/15/08, *dmitrey* wrote:
>
> Greetings,
> We're pleased to announce:
> OpenOpt 0.17 (release), free (license: BSD) optimization framework for
> Python language programmers, is available for download.
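The gradient computation dmitrey describes is embarrassingly parallel, which even a serial sketch makes clear: every component of the forward-difference gradient below is independent, so the loop body is exactly what one would hand to a pool of workers (the quadratic test function is only an illustration):

import numpy as np

def num_grad(func, x, h=1e-6):
    f0 = func(x)
    g = np.empty(len(x))
    for i in range(len(x)):   # iterations are independent -> parallelizable
        xh = x.copy()
        xh[i] += h
        g[i] = (func(xh) - f0) / h
    return g

print num_grad(lambda v: (v**2).sum(), np.array([1.0, 2.0, 3.0]))   # ~ [2. 4. 6.]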
From matthieu.brucher at gmail.com Sat Mar 15 16:32:57 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 15 Mar 2008 21:32:57 +0100
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To: <47DC3019.4020108@scipy.org>
References: <47DC22CF.5080000@scipy.org> <268756d30803151303w22a37c18k9c1d2cbcf5e6c3d@mail.gmail.com> <47DC3019.4020108@scipy.org>
Message-ID:

2008/3/15, dmitrey :
> Unfortunately, AFAIK no (however, maybe some C or Fortran written
> solvers that are connected to OO can somehow be turned to use parallel
> CPUs). The MATLAB OpenOpt version could calculate the 1st derivative df
> numerically via a parallel loop (parfor). However, it can benefit costly
> funcs only. I intended to implement something like that in Python, but I
> don't know which library is better, and those that I had seen have (as for
> me) inconvenient syntax & complicated documentation. If someone is familiar
> with a Python library for parallel calculation, he could easily use it to
> calculate df, dc, dh numerically ((func(x + dx[i]) - func(x)) / dx[i]).
> Regards, D.

This is a good idea; it won't be much trouble to add a new numerical function class in the generic framework. I'll think about this ;)

Matthieu

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From zunzun at zunzun.com Sat Mar 15 16:40:01 2008
From: zunzun at zunzun.com (James Phillips)
Date: Sat, 15 Mar 2008 15:40:01 -0500
Subject: [SciPy-user] [optimization] OpenOpt 0.17
In-Reply-To:
References: <47DC22CF.5080000@scipy.org> <268756d30803151303w22a37c18k9c1d2cbcf5e6c3d@mail.gmail.com> <47DC3019.4020108@scipy.org>
Message-ID: <268756d30803151340g34331999ue1e02dc001125ba@mail.gmail.com>

Take a peek at http://www.parallelpython.com/

James

On 3/15/08, Matthieu Brucher wrote:
>
> This is a good idea; it won't be much trouble to add a new numerical
> function class in the generic framework. I'll think about this ;)
>
> Matthieu
> --
> French PhD student
> Website : http://matthieu-brucher.developpez.com/
> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn : http://www.linkedin.com/in/matthieubrucher

From peridot.faceted at gmail.com Sat Mar 15 19:19:24 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sat, 15 Mar 2008 19:19:24 -0400
Subject: [SciPy-user] Zeros
In-Reply-To: <4db580fd0803151058j63d7032dhc34de1194c64d982@mail.gmail.com>
References: <4db580fd0803151058j63d7032dhc34de1194c64d982@mail.gmail.com>
Message-ID:

On 15/03/2008, Hoyt Koepke wrote:
> Don't know if this helps, but it might:
>
> In Matlab, all the typical operations assume the variables are
> matrices and try to do matrix multiplication on them. Array-type
> operations are done using .*, .^, etc.
> In numpy and scipy, however, there are two types, arrays and matrices, which
> you can easily convert back and forth from using mat(X) and M.A or array(M).
> Array operations are, roughly speaking, element wise, and matrix operations
> do what you'd expect. It takes a little getting used to, but I find that
> distinguishing the types has a lot of advantages.
>
> So to get the behavior you're expecting, you probably need to declare
> a column matrix, so try mat(zeros(N)).T (the .T to transpose it from
> a row to a column).

I would put it differently. Numpy arrays do not always have two or more dimensions. A one-dimensional array is neither a row nor a column vector, it's just an array of numbers. Also, the operations on arrays are all elementwise, like Matlab's .* and .^ operators. If you want matrix multiplication, use the function dot().

If you're doing so many matrix operations that all those dot()s become cumbersome, you may want to consider using the matrix() wrapper around arrays; without affecting the underlying data, this rebinds * to do matrix multiplication, and enforces at-least-two-dimensionality. But like all wrappers, this is imperfect, and you can expect to stumble over a few inconveniences (functions that inadvertently convert your matrices to arrays, for example), so I recommend getting used to arrays first, and only switching to matrices if absolutely necessary.

Anne

From s.mientki at ru.nl Sun Mar 16 09:12:09 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Sun, 16 Mar 2008 14:12:09 +0100
Subject: [SciPy-user] array slice problems ...
Message-ID: <47DD1CA9.10405@ru.nl>

hello,

From any input (list, 1-dim array, 2-dim array), I need to take a slice and pass it as a 2-dimensional array. But I'm having trouble with taking slices from such an array.

If I have a 2-dimensional array [[1 2 3 4]], then a_array [:][1:3] returns an empty 2-dimensional array, while I think the expression means:
- take all elements along the first axis
- then take from each of the found elements, the subelements 1 up to 3
which certainly shouldn't be empty???

I also tried with a real 2-dimensional array, with the same results :-(
What am I doing wrong? Maybe someone can enlighten me about array slices. (a copy of the program is below)

thanks,
Stef Mientki

from numpy import *
## VERSION 1.0.3.dev3722

# whatever I get, I must construct a 2-dimensional array
# where the first axis is the signal
# and the second axis are the datasamples of that signal
a_list = [ 1,2,3,4 ]
a_array = array ( a_list )
a_array = a_array.reshape ( len ( a_array ), 1 )

r = a_array
print '2-dim array',r.shape, r.ndim, r
## 1-dim array (1, 4) 2 [[1 2 3 4]]

r = a_array [:]
print '2-dim array [:]',r.shape, r.ndim, r
## 1-dim array [:] (1, 4) 2 [[1 2 3 4]]

# DOESN'T WORK
r = a_array [:][1:3]
print '2-dim array [:][1:3]',r.shape, r.ndim, r
## 1-dim array [:][1:3] (0, 4) 2 []

# WORKS
b = a_array [0]
b = b [1:3]
b = b.reshape ( 1, len ( b ) )
r = b
print '1-dim array special',r.shape, r.ndim, r
## 1-dim array special (1, 2) 2 [[2 3]]
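The two spellings in that program do different things, which a two-row example shows immediately (the output shapes are noted in the comments):

from numpy import array

a = array([[1, 2, 3, 4], [2, 3, 4, 5]])
print a[:][1:3]   # a[:] is just a copy, so this slices *rows* 1:3 -> [[2 3 4 5]]
print a[:, 1:3]   # one indexing step over both axes -> [[2 3] [3 4]]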
From aisaac at american.edu Sun Mar 16 09:34:30 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 16 Mar 2008 09:34:30 -0400
Subject: [SciPy-user] array slice problems ...
In-Reply-To: <47DD1CA9.10405@ru.nl>
References: <47DD1CA9.10405@ru.nl>
Message-ID:

On Sun, 16 Mar 2008, Stef Mientki apparently wrote:
> If I have a 2-dimensional array [[1 2 3 4]]
> then a_array [:][1:3] returns an empty 2-dimensional array.

This is the same with lists. You initially copy the list, which just has one element at index 0. You then ask for the elements from 1:3, and of course there aren't any.

Cheers,
Alan Isaac

From se.berg at stud.uni-goettingen.de Sun Mar 16 09:54:53 2008
From: se.berg at stud.uni-goettingen.de (Sebastian Stephan Berg)
Date: Sun, 16 Mar 2008 14:54:53 +0100
Subject: [SciPy-user] array slice problems ...
In-Reply-To: <47DD1CA9.10405@ru.nl>
References: <47DD1CA9.10405@ru.nl>
Message-ID: <1205675693.5495.12.camel@sebook>

Hello,

maybe I am doing something wrong or misunderstood it, but with your code I get:

a_array = [[1], [2], [3], [4]]

(so the second part does not actually work), while I think you want the array to be shaped [[1,2,3,4]]? But anyway, that second array can be sliced with [:,1:3] as you want, giving you all of the first dimension and from the second dimension only items 1:3. So I think what you want is simply not to use two separate slicings. Another example:

a = array([[1,2,3,4], [2,3,4,5]])
a[:,1:3]

giving

array([[2,3],
       [3,4]])

This of course only works with arrays and not with lists.

Regards,

Sebastian

From berthe.loic at gmail.com Sun Mar 16 10:24:36 2008
From: berthe.loic at gmail.com (LB)
Date: Sun, 16 Mar 2008 07:24:36 -0700 (PDT)
Subject: [SciPy-user] Pb with scipy.interpolate.spalde and numpy.float64
Message-ID: <50d2379f-bd20-4785-8a03-8345725160ac@a1g2000hsb.googlegroups.com>

Hi,

scipy.interpolate.spalde accepts a float as first argument but does not work with numpy.float64:

With python's float:

In [166]: c, type(c)
Out[166]: (0.52752752752752752, )
In [167]: x, y = interpolate.spalde(c, tck)

=> no pb.

With numpy.float64:

In [168]: b, type(b)
Out[168]: (0.52752752752752752, )
In [169]: x, y = interpolate.spalde(b, tck)
---------------------------------------------------------------------------
Traceback (most recent call last)

/home/loic/Python/test_profil/ in ()

/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/interpolate/fitpack.py in spalde(x, tck)
    575     parametric = False
    576     if parametric:
--> 577         return _ntlist(map(lambda c,x=x,t=t,k=k:spalde(x, [t,c,k]),c))
    578     else:
    579         try: x=x.tolist()

/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/interpolate/fitpack.py in (c, x, t, k)
    575     parametric = False
    576     if parametric:
--> 577         return _ntlist(map(lambda c,x=x,t=t,k=k:spalde(x, [t,c,k]),c))
    578     else:
    579         try: x=x.tolist()

/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/interpolate/fitpack.py in spalde(x, tck)
    581     try: x=list(x)
    582     except: x=[x]
--> 583     if len(x)>1:
    584         return map(lambda x,tck=tck:spalde(x,tck),x)
    585     d,ier=_fitpack._spalde(t,c,k,x[0])

: object of type 'float' has no len()

It seems to come from the method 'tolist' of numpy.float64, which does not return a list:

In [174]: c.tolist()
---------------------------------------------------------------------------
Traceback (most recent call last)

/home/loic/Python/test_profil/ in ()

: 'float' object has no attribute 'tolist'

In [175]: b.tolist()
Out[175]: 0.52752752752752752

So my question is: why does numpy.float64 have a tolist method, and why does it not return a list?

For information,

In [179]: numpy.__version__, scipy.__version__
Out[179]: ('1.0.5.dev4854', '0.7.0.dev4004')

--
LB
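Until that is fixed in fitpack, the straightforward workaround implied by LB's session is to hand spalde a builtin float (a sketch, assuming b and tck are as in the session above):

x, y = interpolate.spalde(float(b), tck)   # cast numpy.float64 -> float first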
From stef.mientki at gmail.com Sun Mar 16 13:27:37 2008
From: stef.mientki at gmail.com (Stef Mientki)
Date: Sun, 16 Mar 2008 18:27:37 +0100
Subject: [SciPy-user] array slice problems ...
In-Reply-To: <1205675693.5495.12.camel@sebook>
References: <47DD1CA9.10405@ru.nl> <1205675693.5495.12.camel@sebook>
Message-ID: <47DD5889.8000500@gmail.com>

thanks Alan and Sebastian,

Sebastian Stephan Berg wrote:
> Hello,
>
> maybe I am doing something wrong or misunderstood it, but with your code
> I get:
>
> a_array = [[1], [2], [3], [4]]
>
> (so the second part does not actually work), while I think you want the
> array to be shaped [[1,2,3,4]]?
> But anyway, that second array can be sliced with [:,1:3] as you want,
> giving you all of the first dimension and from the second dimension only
> items 1:3. So I think what you want is simply not to use two separate
> slicings. Another example:
>
> a = array([[1,2,3,4], [2,3,4,5]])
> a[:,1:3] giving array([[2,3], [3,4]])
>

I didn't know there was a difference between separate slices like [:][1:3] and a double dimension slice like [:,1:3], thanks. btw Alan mentioned that separate slices take a copy (just like lists), while I thought arrays were never copied unless you explicitly tell them to??

> This of course only works with arrays and not with lists.
>

yes, I knew that ;-)

cheers,
Stef

From bevan07 at gmail.com Sun Mar 16 19:01:49 2008
From: bevan07 at gmail.com (bevan)
Date: Sun, 16 Mar 2008 23:01:49 +0000 (UTC)
Subject: [SciPy-user] timeseries and maskedarrays
Message-ID:

Hello,

I recently posted on the numpy forum looking for a solution to creating totals and averages for time series of rainfall. It was suggested to me that I try the timeseries package.

I have managed to get it working (struggling due to limited abilities rather than any issues with the package) - thanks by the way to the developers.

However, when I run my code I get the following warning:

C:\Python25\lib\site-packages\scipy\sandbox\maskedarray\core.py:1521: UserWarning: Warning: converting a masked element to nan.
warnings.warn("Warning: converting a masked element to nan.")

Is this an issue?

The main question I have is: How would I create an average on values based on month, that is, sum the rainfall from daily values to monthly totals (done), then average all the Januarys, Februarys etc in the timeseries? Also, are there any other issues with my code (below) that are likely to trip me up later?

Thanks for your time.

import time
import numpy
import maskedarray as MA
import timeseries as TS

def readTSFtimeseries(filename):
    """
    # PURPOSE: read a HYDROL .tsf file into a timeseries array.
    #
    # INPUT:
    #20/11/2003 @ 00:00:00  Q255 T5
    #21/11/2003 @ 00:00:00  0.000000 Q30 T5
    #............................................
    #07/05/2008 @ 00:00:00  2.500000 Q10 T5
    #
    # OUTPUT: a timeseries array
    #
    # EXAMPLE: >>> data = readTSFtimeseries(r'B:\Python\test.tsf')
    #          >>> x = data.dates
    #          >>> y = data.series
    """
    MYFILE = open(filename, "r")
    datafile = MYFILE.readlines()
    MYFILE.close()

    DateList=[]
    Rain_mmList=[]
    QualityList=[]
    TypeList=[]
    jd=[]

    for datalines in datafile:
        dataflds = datalines.split()
        datefld = time.strptime(dataflds[0]+'_'+dataflds[2], '%d/%m/%Y_%H:%M:%S')
        jd.append(datefld[7])
        DateList.append(TS.Date('D',year=datefld[0],month=datefld[1],day=datefld[2]))

        if len(dataflds) == 6:
            Rain_mmList.append(float(dataflds[3]))
            QualityList.append(dataflds[4])
            TypeList.append(dataflds[5])

        if len(dataflds) == 5:
            #Rain_mmList.append(float(-9.99))
            Rain_mmList.append(TS.tsmasked)
            QualityList.append(dataflds[3])
            TypeList.append(dataflds[4])

    DateArr = TS.DateArray(dates=DateList, freq='D', copy=False)
    data = TS.time_series(Rain_mmList, DateArr)
    print data.dates.size, data.series.size
    return(data)

Raindata = readTSFtimeseries(r"C:\Documents and Settings\bevanj\Desktop\rain.tsf")

import pylab
import matplotlib
from timeseries import plotlib as TSPL

MonRaindata = Raindata.convert('monthly', MA.sum)
print MonRaindata
print Raindata.convert('yearly', MA.sum)

#fig1=TSPL.tsfigure()
#fplt1=fig1.add_tsplot(111)
#fplt1.tsplot(Raindata,'-')
#fplt1.tsplot(MonRaindata,'-')
#pylab.show()

From amg at iri.columbia.edu Sun Mar 16 19:26:31 2008
From: amg at iri.columbia.edu (Arthur M. Greene)
Date: Sun, 16 Mar 2008 19:26:31 -0400
Subject: [SciPy-user] timeseries and maskedarrays
In-Reply-To:
References:
Message-ID: <47DDACA7.8020103@iri.columbia.edu>

For manipulating climate data you might have a look at CDAT:

http://www-pcmdi.llnl.gov/software-portal/cdat/

It is a fairly large package (or rather, set of packages) but includes tools for extracting climatologies, computing anomalies and various other manipulations of time axes. There are also extensive plotting routines...

Cheers,

Arthur

bevan wrote:
> Hello,
>
> I recently posted on the numpy forum looking for a solution to creating totals
> and averages for time series of rainfall. It was suggested to me that I try
> the timeseries package.
>
> I have managed to get it working (struggling due to limited abilities rather
> than any issues with the package) - thanks by the way to the developers.
>
> However, when I run my code I get the following warning:
> C:\Python25\lib\site-packages\scipy\sandbox\maskedarray\core.py:1521:
> UserWarning: Warning: converting a masked element to nan.
> warnings.warn("Warning: converting a masked element to nan.")
>
> Is this an issue?
> The main question I have is:
> How would I create an average on values based on month, that is sum the
> rainfall from daily values to monthly totals (done), then average all the
> Januarys, Februarys etc in the timeseries?
> Also any other issues with my code (below) that is likely to trip me up later?
>
> Thanks for your time.
>
>
> import time
> import numpy
> import maskedarray as MA
> import timeseries as TS
>
> def readTSFtimeseries(filename):
> """
> # PURPOSE: read a HYDROL .tsf file into a timeseries array.
> #
> # INPUT:
> #20/11/2003 @ 00:00:00  Q255 T5
> #21/11/2003 @ 00:00:00  0.000000 Q30 T5
> #............................................
> #07/05/2008 @ 00:00:00 2.500000 Q10 T5 > # > # OUTPUT: a timeseries array > # > # EXAMPLE: >>> data = readTSFtimeseries(r'B:\Python\test.tsf') > # >>> x = data.dates > # >>> y = data.series > """ > > MYFILE = open(filename, "r") > datafile = MYFILE.readlines() > MYFILE.close() > > DateList=[] > Rain_mmList=[] > QualityList=[] > TypeList=[] > jd=[] > > for datalines in datafile: > dataflds = datalines.split() > datefld =time.strptime(dataflds[0]+'_'+dataflds[2], '%d/%m/%Y_%H:%M:%S') > jd.append(datefld[7]) > DateList.append(TS.Date('D',year=datefld[0],month=datefld[1],day=datefld > [2])) > > if len(dataflds) == 6: > Rain_mmList.append(float(dataflds[3])) > QualityList.append(dataflds[4]) > TypeList.append(dataflds[5]) > > if len(dataflds) == 5: > #Rain_mmList.append(float(-9.99)) > Rain_mmList.append(TS.tsmasked) > QualityList.append(dataflds[3]) > TypeList.append(dataflds[4]) > > > DateArr=TS.DateArray(dates=DateList,freq='D',copy =False) > data=TS.time_series(Rain_mmList,DateArr) > print data.dates.size,data.series.size > return(data) > > > Raindata =readTSFtimeseries(r"C:\Documents and > Settings\bevanj\Desktop\rain.tsf") > > import pylab > import matplotlib > from timeseries import plotlib as TSPL > > MonRaindata = Raindata.convert('monthly', MA.sum) > print MonRaindata > print Raindata.convert('yearly', MA.sum) > #fig1=TSPL.tsfigure()3 > #fplt1=fig1.add_tsplot(111) > #fplt1.tsplot(RainData,'-') > #fplt1.tsplot(MonRainData,'-') > #pylab.show() > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* Arthur M. Greene, Ph.D. The International Research Institute for Climate and Society (IRI) ......................... amg at iri dot columbia dot edu | http://iri.columbia.edu *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* From mattknox_ca at hotmail.com Sun Mar 16 20:46:27 2008 From: mattknox_ca at hotmail.com (Matt Knox) Date: Mon, 17 Mar 2008 00:46:27 +0000 (UTC) Subject: [SciPy-user] timeseries and maskedarrays References: Message-ID: > However, when I run my code i get the following warning: > C:\Python25\lib\site-packages\scipy\sandbox\maskedarray\core.py:1521: > UserWarning: Warning: converting a masked element to nan. > warnings.warn("Warning: converting a masked element to nan.") When you are loading the data for your time series, specify the mask separately. Instead of doing: > if len(dataflds) == 6: > Rain_mmList.append(float(dataflds[3])) > QualityList.append(dataflds[4]) > TypeList.append(dataflds[5]) > > if len(dataflds) == 5: > #Rain_mmList.append(float(-9.99)) > Rain_mmList.append(TS.tsmasked) > QualityList.append(dataflds[3]) > TypeList.append(dataflds[4]) try: > if len(dataflds) == 6: > Rain_mmList.append(float(dataflds[3])) > Rain_mask.append(False) > QualityList.append(dataflds[4]) > TypeList.append(dataflds[5]) > > if len(dataflds) == 5: > Rain_mmList.append(-9.99) > Rain_mask.append(True) > QualityList.append(dataflds[3]) > TypeList.append(dataflds[4]) and then specify "mask=Rain_mask" as another parameter when calling ts.time_series to create the TimeSeries object. Also, note that the new location for the timeseries scikit docs is: http://scipy.org/scipy/scikits/wiki/TimeSeries , and it relies on the latest version of numpy svn currently and doesn't use the separate scipy sandbox maskedarray anymore (since that has been merged into numpy). 
It has been in the scikits repository for a while now, so you are using a somewhat old version. Once numpy 1.0.5 is released it will have no external dependencies outside of numpy itself.

> The main question I have is:
> How would i create an average on values based on month, that is sum the rainfall from daily values to monthly totals (done), then average all the Januarys, Februarys etc in the timeseries?

try the following code (change the import statements if you are still using the older versions of the packages):

import numpy as np
from numpy import ma
import scikits.timeseries as ts

# create a simple test series at daily frequency
series = ts.time_series(np.random.normal(size=2000), start_date=ts.now('d'))

monthly_sums = series.convert('monthly', ma.sum)
jan_sums = monthly_sums[monthly_sums.month == 1]
jan_average = jan_sums.mean()
print jan_average

> Also any other issues with my code (below) that is likely to trip me up later?

you could slightly simplify the date loading using the following approach:

..........
from datetime import datetime as dt

_date = TS.Date(freq='d', datetime=dt.strptime(dataflds[0], '%d/%m/%Y'))

DateList = []
for datalines in datafile:
    dataflds = datalines.split()
    DateList.append(dt.strptime(dataflds[0], '%d/%m/%Y'))

DateArr = TS.date_array(DateList, freq='D')
..........

This may generate some warnings with the version of the package you are using right now, but should otherwise work fine - and the warnings are gone with the latest versions of the package.

- Matt

From ac1201 at gmail.com Sun Mar 16 22:45:12 2008
From: ac1201 at gmail.com (Andrew Charles)
Date: Mon, 17 Mar 2008 13:45:12 +1100
Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX?
In-Reply-To: <47D9B3B9.9030603@ideotrope.org>
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org>
Message-ID: 

On Fri, Mar 14, 2008 at 10:07 AM, Zane Selvans wrote:
> I don't know why they would have both chosen to write their own numerical solutions from scratch if something publicly available would have worked... but I guess it's possible. A lot of people don't seem to like to build on the work of others.

Well they were building on the work of others, in particular Runge, and Kutta :-)

Simple integrators like this aren't that difficult to implement (under 100 lines or so), and it is often easier to write code that works nicely with your model than to wrestle with someone else's interface. I recently ported an RK4 solver to python - it's designed for generic n-body ODEs, so it might not be suitable for what you're doing, and it's written in pure python, so is not especially optimised for speed, but if you're interested, drop me a line and I can send you a copy of the code.

Andrew Charles

From wnbell at gmail.com Sun Mar 16 23:13:36 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sun, 16 Mar 2008 22:13:36 -0500
Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX?
In-Reply-To: <47D9B3B9.9030603@ideotrope.org>
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org>
Message-ID: 

On Thu, Mar 13, 2008 at 6:07 PM, Zane Selvans wrote:
>
> The only reason I ask about Runge-Kutta specifically is I know two people who have the solution to my problem coded up already, one in Fortran, and one in C, and they both used a Runge-Kutta integrator. I want to open-source my model code, but it depends on their codes, and if I can't get them to let me publicize their work, I'm going to have to re-write it from scratch... unless someone else has already done it.
>
> I don't know why they would have both chosen to write their own numerical solutions from scratch if something publicly available would have worked... but I guess it's possible. A lot of people don't seem to like to build on the work of others.

As Andrew mentioned, writing a generic RK method (one that accepts user-defined derivatives) is pretty trivial. If you store the coefficients in the so-called "Butcher arrays" as arrays, as opposed to hard coding for a particular order (e.g. RK2 or RK4), then you can unify all (explicit) RK methods easily too. Furthermore, adaptive RK methods (e.g. RK45) are simply a post-process after a standard RK step that can be handled in a uniform fashion.

I suspect that you could write a generic RK scheme in 20-30 lines by using numpy for the vector operations.
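For instance, a rough sketch of the idea (untested, and the names here are purely illustrative, not from any library):

import numpy as np

def rk_step(f, t, y, h, A, b, c):
    # One explicit Runge-Kutta step of size h for y' = f(t, y), driven
    # entirely by the Butcher arrays: A is the s x s strictly lower
    # triangular coefficient matrix, b the weights, c the nodes.
    s = len(b)
    k = np.zeros((s,) + np.shape(y))
    for i in range(s):
        k[i] = f(t + c[i]*h, y + h*np.dot(A[i, :i], k[:i]))
    return y + h*np.dot(b, k)

# The classical RK4 scheme is then just one particular tableau:
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c = np.array([0.0, 0.5, 0.5, 1.0])

An embedded pair (e.g. the Fehlberg or Dormand-Prince coefficients) would only add a second weight vector for the error estimate.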
There's really no reason to > use someone else's C/Fortran code for this problem (and many reasons > not to). I have to disagree with this. Getting numerical code right, and robust, and bug-free, requires a lot of knowledge, skill and care. Even if it's only thirty lines. For example, suppose you implement embedded adaptive RK 4/5 integration. How do you design a test case that ensures you didn't get one of the coefficients wrong? After all, the adaptiveness means that even quite serious miscalculations often still converge to the right answer, they just take many more iterations than they should. Or what about stiff ODEs? Simple RK schemes will take *forever*. Better, if you can, to use an ODE solver somebody else has beaten the bugs out of over many years. Especially if all it takes is loading it in from scipy. Focus the effort that would have gone into debugging and testing your RK solver on solving the quirks of your own problem. Once in a while you'll run into a problem when you need more performance, or different abilities, or special techniques, that the library doesn't cover. But until that happens, why waste your time reinventing the wheel? Try the stock solutions first. If they don't work, understand why, and whether reimplementing them will solve the problem or just hide it. Anne P.S. your problem is really making me curious - are you really simulating self-gravitating viscoelastic material? Is this geophysics? Somehow that sounds more like a PDE type of problem than ODE; PDE solvers are much less standardized and I don't know that there are stock tools to attack them. -A From wnbell at gmail.com Mon Mar 17 02:43:28 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 17 Mar 2008 01:43:28 -0500 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? In-Reply-To: References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> Message-ID: On Mon, Mar 17, 2008 at 12:00 AM, Anne Archibald wrote: > I have to disagree with this. Getting numerical code right, and > robust, and bug-free, requires a lot of knowledge, skill and care. As does wrapping libraries. Keep in mind that only a minority of SciPy users/contributors will ever know SWIG or f2py, etc. Keeping things in Python greatly enhances accessibility of the implementation. > Even if it's only thirty lines. For example, suppose you implement > embedded adaptive RK 4/5 integration. How do you design a test case > that ensures you didn't get one of the coefficients wrong? After all, > the adaptiveness means that even quite serious miscalculations often > still converge to the right answer, they just take many more > iterations than they should. How do you know that you didn't introduce a subtle error when wrapping the library? In either case you need comprehensive unit tests. I don't see how this is a win for existing codes. > Or what about stiff ODEs? Simple RK schemes will take *forever*. Don't move the goalposts, the OP wanted an RK method. In the case of stiff solvers, which can be considerably more complicated, wrapping an existing code may prove to be the better option. > Better, if you can, to use an ODE solver > somebody else has beaten the bugs out of over many years. Especially > if all it takes is loading it in from scipy. Wrapping a library and "loading it in from scipy" is not always a trivial exercise. What if the interface is a poor match for wrapping? 
What if the library only supports double precision FP and you want singles or extended precision? What if the callback procedure is tedious/inflexible? What if the library doesn't support RK23 and that's what you really want? What if the library doesn't provide all the information that you might want? What happens when the person who wraps the code abandons SciPy? What if the original authors of the library abandon their code? What if the library changes licenses? All of these problems are easily solved by a Python implementation. > Focus the effort that would have gone into debugging and testing your RK solver on solving > the quirks of your own problem. You ignore all the potential problems that may arise when wrapping someone else's code and the difficulty with providing a Pythonic interface. > Once in a while you'll run into a problem when you need more > performance, or different abilities, or special techniques, that the > library doesn't cover. But until that happens, why waste your time > reinventing the wheel? Try the stock solutions first. If they don't > work, understand why, and whether reimplementing them will solve the > problem or just hide it. In this case the wheel is 20 lines of dead-simple Python + NumPy code. I'd wager that the handling of parameters and callbacks is more effort than the numerical component. IMO the pure Python + NumPy approach wins hands down here. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From peridot.faceted at gmail.com Mon Mar 17 03:08:42 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 17 Mar 2008 03:08:42 -0400 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? In-Reply-To: References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> Message-ID: On 17/03/2008, Nathan Bell wrote: > On Mon, Mar 17, 2008 at 12:00 AM, Anne Archibald > wrote: > > I have to disagree with this. Getting numerical code right, and > > robust, and bug-free, requires a lot of knowledge, skill and care. > > As does wrapping libraries. Keep in mind that only a minority of > SciPy users/contributors will ever know SWIG or f2py, etc. Keeping > things in Python greatly enhances accessibility of the implementation. Yes, well, as the OP made it fairly clear that they just needed an ODE solver, not specifically an RK solver, I had in mind using one of the several good solvers built into scipy. We could argue about the relative difficulties of wrapping libraries versus implementing subtle numerical code, but I don't think that's very productive. Both wrapping a library and implementing numerical code are reinventing the wheel compared to using what's already available (if it does what you want); whether you find it easier to track down subtle numerical bugs or failures in a wrapper tool will depend on your case. Anne From wnbell at gmail.com Mon Mar 17 03:30:32 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 17 Mar 2008 02:30:32 -0500 Subject: [SciPy-user] Runge-Kutta ODE integrator in SciPy, odepack problems on SciPy Superpack for OSX? 
In-Reply-To: 
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org>
Message-ID: 

On Mon, Mar 17, 2008 at 2:08 AM, Anne Archibald wrote:
> Yes, well, as the OP made it fairly clear that they just needed an ODE solver, not specifically an RK solver, I had in mind using one of the several good solvers built into scipy.

I don't see where the OP made such a statement, but obviously if he's content with scipy.integrate.odeint then he should probably use scipy.integrate.odeint :)

--
Nathan Bell
wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From kurtjx at gmail.com Mon Mar 17 13:58:13 2008
From: kurtjx at gmail.com (Kurt J)
Date: Mon, 17 Mar 2008 17:58:13 +0000
Subject: [SciPy-user] indexes in very large matrices
Message-ID: 

Hi List,

I have a rather large matrix (15k x 15k) and I was hoping there was a nice efficient way to search thru it. Currently I am just using for loops to go thru the rows and columns...

> for i, iKey in enumerate(self.matrixKey):
>     for j, jKey in enumerate(self.matrixKey):
>         if self.matrix[i][j] <= threshold and not self.matrix[i][j] == 0.0:
>             print "threshold at " + str(iKey) + " and " + str(jKey)

Can I do something more clever? Quicker?

BTW Numpy and Scipy rock!!!

Cheers,
Kurt J
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Mon Mar 17 14:09:26 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 17 Mar 2008 13:09:26 -0500
Subject: [SciPy-user] indexes in very large matrices
In-Reply-To: 
References: 
Message-ID: <3d375d730803171109u56d7214bk46928227adc304d5@mail.gmail.com>

On Mon, Mar 17, 2008 at 12:58 PM, Kurt J wrote:
> Hi List,
>
> I have a rather large matrix (15k x 15k) and I was hoping there was a nice efficient way to search thru it. Currently I am just using for loops to go thru the rows and columns...
>
> > for i, iKey in enumerate(self.matrixKey):
> >     for j, jKey in enumerate(self.matrixKey):
> >         if self.matrix[i][j] <= threshold and not self.matrix[i][j] == 0.0:
> >             print "threshold at " + str(iKey) + " and " + str(jKey)
>
> Can I do something more clever? Quicker?

Sure.
iarr, jarr = numpy.nonzero((self.matrix <= threshold) & (self.matrix != 0)) for i, j in zip(iarr, jarr): print 'threshold at %s and %s' % (self.matrixKey[i], self.matrixKey[j]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kurtjx at gmail.com Mon Mar 17 14:32:55 2008 From: kurtjx at gmail.com (Kurt J) Date: Mon, 17 Mar 2008 18:32:55 +0000 Subject: [SciPy-user] indexes in very large matrices In-Reply-To: <3d375d730803171109u56d7214bk46928227adc304d5@mail.gmail.com> References: <3d375d730803171109u56d7214bk46928227adc304d5@mail.gmail.com> Message-ID: Whoa! that is _really_ fast. Thanks Robert!! However, it seems i'm only going thru the first row of my matrix, i'm a bit confused... On Mon, Mar 17, 2008 at 6:09 PM, Robert Kern wrote: > On Mon, Mar 17, 2008 at 12:58 PM, Kurt J wrote: > > Hi List, > > > > I have a rather large matrix (15k x 15k) and I was hoping there was a > nice > > efficient way to search thru it. Currently I am just using for loops to > go > > thru the rows and columns... > > > > > for i, iKey in enumerate(self.matrixKey): > > > for j, jKey in enumerate(self.matrixKey): > > > if self.matrix[i][j] <= threshold and not > > self.matrix[i][j] == 0.0: > > > print "threshold at " + str(iKey) + " and " > +str(jKey) > > > > > > > > > Can I do something more clever? Quicker? > > Sure. > > iarr, jarr = numpy.nonzero((self.matrix <= threshold) & (self.matrix != > 0)) > for i, j in zip(iarr, jarr): > print 'threshold at %s and %s' % (self.matrixKey[i], self.matrixKey > [j]) > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurtjx at gmail.com Mon Mar 17 14:43:50 2008 From: kurtjx at gmail.com (Kurt J) Date: Mon, 17 Mar 2008 18:43:50 +0000 Subject: [SciPy-user] indexes in very large matrices In-Reply-To: References: <3d375d730803171109u56d7214bk46928227adc304d5@mail.gmail.com> Message-ID: Ok. seems the bug was elsewhere in my code... sorry for the confusion. Thanks again!!! On Mon, Mar 17, 2008 at 6:32 PM, Kurt J wrote: > Whoa! that is _really_ fast. Thanks Robert!! However, it seems i'm only > going thru the first row of my matrix, i'm a bit confused... > > > On Mon, Mar 17, 2008 at 6:09 PM, Robert Kern > wrote: > > > On Mon, Mar 17, 2008 at 12:58 PM, Kurt J wrote: > > > Hi List, > > > > > > I have a rather large matrix (15k x 15k) and I was hoping there was a > > nice > > > efficient way to search thru it. Currently I am just using for loops > > to go > > > thru the rows and columns... > > > > > > > for i, iKey in enumerate(self.matrixKey): > > > > for j, jKey in enumerate(self.matrixKey): > > > > if self.matrix[i][j] <= threshold and not > > > self.matrix[i][j] == 0.0: > > > > print "threshold at " + str(iKey) + " and " > > +str(jKey) > > > > > > > > > > > > > Can I do something more clever? Quicker? > > > > Sure. 
> > > > iarr, jarr = numpy.nonzero((self.matrix <= threshold) & (self.matrix != > > 0)) > > for i, j in zip(iarr, jarr): > > print 'threshold at %s and %s' % (self.matrixKey[i], self.matrixKey > > [j]) > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless > > enigma that is made terrible by our own mad attempt to interpret it as > > though it had an underlying truth." > > -- Umberto Eco > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zane at ideotrope.org Mon Mar 17 14:53:27 2008 From: zane at ideotrope.org (Zane Selvans) Date: Mon, 17 Mar 2008 11:53:27 -0700 Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools In-Reply-To: References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> Message-ID: <47DEBE27.7020605@ideotrope.org> >> As Andrew mentioned, writing a generic RK method (that accepts used >> defined derivatives) is pretty trivial. If you stored the >> coefficients in the so-called "Butcher arrays" as arrays, as opposed >> to hard coding for a particular order (e.g. RK2 or RK4), then you can >> unify all (explicit) RK methods easily too. Furthermore, adaptive RK >> methods (e.g. RK45) are simply a post-process after a standard RK step >> that can be handled in a uniform fashion. Hmm. Maybe these coefficients are what all the mysterious lines of numbers are in the Fortran code I have. No comments. Lots of GOTO statements, and dozens of "magic numbers" so far as I can tell. I'd love to not have to use it any more (and not have to worry about whether I can share someone else's code without upsetting them too). Does anyone know a good straightforward reference for implementing DiffEq solvers? In pure Python would be fine - I'm not super concerned with computational efficiency (the Fortran code I have runs instantaneously, so even if it were 100 times slower I probably wouldn't notice) > Once in a while you'll run into a problem when you need more > performance, or different abilities, or special techniques, that the > library doesn't cover. I discovered that my scipy.integrate.ode package was broken, and had to figure out what was up with my install. It seems to work now so I can attempt to use the built-in tools first. Once I understand how to specify my problem to them. Unfortunately the specification I have of the problem (in our paper) is a little vague, and I haven't done this kind of solving coupled diffeqs before. Maybe I'll try a test case first. Does anyone have a little piece of example code that uses the built in ODE solvers successfully on their system so I can test my system, and get a better idea of how to use the tools? Maybe running on a set of equations with a known analytic solution for comparison? > P.S. your problem is really making me curious - are you really > simulating self-gravitating viscoelastic material? Is this geophysics? > Somehow that sounds more like a PDE type of problem than ODE; PDE > solvers are much less standardized and I don't know that there are > stock tools to attack them. 
-A I'm a grad student at the University of Colorado/Caltech working on the tectonics of icy satellites - I'm modeling the tidally induced stresses on the surface of an icy satellite that has a subsurface ocean (like Europa or Ganymede or Titan), so that I can compare those stresses to the tectonic features we see in the Galileo and Cassini spacecraft datasets. On long timescales, the ice will behave viscoelastically, and so its response to a periodic forcing has a frequency dependence. That frequency dependence is primarily in the Lame parameters of the material (shear modulus mu, and the other Lame parameter lambda), but through them, via some PDEs, a frequency dependence in the Love numbers (which describe how the body responds to a change in the gravitational potential) also exists. The PDEs can be turned into ODEs through separation of variables. Thanks for your continued attention! Zane -- Zane Selvans Amateur Human zane at ideotrope.org 303/815-6866 PGP Key: 55E0815F -------------- next part -------------- A non-text attachment was scrubbed... Name: zane.vcf Type: text/x-vcard Size: 254 bytes Desc: not available URL: From peridot.faceted at gmail.com Mon Mar 17 16:57:12 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 17 Mar 2008 16:57:12 -0400 Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools In-Reply-To: <47DEBE27.7020605@ideotrope.org> References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> <47DEBE27.7020605@ideotrope.org> Message-ID: On 17/03/2008, Zane Selvans wrote: > > Once in a while you'll run into a problem when you need more > > performance, or different abilities, or special techniques, that the > > library doesn't cover. > > I discovered that my scipy.integrate.ode package was broken, and had to > figure out what was up with my install. It seems to work now so I can > attempt to use the built-in tools first. Once I understand how to > specify my problem to them. Unfortunately the specification I have of > the problem (in our paper) is a little vague, and I haven't done this > kind of solving coupled diffeqs before. Maybe I'll try a test case first. > > Does anyone have a little piece of example code that uses the built in > ODE solvers successfully on their system so I can test my system, and > get a better idea of how to use the tools? Maybe running on a set of > equations with a known analytic solution for comparison? There is (now) an entry in the scipy tutorial: http://www.scipy.org/SciPy_Tutorial#head-3daf2af2101f73b650604a8e5553301ba5d5cfc6 It's very simple, but it should be enough to get you started. You can also verify that your scipy is installed correctly with scipy.test(). > > P.S. your problem is really making me curious - are you really > > simulating self-gravitating viscoelastic material? Is this geophysics? > > Somehow that sounds more like a PDE type of problem than ODE; PDE > > solvers are much less standardized and I don't know that there are > > stock tools to attack them. -A > > I'm a grad student at the University of Colorado/Caltech working on the > tectonics of icy satellites - I'm modeling the tidally induced stresses > on the surface of an icy satellite that has a subsurface ocean (like > Europa or Ganymede or Titan), so that I can compare those stresses to > the tectonic features we see in the Galileo and Cassini spacecraft > datasets. 
On long timescales, the ice will behave viscoelastically, and > so its response to a periodic forcing has a frequency dependence. That > frequency dependence is primarily in the Lame parameters of the material > (shear modulus mu, and the other Lame parameter lambda), but through > them, via some PDEs, a frequency dependence in the Love numbers (which > describe how the body responds to a change in the gravitational > potential) also exists. The PDEs can be turned into ODEs through > separation of variables. Fascinating! I'd love to see the results when you get it working... Good luck, Anne From zane at ideotrope.org Mon Mar 17 17:10:01 2008 From: zane at ideotrope.org (Zane Selvans) Date: Mon, 17 Mar 2008 14:10:01 -0700 Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools In-Reply-To: References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> <47DEBE27.7020605@ideotrope.org> Message-ID: <47DEDE29.7030309@ideotrope.org> Anne Archibald wrote: > On 17/03/2008, Zane Selvans wrote: > >> Does anyone have a little piece of example code that uses the built in >> ODE solvers successfully on their system so I can test my system, and >> get a better idea of how to use the tools? Maybe running on a set of >> equations with a known analytic solution for comparison? > > There is (now) an entry in the scipy tutorial: > http://www.scipy.org/SciPy_Tutorial#head-3daf2af2101f73b650604a8e5553301ba5d5cfc6 > > It's very simple, but it should be enough to get you started. Thanks, I also found another couple of examples of varying complexity: http://www.scipy.org/LoktaVolterraTutorial http://www.scipy.org/SciPyPackages/Integrate > You can also verify that your scipy is installed correctly with scipy.test(). Unfortunately, I can't actually. The version I finally got to work came from the current SVN repository, and the test() framework is undergoing some changes. It's currently not functional. But I did run through the above examples in my interpreter, and it gives the right answers, so it seems I have the integrate package working. Will play with it a bit now. >> I'm a grad student at the University of Colorado/Caltech working on the >> tectonics of icy satellites - I'm modeling the tidally induced stresses >> on the surface of an icy satellite that has a subsurface ocean (like >> Europa or Ganymede or Titan), so that I can compare those stresses to > Fascinating! I'd love to see the results when you get it working... Well, glad to know there's at least one person on Earth who thinks it's interesting :) -- Zane Selvans Amateur Human zane at ideotrope.org 303/815-6866 PGP Key: 55E0815F -------------- next part -------------- A non-text attachment was scrubbed... Name: zane.vcf Type: text/x-vcard Size: 254 bytes Desc: not available URL: From bevan07 at gmail.com Mon Mar 17 17:29:59 2008 From: bevan07 at gmail.com (bevan) Date: Mon, 17 Mar 2008 21:29:59 +0000 (UTC) Subject: [SciPy-user] timeseries and maskedarrays References: Message-ID: >>snip<< Thanks for the replies - Arthur, I will check out the CDAT package in more depth in the future but at this stage I am keen to get my head around the timeseries package. Matt, Thanks heaps, your post has really helped me to get timeseries going. Of course this meant a rewrite to use my new understanding. I have created the maskedarrays as you suggested (i think) but i run into an issue when I create monthly summaries with timeseries data that has a masked array. 
I have put some sample code below. The issue is the results of the ma.sum when converting masked daily data to monthly. The masked values are being expressed as numbers rather than the mask symbol I would expect. Is this related to the versions I am using?

import numpy as np
from numpy import ma
import timeseries as ts

# create a simple test series at daily frequency
testmask = [False,False,True,False,False,False,False,False,False,False,
            False,False,False,True,False,False,False,False,False,False]
series = ts.time_series(np.random.normal(size=20), start_date=ts.now('d'), mask=testmask)

monthly_sums = series.convert('monthly', ma.sum)
monthly = series.convert('monthly')
print monthly_sums
#print monthly

>>> [ 2.00000000e+20 -7.82157628e-01]

From stefan at sun.ac.za Mon Mar 17 18:14:14 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 17 Mar 2008 23:14:14 +0100
Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools
In-Reply-To: <47DEDE29.7030309@ideotrope.org>
References: <1FA8105E-095B-4610-ABE8-57EE9D711AE4@enthought.com> <47D89A6F.8040807@ideotrope.org> <20080313162705.GA6473@giton> <47D9B3B9.9030603@ideotrope.org> <47DEBE27.7020605@ideotrope.org> <47DEDE29.7030309@ideotrope.org>
Message-ID: <9457e7c80803171514l6098c865p994bf2e2e6c0fe4a@mail.gmail.com>

On Mon, Mar 17, 2008 at 10:10 PM, Zane Selvans wrote:
> Anne Archibald wrote:
> > On 17/03/2008, Zane Selvans wrote:
> >
> >> Does anyone have a little piece of example code that uses the built in ODE solvers successfully on their system so I can test my system, and get a better idea of how to use the tools? Maybe running on a set of equations with a known analytic solution for comparison?
> >
> > There is (now) an entry in the scipy tutorial:
> > http://www.scipy.org/SciPy_Tutorial#head-3daf2af2101f73b650604a8e5553301ba5d5cfc6
> >
> > It's very simple, but it should be enough to get you started.
>
> Thanks, I also found another couple of examples of varying complexity:
>
> http://www.scipy.org/LoktaVolterraTutorial
> http://www.scipy.org/SciPyPackages/Integrate
>
> > You can also verify that your scipy is installed correctly with scipy.test().
>
> Unfortunately, I can't actually. The version I finally got to work came from the current SVN repository, and the test() framework is undergoing some changes. It's currently not functional. But I did run through the above examples in my interpreter, and it gives the right answers, so it seems I have the integrate package working. Will play with it a bit now.
>
> >> I'm a grad student at the University of Colorado/Caltech working on the tectonics of icy satellites - I'm modeling the tidally induced stresses on the surface of an icy satellite that has a subsurface ocean (like Europa or Ganymede or Titan), so that I can compare those stresses to
> >
> > Fascinating! I'd love to see the results when you get it working...
>
> Well, glad to know there's at least one person on Earth who thinks it's interesting :)

Certainly, more than one! Let us know when you are done and do show some examples -- or even better, register a talk at the next SciPy conference.

Regards
Stéfan

From lou_boog2000 at yahoo.com Mon Mar 17 20:56:45 2008
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Mon, 17 Mar 2008 17:56:45 -0700 (PDT)
Subject: [SciPy-user] Runge-Kutta ODE vs.
built-in tools In-Reply-To: <47DEBE27.7020605@ideotrope.org> Message-ID: <263415.23733.qm@web34405.mail.mud.yahoo.com> --- Zane Selvans wrote: > Does anyone know a good straightforward reference > for implementing > DiffEq solvers? In pure Python would be fine - I'm > not super concerned > with computational efficiency (the Fortran code I > have runs > instantaneously, so even if it were 100 times slower > I probably wouldn't > notice) In addition to those online tutorials, I would recommend Numerical Recipes by Press, et al. (Cambridge Press, I think). They give some good, physical, applied math introductions to many numerical procedures including ODE solving. Not too much, not too little, but usually enough to help you write your own (or use theirs). By the way, I come down somewhere in the middle of the "argument" going on in this thread. I think using well-tested, robust, optimized code is a good idea. But when you're on the learning curve it's really good to write a few of your own packages. Not with all the bells and whistles, but with enough features that you get good results for some simpler cases. You really benefit by learning what is and is not important and how the machinery works. I think it helps anyone doing numerical calculations to know basically what the function he/she is calling is doing. Even something well established like FFTs require some sense of what is and is not possible (e.g. knowing about Nyquist frequencies). Never use a function as a pure black box about which you understand nothing. I doubt anyone here was suggesting that extreme circumstance, but I am continually amazed at how many people request someone's research software in science so they can just throw data into it and see what comes out. Such trusting souls. :-) -- Lou Pecora, my views are my own. __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ac1201 at gmail.com Mon Mar 17 23:20:44 2008 From: ac1201 at gmail.com (Andrew Charles) Date: Tue, 18 Mar 2008 14:20:44 +1100 Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools In-Reply-To: <263415.23733.qm@web34405.mail.mud.yahoo.com> References: <47DEBE27.7020605@ideotrope.org> <263415.23733.qm@web34405.mail.mud.yahoo.com> Message-ID: On Tue, Mar 18, 2008 at 11:56 AM, Lou Pecora wrote: > > --- Zane Selvans wrote: > > > Does anyone know a good straightforward reference > > for implementing > > DiffEq solvers? In pure Python would be fine - I'm > > not super concerned > > with computational efficiency (the Fortran code I > > have runs > > instantaneously, so even if it were 100 times slower > > I probably wouldn't > > notice) I found the SIGGRAPH 95 Introduction to Physically Based Modelling material (http://www.cs.cmu.edu/~baraff/pbm/) to be extremely useful. There's not a great deal of code, and what there is is in C, but it gives what I think is an excellent introduction to basic ODE solvers. From pgmdevlist at gmail.com Tue Mar 18 10:00:57 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 18 Mar 2008 10:00:57 -0400 Subject: [SciPy-user] timeseries and maskedarrays In-Reply-To: References: Message-ID: <200803181000.58526.pgmdevlist@gmail.com> On Monday 17 March 2008 17:29:59 bevan wrote: >> The issue is the results of the > ma.sum when converting masked daily data to monthly. The masked values are > being expressed as numbers rather than the mask symbol I would expect. Is > this related to the versions I am using? 
Nah, it's more likely a bug of some sort. The conversion functions are written in C, and the transformation from C arrays back to MaskedArrays is a bit flaky. We should spend some time on polishing that up. Please file some kind of bug report, or contact us off-list with as much info as you can. Please note that I won't be able to try anything before next week, however...

Thx for your interest.
P.

From forrest.bao at gmail.com Tue Mar 18 15:08:36 2008
From: forrest.bao at gmail.com (Forrest Sheng Bao)
Date: Tue, 18 Mar 2008 14:08:36 -0500
Subject: [SciPy-user] normalized frequencies to pi radians / sample
Message-ID: <889df5f00803181208g31461b3bme02dc7876b309b52@mail.gmail.com>

Hi,

I am trying to use the buttord function to compute the order and frequency of a Butterworth filter.

But I got confused about the passband and stopband edge frequency. According to the doc,

    wp, ws -- Passband and stopband edge frequencies, normalized from 0
              to 1 (1 corresponds to pi radians / sample). For example:
                 Lowpass: wp = 0.2, ws = 0.3
                 Highpass: wp = 0.3, ws = 0.2
                 Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]
                 Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]

I should normalize the frequency to pi radians / sample. Now, suppose a frequency of 10 Hz and my sampling rate is 100 Hz. Is the normalized frequency 0.1?

Thanks,
Forrest

--
Forrest Sheng Bao
Ph.D. student, Dept. of Computer Science
M.Sc. student, Dept. of Electrical & Computer Engineering
Rm 115, Experimental Sciences Building
Texas Tech University, USA
http://fsbao.net 1-806-577-4592

Forrest is an equal opportunity Email sender. 1. You are encouraged to use the language you prefer. Beyond English, I can also read traditional/simplified Chinese and a bit German. 2. I will only send you files readable to free or open source software.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Tue Mar 18 15:35:13 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 18 Mar 2008 14:35:13 -0500
Subject: [SciPy-user] normalized frequencies to pi radians / sample
In-Reply-To: <889df5f00803181208g31461b3bme02dc7876b309b52@mail.gmail.com>
References: <889df5f00803181208g31461b3bme02dc7876b309b52@mail.gmail.com>
Message-ID: <3d375d730803181235t58b9cb3cubc960888ba1bf8d7@mail.gmail.com>

On Tue, Mar 18, 2008 at 2:08 PM, Forrest Sheng Bao wrote:
> Hi,
>
> I am trying to use the buttord function to compute the order and frequency of a Butterworth filter.
>
> But I got confused about the passband and stopband edge frequency. According to the doc,
>
>     wp, ws -- Passband and stopband edge frequencies, normalized from 0
>               to 1 (1 corresponds to pi radians / sample). For example:
>                  Lowpass: wp = 0.2, ws = 0.3
>                  Highpass: wp = 0.3, ws = 0.2
>                  Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]
>                  Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]
>
> I should normalize the frequency to pi radians / sample. Now, suppose a frequency of 10 Hz and my sampling rate is 100 Hz. Is the normalized frequency 0.1?

0.2, I believe. If the sampling rate is 100 Hz, the Nyquist frequency (pi radians/sample) is half that: 50 Hz.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From oliphant at enthought.com Tue Mar 18 15:45:43 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 18 Mar 2008 14:45:43 -0500
Subject: [SciPy-user] normalized frequencies to pi radians / sample
In-Reply-To: <889df5f00803181208g31461b3bme02dc7876b309b52@mail.gmail.com>
References: <889df5f00803181208g31461b3bme02dc7876b309b52@mail.gmail.com>
Message-ID: <47E01BE7.5090305@enthought.com>

Forrest Sheng Bao wrote:
> Hi,
>
> I am trying to use the buttord function to compute the order and frequency of a Butterworth filter.
>
> But I got confused about the passband and stopband edge frequency. According to the doc,
>
>     wp, ws -- Passband and stopband edge frequencies, normalized from 0
>               to 1 (1 corresponds to pi radians / sample). For example:
>                  Lowpass: wp = 0.2, ws = 0.3
>                  Highpass: wp = 0.3, ws = 0.2
>                  Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]
>                  Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]
>
> I should normalize the frequency to pi radians / sample. Now, suppose a frequency of 10 Hz and my sampling rate is 100 Hz. Is the normalized frequency 0.1?

Simple answer: Not quite. pi radians / sample is the Nyquist frequency, which is 1/2 the sampling rate (or 50 Hz). Thus, 10 Hz corresponds to 0.2 in normalized space.

There are two basic steps in the normalization.

1) Convert from analog to digital (this maps analog frequency to digital [-pi to pi]; 1/2 the sampling frequency -> pi radians / sample).
2) Convert from [-pi to pi] to [-1 to 1] (this maps 1/2 the sampling frequency to 1).

You can also just think about filtering in the "sampled" domain, recognizing that 1 corresponds to a frequency of 1/2 cycle / sample. So, a 10 Hz wave-form sampled at 100 Hz would correspond to 10 samples / cycle (i.e. 0.1 cycles / sample, or 0.2 in units of 1/2-cycles / sample).

Obviously the documentation could use some help to guide the uninitiated.

Best regards,

-Travis O.

From s.mientki at ru.nl Tue Mar 18 17:34:27 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Tue, 18 Mar 2008 22:34:27 +0100
Subject: [SciPy-user] extending a 2-dim array ?
Message-ID: <47E03563.1030709@ru.nl>

hello,

To combine a number of signals (of possibly different length), I need to extend a 2-dimensional array.

I succeeded, but it looks much too complex in my opinion:

a_list = [1,2,3,4,5]
a_vector = array ( a_list )
b_list = [11,22,33,44,55]
b_vector = array ( b_list )

ab_array = vstack (( a_vector, b_vector ))

# extend each signal with 1 zero element
extend = transpose ( vstack ( ( transpose ( ab_array ), zeros ((1,2)) )) )

does anyone know a more elegant (= more simple) method ?

thanks,
Stef Mientki

From robert.kern at gmail.com Tue Mar 18 17:40:33 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 18 Mar 2008 16:40:33 -0500
Subject: [SciPy-user] extending a 2-dim array ?
In-Reply-To: <47E03563.1030709@ru.nl>
References: <47E03563.1030709@ru.nl>
Message-ID: <3d375d730803181440x7d47cabfg71c99b6604dec6f0@mail.gmail.com>

On Tue, Mar 18, 2008 at 4:34 PM, Stef Mientki wrote:
> hello,
>
> To combine a number of signals (of possibly different length), I need to extend a 2-dimensional array.
>
> I succeeded, but it looks much too complex in my opinion:
>
> a_list = [1,2,3,4,5]
> a_vector = array ( a_list )
> b_list = [11,22,33,44,55]
> b_vector = array ( b_list )
>
> ab_array = vstack (( a_vector, b_vector ))
>
> # extend each signal with 1 zero element
> extend = transpose ( vstack ( ( transpose ( ab_array ), zeros ((1,2)) )) )
>
> does anyone know a more elegant (= more simple) method ?

Use hstack([ab_array, zeros([2,1])]).
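For example, with the arrays from your message (a quick sketch -- only the shape matters here):

from numpy import array, vstack, hstack, zeros

ab_array = vstack((array([1, 2, 3, 4, 5]),
                   array([11, 22, 33, 44, 55])))
extended = hstack([ab_array, zeros([2, 1])])
# extended has shape (2, 6): each signal gains one trailing zero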
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From forrest.bao at gmail.com Tue Mar 18 18:09:09 2008 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Tue, 18 Mar 2008 17:09:09 -0500 Subject: [SciPy-user] the physical frequency of FFT Message-ID: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com> Hi, I have some other questions about the fft function. Suppose I have N sampling points and the sampling rate is Fs. I performed an N-point FFT, using the code *fft(my_list_of_data)*. 1. Doesn't the k-th point (0 From stef.mientki at gmail.com Tue Mar 18 18:22:23 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Tue, 18 Mar 2008 23:22:23 +0100 Subject: [SciPy-user] extending a 2-dim array ? In-Reply-To: <3d375d730803181440x7d47cabfg71c99b6604dec6f0@mail.gmail.com> References: <47E03563.1030709@ru.nl> <3d375d730803181440x7d47cabfg71c99b6604dec6f0@mail.gmail.com> Message-ID: <47E0409F.5040103@gmail.com> Robert Kern wrote: > On Tue, Mar 18, 2008 at 4:34 PM, Stef Mientki wrote: > >> hello, >> >> To combine a number of signals (of possible different length), >> I need to extend a 2-dimensional array. >> >> I succeeded, but it looks much too complex in my opinion: >> >> a_list = [1,2,3,4,5] >> a_vector = array ( a_list ) >> b_list = [11,22,33,44,55] >> b_vector = array ( b_list ) >> >> ab_array = vstack (( a_vector, b_vector )) >> >> # extend each signal with 1 zero element >> extend = transpose ( vstack ( ( transpose ( ab_array ), zeros ((1,2)) )) ) >> >> >> does anyone know a more elegant (= more simple) method ? >> > > Use hstack([ab_array, zeros([2,1])]). > > thanks Robert, I thought I had tried that and got an error, but I must have made a mistake. It now works like a charm. cheers, Stef From robert.kern at gmail.com Tue Mar 18 18:30:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Mar 2008 17:30:43 -0500 Subject: [SciPy-user] the physical frequency of FFT In-Reply-To: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com> References: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com> Message-ID: <3d375d730803181530y13962068l55ee04b63b080228@mail.gmail.com> On Tue, Mar 18, 2008 at 5:09 PM, Forrest Sheng Bao wrote: > Hi, > > I have some other questions about the fft function. Suppose I have N > sampling points and the sampling rate is Fs. I performed an N-point FFT, > using the code fft(my_list_of_data). > > > Doesn't the k-th point (0 (k*Fs)/(2*N)? It follows the conventional FFT packing: In [9]: numpy.fft.fftfreq? Type: function Base Class: Namespace: Interactive File: /Users/rkern/svn/numpy/numpy/fft/helper.py Definition: numpy.fft.fftfreq(n, d=1.0) Docstring: fftfreq(n, d=1.0) -> f DFT sample frequencies The returned float array contains the frequency bins in cycles/unit (with zero at the start) given a window length n and a sample spacing d: f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n) if n is odd > Consider the single-sided case. If I just plot the FFT result by > plot(fft(my_list_of_data)), the x-coordinate ([0,Pi]) is the image part and > the amplitude is the real part, right? No. I believe that matplotlib (I have to assume this is the plotting library you are using; please state so in the future) just ignores the imaginary part. 
The X-coordinate is just arange(len(my_list_of_data)), and the Y-coordinate is the real part of the FFT. Because of the packing of the FFT, you will need to rearrange things in order to make a sensible plot. I recommend using fftfreq() to get an array of corresponding frequencies and using fftshift() on both the frequency array and the FFT. In [10]: numpy.fft.fftshift? Type: function Base Class: Namespace: Interactive File: /Users/rkern/svn/numpy/numpy/fft/helper.py Definition: numpy.fft.fftshift(x, axes=None) Docstring: fftshift(x, axes=None) -> y Shift zero-frequency component to center of spectrum. This function swaps half-spaces for all axes listed (defaults to all). Notes: If len(x) is even then the Nyquist component is y[0]. > If I wanna estimate the power spectrum at each frequency, should I square > the real part or computer the norm (square root of the sum of squares of the > real and image part) of the complex number, at each FFT point? Just the sum of squares, I believe. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From forrest.bao at gmail.com Tue Mar 18 18:57:53 2008 From: forrest.bao at gmail.com (Forrest Sheng Bao) Date: Tue, 18 Mar 2008 17:57:53 -0500 Subject: [SciPy-user] the physical frequency of FFT In-Reply-To: <3d375d730803181530y13962068l55ee04b63b080228@mail.gmail.com> References: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com> <3d375d730803181530y13962068l55ee04b63b080228@mail.gmail.com> Message-ID: <889df5f00803181557t1112f160n6d419248e74d66a7@mail.gmail.com> On Tue, Mar 18, 2008 at 5:30 PM, Robert Kern wrote: > On Tue, Mar 18, 2008 at 5:09 PM, Forrest Sheng Bao wrote: > > Hi, > > > > I have some other questions about the fft function. Suppose I have N > > sampling points and the sampling rate is Fs. I performed an N-point FFT, > > using the code fft(my_list_of_data). > > > > > > Doesn't the k-th point (0 > (k*Fs)/(2*N)? > I saw an example at Scipy Getting Started guide: http://www.scipy.org/Getting_Started Can you take a look at the demo In [2] to In [5]. It seems each frequency bin occupies a bandwidth of Fs/N and those frequency bins are ordered from 0 to N-1. > > It follows the conventional FFT packing: > > In [9]: numpy.fft.fftfreq? > Type: function > Base Class: > Namespace: Interactive > File: /Users/rkern/svn/numpy/numpy/fft/helper.py > Definition: numpy.fft.fftfreq(n, d=1.0) > Docstring: > fftfreq(n, d=1.0) -> f > > DFT sample frequencies > > The returned float array contains the frequency bins in > cycles/unit (with zero at the start) given a window length n and a > sample spacing d: > > f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even > f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n) if n is odd Ok, each point in the result of FFT correspond to the frequency bin calculated by fftfreq. So the k-th point of the first half points of fft output, indexing from 0 to N/2, correspond the physical frequency k*Fs/N. Thus, if I wanna consider a frequency band 2-4Hz while the sampling rate is 100Hz and I have 1000 sampled points, then I just need to consider the results between the 20th and the 40th points. Is my understanding right? > > > > Consider the single-sided case. If I just plot the FFT result by > > plot(fft(my_list_of_data)), the x-coordinate ([0,Pi]) is the image part > and > > the amplitude is the real part, right? > > No. 
I believe that matplotlib (I have to assume this is the plotting > library you are using; please state so in the future) just ignores the > imaginary part. The X-coordinate is just arange(len(my_list_of_data)), > and the Y-coordinate is the real part of the FFT. Because of the > packing of the FFT, you will need to rearrange things in order to make > a sensible plot. I recommend using fftfreq() to get an array of > corresponding frequencies and using fftshift() on both the frequency > array and the FFT. > I just wanna compute the power spectrum to each physical frequency band. Since my signal is real, the FFT result is symmetric. So I think that I can neglect the part of negative frequency. > > > In [10]: numpy.fft.fftshift? > Type: function > Base Class: > Namespace: Interactive > File: /Users/rkern/svn/numpy/numpy/fft/helper.py > Definition: numpy.fft.fftshift(x, axes=None) > Docstring: > fftshift(x, axes=None) -> y > > Shift zero-frequency component to center of spectrum. > > This function swaps half-spaces for all axes listed (defaults to all). > > Notes: > If len(x) is even then the Nyquist component is y[0]. > > > If I wanna estimate the power spectrum at each frequency, should I > square > > the real part or computer the norm (square root of the sum of squares of > the > > real and image part) of the complex number, at each FFT point? > > Just the sum of squares, I believe. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Wed Mar 19 04:43:05 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 19 Mar 2008 09:43:05 +0100 Subject: [SciPy-user] the physical frequency of FFT In-Reply-To: <889df5f00803181557t1112f160n6d419248e74d66a7@mail.gmail.com> References: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com> <3d375d730803181530y13962068l55ee04b63b080228@mail.gmail.com> <889df5f00803181557t1112f160n6d419248e74d66a7@mail.gmail.com> Message-ID: <80c99e790803190143i1d9ade8fi8f2671b8834621ff@mail.gmail.com> > > > > > > > It follows the conventional FFT packing: > > > > In [9]: numpy.fft.fftfreq? > > Type: function > > Base Class: > > Namespace: Interactive > > File: /Users/rkern/svn/numpy/numpy/fft/helper.py > > Definition: numpy.fft.fftfreq(n, d=1.0) > > Docstring: > > fftfreq(n, d=1.0) -> f > > > > DFT sample frequencies > > > > The returned float array contains the frequency bins in > > cycles/unit (with zero at the start) given a window length n and a > > sample spacing d: > > > > f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even > > f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n) if n is odd > > > Ok, each point in the result of FFT correspond to the frequency bin > calculated by fftfreq. So the k-th point of the first half points of fft > output, indexing from 0 to N/2, correspond the physical frequency k*Fs/N. > > Thus, if I wanna consider a frequency band 2-4Hz while the sampling rate > is 100Hz and I have 1000 sampled points, then I just need to consider the > results between the 20th and the 40th points. Is my understanding right? > I guess you meant: "frequency band 2-4 kHz". Then yes: take the samples between 20 and 40. 
> > > > > > > > > > Consider the single-sided case. If I just plot the FFT result by > > > plot(fft(my_list_of_data)), the x-coordinate ([0,Pi]) is the image > > part and > > > the amplitude is the real part, right? > > > > No. I believe that matplotlib (I have to assume this is the plotting > > library you are using; please state so in the future) just ignores the > > imaginary part. The X-coordinate is just arange(len(my_list_of_data)), > > and the Y-coordinate is the real part of the FFT. Because of the > > packing of the FFT, you will need to rearrange things in order to make > > a sensible plot. I recommend using fftfreq() to get an array of > > corresponding frequencies and using fftshift() on both the frequency > > array and the FFT. > > > > I just wanna compute the power spectrum to each physical frequency band. > Since my signal is real, the FFT result is symmetric. So I think that I can > neglect the part of negative frequency. > Correct. Just take the abs**2 of the FFT coeffs. L. -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Anderson at hull.ac.uk Wed Mar 19 05:42:07 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Wed, 19 Mar 2008 09:42:07 -0000 Subject: [SciPy-user] Newbie, deferring large array generation References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com> <3d375d730803111722o1869d73fw2747724ac05a30b9@mail.gmail.com> Message-ID: Hello All, I have a problem in that on occasion I need to generate very large arrays that won't fit into memory. For what I'm doing, calling linspace is convenient, so I'd like to be able to use it. A very simple example of my problem is the case: a = linspace(start, stop, very_big_num) As a python/scipy newbie, I'm just becoming familiar with appropriate coding patterns. One thought was whether it is possible to use generators to defer the array creation. I'd like to be able to use linspace (pleasantly simple) but defer generation of the complete array, returning individual values or a range of values on demand. My question is, whether the following generator expression is appropriate or not? Or whether linspace will fill its array (and therefore overflow memory)? g = (val for val in linspace(start, stop, very_big_num)) So will this work? I realize that it is possible for me to write my own version of linspace as a generator--but hoping I don't have to. If the above generator expression doesn't work, are there any other straight forward coding patterns that will do what I'm looking for? Thanks for the help. My best, Jo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dr Joseph Anderson Lecturer in Music School of Arts and New Media University of Hull, Scarborough Campus, Scarborough, North Yorkshire, YO11 3AZ, UK T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3490 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
From lbolla at gmail.com  Wed Mar 19 06:51:48 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Wed, 19 Mar 2008 11:51:48 +0100
Subject: [SciPy-user] Newbie, deferring large array generation
In-Reply-To:
References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com>
	<3d375d730803111722o1869d73fw2747724ac05a30b9@mail.gmail.com>
Message-ID: <80c99e790803190351x720730faif552ffb4f32f1f68@mail.gmail.com>

you can maybe use xrange:

dx = (stop - start) / float(very_big_num - 1)
xa = xrange(very_big_num)
for ia in xa:
    a = start + ia * dx
    # use a...

hth.
L.

On Wed, Mar 19, 2008 at 10:42 AM, Joseph Anderson wrote:
> Hello All,
>
> I have a problem in that on occasion I need to generate very large arrays
> that won't fit into memory. For what I'm doing, calling linspace is
> convenient, so I'd like to be able to use it. A very simple example of my
> problem is the case:
>
> a = linspace(start, stop, very_big_num)
>
> As a python/scipy newbie, I'm just becoming familiar with appropriate
> coding patterns. One thought was whether it is possible to use generators to
> defer the array creation. I'd like to be able to use linspace (pleasantly
> simple) but defer generation of the complete array, returning individual
> values or a range of values on demand. My question is, whether the following
> generator expression is appropriate or not? Or whether linspace will fill
> its array (and therefore overflow memory)?
>
> g = (val for val in linspace(start, stop, very_big_num))
>
> So will this work?
>
> I realize that it is possible for me to write my own version of linspace
> as a generator--but hoping I don't have to.
>
> If the above generator expression doesn't work, are there any other
> straight forward coding patterns that will do what I'm looking for?
>
> Thanks for the help.
>
> My best,
> Jo
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dr Joseph Anderson
> Lecturer in Music
>
> School of Arts and New Media
> University of Hull, Scarborough Campus,
> Scarborough, North Yorkshire, YO11 3AZ, UK
>
> T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

--
Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/

From peridot.faceted at gmail.com  Wed Mar 19 10:27:02 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 19 Mar 2008 10:27:02 -0400
Subject: [SciPy-user] Newbie, deferring large array generation
In-Reply-To:
References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com>
	<3d375d730803111722o1869d73fw2747724ac05a30b9@mail.gmail.com>
Message-ID:

On 19/03/2008, Joseph Anderson wrote:
> I have a problem in that on occasion I need to generate very large arrays that won't fit into memory. For what I'm doing, calling linspace is convenient, so I'd like to be able to use it.
> A very simple example of my problem is the case:
>
> a = linspace(start, stop, very_big_num)
>
> As a python/scipy newbie, I'm just becoming familiar with appropriate coding patterns. One thought was whether it is possible to use generators to defer the array creation. I'd like to be able to use linspace (pleasantly simple) but defer generation of the complete array, returning individual values or a range of values on demand. My question is, whether the following generator expression is appropriate or not? Or whether linspace will fill its array (and therefore overflow memory)?
>
> g = (val for val in linspace(start, stop, very_big_num))
>
> So will this work?
>
> I realize that it is possible for me to write my own version of linspace as a generator--but hoping I don't have to.
>
> If the above generator expression doesn't work, are there any other straight forward coding patterns that will do what I'm looking for?

This is tricky. numpy is designed to be convenient and efficient by operating on entire arrays at once. This more or less requires that they fit into memory. When you start dealing with really enormous arrays, I'm afraid you're probably going to have to make your code a bit more complicated.

You could operate on your array one element at a time, but that will be very slow (you'll lose all numpy's benefits). The way I would handle this would be to operate on your data in "blocks" of a thousand or a million elements (experiment to see what size is fastest), and use a for loop to chop up your input arrays into blocks. Unfortunately there's no version of linspace that does this for you, but it's not too hard to write one.

If you want to iterate over a huge dataset on disk, you might look into arrayterator: http://pypi.python.org/pypi/arrayterator/0.2.8

Good luck,
Anne

From warren.weckesser at gmail.com  Wed Mar 19 11:48:51 2008
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 19 Mar 2008 11:48:51 -0400
Subject: [SciPy-user] Runge-Kutta ODE vs. built-in tools
Message-ID: <114880320803190848y470655e5s54f5fc2da3c03459@mail.gmail.com>

Zane (and everyone else),

You might be interested in trying VFGEN (www.warrenweckesser.net/vfgen). I just released version 2.2.0. VFGEN is a program that takes an XML description of a vector field (i.e. differential equations) and generates code for a wide variety of numerical software packages, including SciPy.

A simple example is the damped pendulum:

    θ' = v
    v' = -b*v/(m*L^2) - (g/L)*sin(θ)

An XML vector field file for this system is

----------------------------------------------------------------
[the XML listing was lost when this message was archived]
----------------------------------------------------------------

The VFGEN scipy command reads this file and generates this Python code:

----------------------------------------------------------------
#
# pendulum.py
#
# Python file for the vector field named: pendulum
# This file implements the vector field as
# the function pendulum_vf. This function
# can be used with the SciPy ODEINT function.
#
# This file was generated by the program VFGEN (Version:2.2.0)
# Generated on 17-Mar-2008 at 00:48
#

from math import *
import numpy

#
# The vector field.
#
def pendulum_vf(y_, t_, p_):
    """
    The vector field function for the vector field "pendulum"
    """
    Pi = pi
    theta = y_[0]
    v = y_[1]
    g = p_[0]
    b = p_[1]
    L = p_[2]
    m = p_[3]
    f_ = numpy.zeros((2,))
    f_[0] = v
    f_[1] = -sin(theta)/L*g-1.0/m/(L*L)*b*v
    return f_

#
# The Jacobian.
#
def pendulum_jac(y_, t_, p_):
    """
    The Jacobian of the vector field "pendulum"
    """
    Pi = pi
    theta = y_[0]
    v = y_[1]
    g = p_[0]
    b = p_[1]
    L = p_[2]
    m = p_[3]
    # Create the Jacobian matrix:
    jac_ = numpy.zeros((2,2))
    jac_[0,1] = 1.0
    jac_[1,0] = -1.0/L*cos(theta)*g
    jac_[1,1] = -1.0/m/(L*L)*b
    return jac_

#
# User function: energy
#
def pendulum_energy(y_, t_, p_):
    """
    The user-defined function "energy" for the vector field "pendulum"
    """
    Pi = pi
    theta = y_[0]
    v = y_[1]
    g = p_[0]
    b = p_[1]
    L = p_[2]
    m = p_[3]
    return m*(L*L)*(v*v)/2.0-m*L*cos(theta)*g
----------------------------------------------------------------

VFGEN can also generate a simple command-line solver written in Python that uses the above function to solve the initial value problem. Take a look here:
http://www.warrenweckesser.net/vfgen/menu_scipy.html

If you decide you would like to use a different package (e.g. MATLAB, Scilab, C with CVODE or with the GNU Scientific Library, Fortran with LSODA or RADAU5, etc), another one-line VFGEN command will generate the code that you need.

Best regards,

Warren Weckesser

From J.Anderson at hull.ac.uk  Wed Mar 19 15:31:07 2008
From: J.Anderson at hull.ac.uk (Joseph Anderson)
Date: Wed, 19 Mar 2008 19:31:07 -0000
Subject: [SciPy-user] Newbie, deferring large array generation
References: <3d375d730803111047y76cfb9f8r9aa5b9238bb42734@mail.gmail.com><3d375d730803111722o1869d73fw2747724ac05a30b9@mail.gmail.com>
Message-ID:

Thanks all for the discussion and comment. One of the tricks I'm finding is to think pythonically and numpirically. I'll take advice and go ahead and rig up something that returns blocks.

Thanks for the help.

My best,
Jo

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----Original Message-----
From: scipy-user-bounces at scipy.org on behalf of Anne Archibald
Sent: Wed 03/19/2008 2:27 PM
To: SciPy Users List
Subject: Re: [SciPy-user] Newbie, deferring large array generation

On 19/03/2008, Joseph Anderson wrote:
> I have a problem in that on occasion I need to generate very large arrays that won't fit into memory. For what I'm doing, calling linspace is convenient, so I'd like to be able to use it. A very simple example of my problem is the case:
>
> a = linspace(start, stop, very_big_num)
>
> As a python/scipy newbie, I'm just becoming familiar with appropriate coding patterns. One thought was whether it is possible to use generators to defer the array creation. I'd like to be able to use linspace (pleasantly simple) but defer generation of the complete array, returning individual values or a range of values on demand. My question is, whether the following generator expression is appropriate or not? Or whether linspace will fill its array (and therefore overflow memory)?
>
> g = (val for val in linspace(start, stop, very_big_num))
>
> So will this work?
>
> I realize that it is possible for me to write my own version of linspace as a generator--but hoping I don't have to.
>
> If the above generator expression doesn't work, are there any other straight forward coding patterns that will do what I'm looking for?

This is tricky. numpy is designed to be convenient and efficient by operating on entire arrays at once.
This more or less requires that they fit into memory. When you start dealing with really enormous arrays, I'm afraid you're probably going to have to make your code a bit more complicated. You could operate on your array one element at a time, but that will be very slow (you'll lose all numpy's benefits). The way I would handle this would be to operate on your data in "blocks" of a thousand or a million elements (experiment to see what size is fastest), and use a for loop to chop up your input arrays into blocks. Unfortunately there's no version of linspace that does this for you, but it's not too hard to write one.

If you want to iterate over a huge dataset on disk, you might look into arrayterator: http://pypi.python.org/pypi/arrayterator/0.2.8

Good luck,
Anne

From dmitrey.kroshko at scipy.org  Wed Mar 19 15:50:51 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Wed, 19 Mar 2008 21:50:51 +0200
Subject: [SciPy-user] to openopt users
Message-ID: <47E16E9B.1070004@scipy.org>

hi all,

I decided to create a list of openopt users similar to TOMOPT's (i.e. the one mentioned on the OO start page). Could anyone provide some info like the one from here?
http://scipy.org/scipy/scikits/wiki/OpenOptUsers

It would increase my chances of obtaining GSoC2008 (student propositions start in some days) or maybe another one.

Regards, D.

From kwmsmith at gmail.com  Wed Mar 19 16:17:21 2008
From: kwmsmith at gmail.com (Kurt Smith)
Date: Wed, 19 Mar 2008 15:17:21 -0500
Subject: [SciPy-user] Reference for scipy.stats.kurtosistest
Message-ID:

Hello scipy-users (Specifically Robert Kern):

I'm making extensive use of scipy.stats.kurtosis, and noticed 'kurtosistest' which calculates the Z-score, and 2-tail Z-probability. The source for this function is opaque to me -- what is being calculated, and how? Is there a reference to which you can refer me?

Thanks for your time,

Kurt

From robert.kern at gmail.com  Wed Mar 19 16:45:39 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 19 Mar 2008 15:45:39 -0500
Subject: [SciPy-user] Reference for scipy.stats.kurtosistest
In-Reply-To:
References:
Message-ID: <3d375d730803191345n4c3385d2ydb9a45dc1f0c2039@mail.gmail.com>

On Wed, Mar 19, 2008 at 3:17 PM, Kurt Smith wrote:
> Hello scipy-users (Specifically Robert Kern):
>
> I'm making extensive use of scipy.stats.kurtosis, and noticed
> 'kurtosistest' which calculates the Z-score, and 2-tail Z-probability.
> The source for this function is opaque to me -- what is being
> calculated, and how? Is there a reference to which you can refer me?

I don't know the details, but I believe the basic outline is this: Gaussian PDFs have a particular kurtosis value, 3 (this is with fisher=False; Fisher's kurtosis just subtracts 3 from the notional value to talk about *excess* kurtosis). N samples drawn from a Gaussian distribution will have an empirical kurtosis drawn from a particular sampling distribution peaking at 3*(N-1)/(N+1). A suitable transformation to that sampling distribution turns it into a standard Gaussian distribution (sigma=1).
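The quantities in that outline can be poked at directly with scipy.stats; a minimal sketch, where N = 1000 and the standard-normal draw are arbitrary illustrative choices:

----------------------------------------------------------------
import numpy
from scipy import stats

N = 1000
x = numpy.random.standard_normal(N)

# Empirical kurtosis of Gaussian samples; fisher=False gives the
# "notional" value, which should come out near 3*(N-1)/(N+1).
k = stats.kurtosis(x, fisher=False)

# kurtosistest turns the empirical kurtosis into a Z-score and the
# corresponding 2-tail probability.
z, p = stats.kurtosistest(x)
----------------------------------------------------------------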
Apply that transformation to the empirical kurtosis obtained from the dataset and you get a Z-score. A Z-score is just the number of sigmas away from the mean you are on a Gaussian probability distribution. Now, if you take this Z and add up the area under the standard Gaussian <-|Z| and >+|Z|, you get the 2-tail Z probability. Basically, this is the probability of getting an empirical kurtosis value at least as extreme (in either direction) as the value that you actually got if you took N samples from an actual Gaussian distribution. Does that help? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwmsmith at gmail.com Wed Mar 19 17:47:31 2008 From: kwmsmith at gmail.com (Kurt Smith) Date: Wed, 19 Mar 2008 16:47:31 -0500 Subject: [SciPy-user] Reference for scipy.stats.kurtosistest In-Reply-To: <3d375d730803191345n4c3385d2ydb9a45dc1f0c2039@mail.gmail.com> References: <3d375d730803191345n4c3385d2ydb9a45dc1f0c2039@mail.gmail.com> Message-ID: On Wed, Mar 19, 2008 at 3:45 PM, Robert Kern wrote: > > On Wed, Mar 19, 2008 at 3:17 PM, Kurt Smith wrote: > > Hello scipy-users (Specifically Robert Kern): > > > > I'm making extensive use of scipy.stats.kurtosis, and noticed > > 'kurtosistest' which calculates the Z-score, and 2-tail Z-probability. > > The source for this function is opaque to me -- what is being > > calculated, and how? Is there a reference to which you can refer me? > > I don't know the details, but I believe the basic outline is this: > Gaussian PDFs have a particular kurtosis value, 3 (this is with > fisher=False; Fisher's kurtosis just subtracts 3 from the notional > value to talk about *excess* kurtosis). N samples drawn from a > Gaussian distribution will have an empirical kurtosis drawn from a > particular sampling distribution peaking at 3*(N-1)/(N+1). A suitable > transformation to that sampling distribution turns it into a standard > Gaussian distribution (sigma=1). > > Apply that transformation to the empirical kurtosis obtained from the > dataset and you get a Z-score. A Z-score is just the number of sigmas > away from the mean you are on a Gaussian probability distribution. > Now, if you take this Z and add up the area under the standard > Gaussian <-|Z| and >+|Z|, you get the 2-tail Z probability. Basically, > this is the probability of getting an empirical kurtosis value at > least as extreme (in either direction) as the value that you actually > got if you took N samples from an actual Gaussian distribution. > > Does that help? Fantastic, thank you very much. This has got to be one of the most helpful mailing lists I've come across -- I ask for a reference, and you provide the explanation instead. Thanks again, Kurt From rob.clewley at gmail.com Wed Mar 19 21:07:44 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 19 Mar 2008 21:07:44 -0400 Subject: [SciPy-user] JOB: Short-term programming (consultant) work Message-ID: Dear SciPy users, The developers of the PyDSTool dynamical systems software project have money to hire a Python programmer on a short-term, per-task basis as a technical consultant. The work can be done remotely and will be paid after the completion of project milestones. The work must be completed by July, when the current funds expire. 
Prospective consultants could be professionals or students and will have proven experience and interest in working with NumPy/SciPy, scientific computation in general, and interfacing Python with C and Fortran codes. Detailed work plan, schedule, and project specs are negotiable (if you are talented and experienced we would like your input). The rate of pay is commensurate with experience, and may be up to $45/hr or $1000 per project milestone (no fringe benefits), according to an agreed measure of satisfactory product performance. There is a strong possibility of longer term work depending on progress and funding availability. PyDSTool (pydstool.sourceforge.net) is a multi-platform, open-source environment offering a range of library tools and utils for research in dynamical systems modeling for scientists and engineers. As a research project, it presently contains prototype code that we would like to improve and better integrate into our long-term vision and with other emerging (open-source) software tools. Depending on interest and experience, current projects might include: * Conversion and "pythonification" of old Matlab code for model analysis * Improved interface for legacy C and Fortran code (numerical integrators) via some combination of SWIG, Scons, automake * Overhaul of support for symbolic processing (probably by an interface to SymPy) For more details please contact Dr. Rob Clewley (rclewley) at (@) the Department of Mathematics, Georgia State University (gsu.edu). -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From justus.schwabedal at gmx.de Thu Mar 20 04:20:51 2008 From: justus.schwabedal at gmx.de (Justus Schwabedal) Date: Thu, 20 Mar 2008 09:20:51 +0100 Subject: [SciPy-user] threading Message-ID: <1C20B453-3FC1-4424-9DE3-949F4E57093B@gmx.de> Hi List! I'm looking for a way to distribute jobs among my 4 processors using threading, however top doesn't show me processor activity of 400 procent but something around 150%. Any suggestions? Fork, pthreads? I feel threading is magic; maybe there is a good scientific computing tutorial efficient threading. Yours, Justus From mforbes at physics.ubc.ca Thu Mar 20 05:36:17 2008 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Thu, 20 Mar 2008 02:36:17 -0700 Subject: [SciPy-user] Splines for even functions? Message-ID: Does anyone have a suggestion on how to force a smoothing spline to be an even function f(x) = f(-x)? I am trying to interpolate the following data representing an even function and I need a smooth but even spline. The problem is that, in the process of smoothing, the knots are not chosen evenly so the resulting spline is not even. Increasing the tolerances ultimately forces more knots to be added resulting in an even spline, but it is difficult to automate this process. from scipy import linspace, interpolate, array X = array([-1. , -0.65016502, -0.58856235, -0.26903553, -0.17370892, -0.10011001, 0. , 0.10011001, 0.17370892, 0.26903553, 0.58856235, 0.65016502, 1.]) Y = array([ 1. 
, 0.62928599, 0.5797223 , 0.39965815, 0.36322694, 0.3508061 ,
0.35214793, 0.3508061 , 0.36322694, 0.39965815, 0.5797223 ,
0.62928599, 1.])
E = array([ 1.00000000e-12, 1.45164012e-03, 2.04367440e-03,
            2.34266209e-03, 1.64542215e-03, 2.21561749e-03,
            3.14980262e-03, 2.21561749e-03, 1.64542215e-03,
            2.34266209e-03, 2.04367440e-03, 1.45164012e-03,
            1.00000000e-12])

f = interpolate.UnivariateSpline(X, Y, 1./E, k=3)

x = linspace(-1, 1, 10)
max(abs(f(x) - f(-x)))
0.00066571302580914482

Thanks,
Michael.

From mforbes at physics.ubc.ca  Thu Mar 20 06:19:53 2008
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Thu, 20 Mar 2008 03:19:53 -0700
Subject: [SciPy-user] Splines for even functions?
In-Reply-To:
References:
Message-ID: <0A72C1DA-CB83-4919-874C-F2128AF3C842@physics.ubc.ca>

One simple workaround is to explicitly form an even function:

g = lambda x: (f(x) + f(-x))/2.0

Since this is linear, it is quite easy to provide derivatives etc., but it would still be nice to have this in the spline itself.

Michael.

On 20 Mar 2008, at 2:36 AM, Michael McNeil Forbes wrote:
> Does anyone have a suggestion on how to force a smoothing spline to
> be an even function f(x) = f(-x)?

From millman at berkeley.edu  Thu Mar 20 07:15:35 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 20 Mar 2008 04:15:35 -0700
Subject: [SciPy-user] NumPy (1.0.5) DocDay (Fri, Mar. 21)
Message-ID:

Hello,

As I mentioned yesterday, I am holding a NumPy DocDay on Friday, March 21st. I am in Paris near the RER B or C Saint-Michel station (with Stefan van der Walt, Matthieu Brucher, and Gael Varoquaux). If you are in the area and want to join us just send me an email by the end of tonight and I will let you know where we are meeting.

If you can't stop by, but are still willing to help out, we will convene on IRC during the day on Friday (9:30am-?? GMT+1). Come join us at irc.freenode.net (channel scipy). We may update the list of priorities, which is still located on the NumPy Trac Wiki:
http://projects.scipy.org/scipy/numpy/wiki/DocDays

While I am hoping to have everyone focus on NumPy, I would be happy if anyone wants to work on SciPy documentation as well:
http://projects.scipy.org/scipy/scipy/wiki/DocDays

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From gary.pajer at gmail.com  Thu Mar 20 10:22:49 2008
From: gary.pajer at gmail.com (Gary Pajer)
Date: Thu, 20 Mar 2008 10:22:49 -0400
Subject: [SciPy-user] the physical frequency of FFT
In-Reply-To: <889df5f00803181557t1112f160n6d419248e74d66a7@mail.gmail.com>
References: <889df5f00803181509r3a39f9dbu2b04e031af898904@mail.gmail.com>
	<3d375d730803181530y13962068l55ee04b63b080228@mail.gmail.com>
	<889df5f00803181557t1112f160n6d419248e74d66a7@mail.gmail.com>
Message-ID: <88fe22a0803200722t50f0d692oe1c991cf5838c129@mail.gmail.com>

On Tue, Mar 18, 2008 at 6:57 PM, Forrest Sheng Bao wrote:
>
> On Tue, Mar 18, 2008 at 5:30 PM, Robert Kern wrote:
> >
> > On Tue, Mar 18, 2008 at 5:09 PM, Forrest Sheng Bao wrote:
> > > Hi,
> > >
> > > I have some other questions about the fft function. Suppose I have N
> > > sampling points and the sampling rate is Fs. I performed an N-point FFT,
> > > using the code fft(my_list_of_data).
> > >
> > > Doesn't the k-th point (0 < k < N) correspond to the physical frequency
> > > (k*Fs)/(2*N)?
>
> I saw an example at Scipy Getting Started guide:
> http://www.scipy.org/Getting_Started
> Can you take a look at the demo In [2] to In [5].
It seems each frequency > bin occupies a bandwidth of Fs/N and those frequency bins are ordered from 0 > to N-1. > > > > > > > It follows the conventional FFT packing: > > > > In [9]: numpy.fft.fftfreq? > > Type: function > > Base Class: > > Namespace: Interactive > > File: /Users/rkern/svn/numpy/numpy/fft/helper.py > > Definition: numpy.fft.fftfreq(n, d=1.0) > > Docstring: > > fftfreq(n, d=1.0) -> f > > > > DFT sample frequencies > > > > The returned float array contains the frequency bins in > > cycles/unit (with zero at the start) given a window length n and a > > sample spacing d: > > > > f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even > > f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n) if n is odd > > Ok, each point in the result of FFT correspond to the frequency bin > calculated by fftfreq. So the k-th point of the first half points of fft > output, indexing from 0 to N/2, correspond the physical frequency k*Fs/N. > > Thus, if I wanna consider a frequency band 2-4Hz while the sampling rate is > 100Hz and I have 1000 sampled points, then I just need to consider the > results between the 20th and the 40th points. Is my understanding right? Yes. You can help your understanding by recognizing that the points in frequency space represent points on a discrete lattice. They do not represent the centers of bins, despite the wording in many textbooks (and the docstring, evidently). I've seen much confusion caused by the words "bin" and "leakage". (Death to both words!) One might think that if you try to represent a pure sine whose period is incommensurate with your sampling period (and hence whose frequency falls in between frequency-space points) would lead to a fft whose frequency lies in the nearby "bin" or fills the "bin" somehow. Instead, one observes "leakage". Very bad words, IMHO. -g From eric at enthought.com Thu Mar 20 12:08:55 2008 From: eric at enthought.com (eric at enthought.com) Date: Thu, 20 Mar 2008 11:08:55 -0500 (CDT) Subject: [SciPy-user] GOOD DAY Message-ID: <20080320160855.26D7EC7C08F@new.scipy.org> Your mail 192.168.1.198:3905->216.62.213.231:25 contains contaminated file _Fromeric_enthought.com__Dateeric_enthought.com__SubjGOOD_DAY_/_pbe.scr_ with virus Net-Worm.Win32.Mytob.q,so it is dropped. From peridot.faceted at gmail.com Thu Mar 20 13:10:59 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 20 Mar 2008 18:10:59 +0100 Subject: [SciPy-user] threading In-Reply-To: <1C20B453-3FC1-4424-9DE3-949F4E57093B@gmx.de> References: <1C20B453-3FC1-4424-9DE3-949F4E57093B@gmx.de> Message-ID: On 20/03/2008, Justus Schwabedal wrote: > Hi List! > I'm looking for a way to distribute jobs among my 4 processors using > threading, however top doesn't show me processor activity of 400 > procent but something around 150%. Any suggestions? Fork, pthreads? I > feel threading is magic; maybe there is a good scientific computing > tutorial efficient threading. > Yours, Justus This is actually a common question on the list. There is (now) a short document on the wiki that talks briefly about some of the issues, and some of the solutions: http://www.scipy.org/ParallelProgramming Anne From aisaac at american.edu Thu Mar 20 17:29:21 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Mar 2008 17:29:21 -0400 Subject: [SciPy-user] to openopt users In-Reply-To: <47E16E9B.1070004@scipy.org> References: <47E16E9B.1070004@scipy.org> Message-ID: On Wed, 19 Mar 2008, dmitrey apparently wrote: > could anyone provide some info like the one from here? 
> http://scipy.org/scipy/scikits/wiki/OpenOptUsers Do you mean that you would like OpenOpt users to let you know by email if you can list them as users on the above page? Do you want individual users or just institutions? Cheers, Alan Isaac From labbuhl at hotmail.com Thu Mar 20 18:40:49 2008 From: labbuhl at hotmail.com (Lee Abbuhl) Date: Thu, 20 Mar 2008 18:40:49 -0400 Subject: [SciPy-user] GOOD DAY In-Reply-To: <20080320160855.26D7EC7C08F@new.scipy.org> References: <20080320160855.26D7EC7C08F@new.scipy.org> Message-ID: Huh? > From: eric at enthought.com > To: scipy-user at scipy.org > Date: Thu, 20 Mar 2008 11:08:55 -0500 > Subject: [SciPy-user] GOOD DAY > > Your mail 192.168.1.198:3905->216.62.213.231:25 contains contaminated file _Fromeric_enthought.com__Dateeric_enthought.com__SubjGOOD_DAY_/_pbe.scr_ with virus Net-Worm.Win32.Mytob.q,so it is dropped. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Shed those extra pounds with MSN and The Biggest Loser! http://biggestloser.msn.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Mar 20 18:48:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Mar 2008 17:48:44 -0500 Subject: [SciPy-user] GOOD DAY In-Reply-To: References: <20080320160855.26D7EC7C08F@new.scipy.org> Message-ID: <3d375d730803201548g5da1f58eue99a207812c357fa@mail.gmail.com> On Thu, Mar 20, 2008 at 5:40 PM, Lee Abbuhl wrote: > > Huh? A spammer is distributing virus-laden emails. He is spoofing the Sender: header to make it look like they are coming from scipy-user at scipy.org. They are not. You are seeing the virus-laden email being rejected. Just ignore these messages. I mark them as spam so that GMail's spam filters will usually filter them out. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dmitrey.kroshko at scipy.org Fri Mar 21 12:03:23 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 21 Mar 2008 18:03:23 +0200 Subject: [SciPy-user] to openopt users Message-ID: <47E3DC4B.9070601@scipy.org> Sorry, I just pressed "reply" and hadn't noticed that it was replied to Alan G. Isaac only Alan G Isaac wrote: > On Wed, 19 Mar 2008, dmitrey apparently wrote: > >> could anyone provide some info like the one from here? >> http://scipy.org/scipy/scikits/wiki/OpenOptUsers > > Do you mean that you would like OpenOpt users to > let you know by email if you can list them as > users on the above page? Yes > Do you want individual > users or just institutions? > Any > Cheers, > Alan Isaac > Regards, D. From didier.rano at gmail.com Fri Mar 21 13:10:27 2008 From: didier.rano at gmail.com (didier rano) Date: Fri, 21 Mar 2008 13:10:27 -0400 Subject: [SciPy-user] Granulate array Message-ID: Hi, I am new with numpy and scipy. But I have a problem to find a good way to compute my array. I have a array with 2 rows. Example [[5, 8, 9, 10], [1, 2, 3, 5,6]] And I need to manage the number of records without touch the "behaviour" of my array. Example: Reduce my to 2 columns => [[5, 10],[1,6]] Who knows which sort of algorithms I need to make it ? 
Regards

--
Didier Rano
didier.rano at gmail.com
http://www.jaxtr.com/didierrano

From daniele.dm at gmail.com  Fri Mar 21 16:09:52 2008
From: daniele.dm at gmail.com (Daniele Di Mauro)
Date: Fri, 21 Mar 2008 21:09:52 +0100
Subject: [SciPy-user] Sobel filter, magnitude and gradient direction
Message-ID:

Hi folks,
it's the first time I write on this mailing list; I hope I chose the right one. Yesterday I started a small project for university and I need some help: I have to calculate the Sobel edge magnitude and gradient direction of an image, and I haven't understood if I'm doing it right. This is the code:

import Image
import scipy
import scipy.ndimage

data = Image.open("image.jpg")
image_magnitude = scipy.ndimage.filters.generic_gradient_magnitude(data,
    scipy.ndimage.filters.sobel)
output = Image.fromstring("RGB",(320,210),image_magnitude.tostring())
output.save("imagem.jpg","JPEG")

I've not looked at the gradient direction part yet, but the result is quite strange: I cannot see edges. I tried also this:

import Image
import scipy
import scipy.ndimage

data = Image.open("image.jpg")
image_magnitude = scipy.ndimage.filters.sobel(data)
output = Image.fromstring("RGB",(320,210),image_magnitude.tostring())
output.save("imagem.jpg","JPEG")

but it doesn't convince me either, because it looks quite different from the result of gimp's built-in version or of using gimp matrix convolution. Thanks in advance for your help

Daniele

From rwagner at physics.ucsd.edu  Fri Mar 21 16:22:10 2008
From: rwagner at physics.ucsd.edu (Rick Wagner)
Date: Fri, 21 Mar 2008 13:22:10 -0700
Subject: [SciPy-user] Sobel filter, magnitude and gradient direction
In-Reply-To:
References:
Message-ID:

Hi Daniele,

> import Image
> import scipy
> import scipy.ndimage
>
> data = Image.open("image.jpg")
> image_magnitude = scipy.ndimage.filters.generic_gradient_magnitude(data, scipy.ndimage.filters.sobel)
> output = Image.fromstring("RGB",(320,210),image_magnitude.tostring())
> output.save("imagem.jpg","JPEG")

You might want to plot the resulting array before saving the data, to make sure that it's the filter, and not the data. You can do this using matplotlib's imshow command.

--Rick

http://www.scipy.org/Cookbook/Matplotlib

-------------------------------------------------------------------------
Rick Wagner, Graduate Student Researcher
UCSD Physics
9500 Gilman Drive
La Jolla, CA 92093-0424
Email: rwagner at physics.ucsd.edu
WWW: http://lca.ucsd.edu/projects/rpwagner
(858) 822-4784 Phone
-------------------------------------------------------------------------
Measuring programming progress by lines of code is like measuring
aircraft building progress by weight. --Bill Gates
-------------------------------------------------------------------------

From discerptor at gmail.com  Fri Mar 21 16:30:55 2008
From: discerptor at gmail.com (Joshua Lippai)
Date: Fri, 21 Mar 2008 13:30:55 -0700
Subject: [SciPy-user] Running scipy.test on recent SVN build
Message-ID: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com>

Hello all,

Last night I tried installing scipy (on OS X 10.5.2) and I was surprised to find that nose is now required to run scipy.test.
I installed it successfully through easy_install, but now it still won't run the test, instead producing the following error: >>> scipy.test(1,10) Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", line 115, in test argv = self._test_argv(label, verbose, extra_argv) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", line 98, in _test_argv raise TypeError, 'Selection label should be a string' TypeError: Selection label should be a string Has anyone experienced this as well or now what's going on and how to fix it so I can run scipy.test? Much thanks in advance. Josh From robert.kern at gmail.com Fri Mar 21 16:37:07 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Mar 2008 15:37:07 -0500 Subject: [SciPy-user] Sobel filter, magnitude and gradient direction In-Reply-To: References: Message-ID: <3d375d730803211337w283bb8dck4dec68636d46ca00@mail.gmail.com> On Fri, Mar 21, 2008 at 3:09 PM, Daniele Di Mauro wrote: > Hi folks, > it's the first time i write on this mailing list, i hope i chose the right > one. Yesterday i started a small project for university and i need some > help: > I've to calculate the sobel edge magnitude and gradient direction of an > image, and i haven't undestood if i'm doing right, btw this is the code: > > import Image > import scipy > import scipy.ndimage > > data = Image.open("image.jpg") > image_magnitude = scipy.ndimage.filters.generic_gradient_magnitude(data, > scipy.ndimage.filters.sobel) > output = Image.fromstring("RGB",(320,210),image_magnitude.tostring()) > output.save("imagem.jpg","JPEG") > > I've not look at the gradient direction part yet, but the result it's quite > strange, i cannot see edges. The "nd" part of ndimage means that it applies to N-dimensional images. In your case, you are giving it an array of (320,210,3). It is treating that last color axis as just another axis. It is not applying itself across each color channel separately. > I tried also this: > > import Image > import scipy > import scipy.ndimage > > data = Image.open("image.jpg") > image_magnitude = scipy.ndimage.filters.sobel(data) > output = Image.fromstring("RGB",(320,210),image_magnitude.tostring()) > output.save("imagem.jpg","JPEG") > > but it doesn't convice me either coz it looks quite different from the > result of gimp built-in version or using gimp matrix convolution. Thanx in > advance for your help The default axis is -1, so the Sobel operator is being applied along the color axis, not one of the X or Y axes. Use the parameter axis=1 and axis=0 to get those, respectively. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 21 16:38:12 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Mar 2008 15:38:12 -0500 Subject: [SciPy-user] Running scipy.test on recent SVN build In-Reply-To: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> Message-ID: <3d375d730803211338je9917c6l407de669f0d3940@mail.gmail.com> On Fri, Mar 21, 2008 at 3:30 PM, Joshua Lippai wrote: > Hello all, > > Last night I tried installing scipy (on OS X 10.5.2) and I was > surprised to find that nose is now required to run scipy.test. 
I > installed it successfully through easy_install, but now it still won't > run the test, instead producing the following error: > > >>> scipy.test(1,10) > Traceback (most recent call last): > File "", line 1, in > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", > line 115, in test > argv = self._test_argv(label, verbose, extra_argv) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", > line 98, in _test_argv > raise TypeError, 'Selection label should be a string' > TypeError: Selection label should be a string > > Has anyone experienced this as well or now what's going on and how to > fix it so I can run scipy.test? Much thanks in advance. In [6]: scipy.test? Type: instancemethod Base Class: String Form: > Namespace: Interactive File: /Users/rkern/svn/scipy/scipy/testing/nosetester.py Definition: scipy.test(self, label='fast', verbose=1, extra_argv=None, doctests=False) Docstring: Run tests for module using nose Parameters ---------- label : {'fast', 'full', '', attribute identifer} Identifies test to run. This can be a string to pass to the nosetests executable with the'-A' option, or one of several special values. Special values are: 'fast' - the default - which corresponds to nosetests -A option of 'not slow'. 'full' - fast (as above) and slow test as in no -A option to nosetests - same as '' None or '' - run all tests attribute_identifier - string passed directly to nosetests as '-A' verbose : integer verbosity value for test outputs, 1-10 extra_argv : list List with any extra args to pass to nosetests doctests : boolean If True, run doctests in module, default False -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 21 17:34:25 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Mar 2008 16:34:25 -0500 Subject: [SciPy-user] Granulate array In-Reply-To: References: Message-ID: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com> On Fri, Mar 21, 2008 at 12:10 PM, didier rano wrote: > Hi, > > I am new with numpy and scipy. But I have a problem to find a good way to > compute my array. > I have a array with 2 rows. Example > [[5, 8, 9, 10], [1, 2, 3, 5,6]] Note that this cannot be a valid numpy array since the second row has more elements than the first. I will move forward with the assumption that you did not intend the "5" element. > And I need to manage the number of records without touch the "behaviour" of > my array. Example: > Reduce my to 2 columns => > [[5, 10],[1,6]] > > Who knows which sort of algorithms I need to make it ? In [3]: a = array([[5, 8, 9, 10], [1, 2, 3, 6]]) In [4]: a Out[4]: array([[ 5, 8, 9, 10], [ 1, 2, 3, 6]]) In [5]: a[:,[0,-1]] Out[5]: array([[ 5, 10], [ 1, 6]]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From discerptor at gmail.com Fri Mar 21 19:58:31 2008 From: discerptor at gmail.com (Joshua Lippai) Date: Fri, 21 Mar 2008 16:58:31 -0700 Subject: [SciPy-user] Running scipy.test on recent SVN build In-Reply-To: <3d375d730803211338je9917c6l407de669f0d3940@mail.gmail.com> References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> <3d375d730803211338je9917c6l407de669f0d3940@mail.gmail.com> Message-ID: <9911419a0803211658i34d0c483l21a971633b1d050b@mail.gmail.com> Alright, I tried running it without any argument in the parentheses, and tests run, but with a failure and lots of errors. >>> scipy.test() /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/splinalg/__init__.py:3: DeprecationWarning: scipy.splinalg has moved to scipy.sparse.linalg warn('scipy.splinalg has moved to scipy.sparse.linalg', DeprecationWarning) .../Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................................/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ........................................... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ./Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ..................../Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/utils.py:111: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** ..........................................NO ATLAS INFO AVAILABLE ......................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...............................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.23518201169e-08 ...Result may be inaccurate, approximate err = 7.27595761418e-12 .....SSS........................................................................................................................................................................................................................................................................................................................................................................................................./Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/_segmenter.py:35: UserWarning: The segmentation code is under heavy development and therefore the public API will change in the future. The NIPY group is actively working on this code, and has every intention of generalizing this for the Scipy community. Use this module minimally, if at all, until it this warning is removed. warnings.warn(_msg, UserWarning) EEE............................................SSSSSSSSSSS..... 
_naupd: Number of update iterations taken ----------------------------------------- 1 - 1: 12 _naupd: Number of wanted "converged" Ritz values ------------------------------------------------ 1 - 1: 4 _naupd: Real part of the final Ritz values ------------------------------------------ 1 - 4: 1.033D+00 7.746D-01 5.164D-01 2.582D-01 _naupd: Imaginary part of the final Ritz values ----------------------------------------------- 1 - 4: 0.000D+00 0.000D+00 0.000D+00 0.000D+00 _naupd: Associated Ritz estimates --------------------------------- 1 - 4: 3.197D-15 3.493D-18 2.179D-23 2.140D-26 ============================================= = Nonsymmetric implicit Arnoldi update code = = Version Number: 2.4 = = Version Date: 07/31/96 = ============================================= = Summary of timing statistics = ============================================= Total number update iterations = 12 Total number of OP*x operations = 55 Total number of B*x operations = 0 Total number of reorthogonalization steps = 54 Total number of iterative refinement steps = 0 Total number of restart steps = 0 Total time in user OP*x operation = 0.001100 Total time in user B*x operation = 0.000000 Total time in Arnoldi update routine = 0.002971 Total time in naup2 routine = 0.002768 Total time in basic Arnoldi iteration loop = 0.001710 Total time in reorthogonalization phase = 0.000257 Total time in (re)start vector generation = 0.000004 Total time in Hessenberg eig. subproblem = 0.000693 Total time in getting the shifts = 0.000045 Total time in applying the shifts = 0.000210 Total time in convergence testing = 0.000023 Total time in computing final Ritz vectors = 0.000000 ...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................0.2 0.2 0.2 ......0.2 ..0.2 0.2 0.2 0.2 0.2 ...................E........EE....F......................EE..................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. .........warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ............................building extensions here: /Users/Josh/.python25_compiled/m0 ................................................................................................ 
====================================================================== ERROR: test1 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", line 12, in test1 image = get_slice(filename) NameError: global name 'get_slice' is not defined ====================================================================== ERROR: test2 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", line 27, in test2 sourceImage, labeledMask, ROIList = segment_regions(filename) NameError: global name 'segment_regions' is not defined ====================================================================== ERROR: test3 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", line 37, in test3 regionMask, numberRegions = grow_regions(filename) NameError: global name 'grow_regions' is not defined ====================================================================== ERROR: Failure: ImportError (cannot import name _bspline) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_bspline.py", line 9, in import scipy.stats.models.bspline as B File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/bspline.py", line 23, in from scipy.stats.models import _bspline ImportError: cannot import name _bspline ====================================================================== ERROR: test_factor3 (test_formula.TestFormula) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 231, in test_factor3 m = fac.main_effect(reference=1) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/formula.py", line 273, in main_effect reference = names.index(reference) ValueError: list.index(x): x not in list ====================================================================== ERROR: test_factor4 (test_formula.TestFormula) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 
239, in test_factor4 m = fac.main_effect(reference=2) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/formula.py", line 273, in main_effect reference = names.index(reference) ValueError: list.index(x): x not in list ====================================================================== ERROR: test_huber (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_scale.py", line 35, in test_huber m = scale.huber(X) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 930, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== ERROR: test_huberaxes (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_scale.py", line 40, in test_huberaxes m = scale.huber(X, axis=0) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. 
- subset, axis=self.axis) * Huber.c**2) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 930, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== FAIL: test_namespace (test_formula.TestFormula) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 119, in test_namespace self.assertEqual(xx.namespace, Y.namespace) AssertionError: {} != {'Y': array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98]), 'X': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49])} ====================================================================== SKIP: test_bytescale (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: Need to import PIL for this test ====================================================================== SKIP: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: Need to import PIL for this test ====================================================================== SKIP: Test generator for parametric tests ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: Need to import PIL for this test ====================================================================== SKIP: Getting factors of complex matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of real matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of complex matrix 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of real matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Prefactorize (with UMFPACK) matrix for solving with multiple rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Prefactorize matrix for solving with multiple rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve: single precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision, sparse rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve: single precision ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ---------------------------------------------------------------------- Ran 2022 tests in 19.838s FAILED (failures=1, errors=8) Should I just try a clean install of SciPy at this point? Josh On Fri, Mar 21, 2008 at 1:38 PM, Robert Kern wrote: > > On Fri, Mar 21, 2008 at 3:30 PM, Joshua Lippai wrote: > > Hello all, > > > > Last night I tried installing scipy (on OS X 10.5.2) and I was > > surprised to find that nose is now required to run scipy.test. I > > installed it successfully through easy_install, but now it still won't > > run the test, instead producing the following error: > > > > >>> scipy.test(1,10) > > Traceback (most recent call last): > > File "", line 1, in > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", > > line 115, in test > > argv = self._test_argv(label, verbose, extra_argv) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", > > line 98, in _test_argv > > raise TypeError, 'Selection label should be a string' > > TypeError: Selection label should be a string > > > > Has anyone experienced this as well or now what's going on and how to > > fix it so I can run scipy.test? Much thanks in advance. > > In [6]: scipy.test? > > > > > > > > Type: instancemethod > Base Class: > String Form: 2d930>> > Namespace: Interactive > File: /Users/rkern/svn/scipy/scipy/testing/nosetester.py > Definition: scipy.test(self, label='fast', verbose=1, > extra_argv=None, doctests=False) > Docstring: > Run tests for module using nose > > Parameters > ---------- > label : {'fast', 'full', '', attribute identifer} > Identifies test to run. This can be a string to pass to > the nosetests executable with the'-A' option, or one of > several special values. > Special values are: > 'fast' - the default - which corresponds to > nosetests -A option of > 'not slow'. > 'full' - fast (as above) and slow test as in > no -A option to nosetests - same as '' > None or '' - run all tests > attribute_identifier - string passed directly to > nosetests as '-A' > verbose : integer > verbosity value for test outputs, 1-10 > extra_argv : list > List with any extra args to pass to nosetests > doctests : boolean > If True, run doctests in module, default False > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From didier.rano at gmail.com Fri Mar 21 21:11:29 2008 From: didier.rano at gmail.com (didier rano) Date: Fri, 21 Mar 2008 21:11:29 -0400 Subject: [SciPy-user] Granulate array In-Reply-To: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com> References: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com> Message-ID: Thank you for your answer. And yes, I forgot a number in my first row ! In fact, I think that it wasn't a good example. I need more a sampling algorithm or similar. My second row will be a time series, but not always regular i.e. with some holes (In my example, '4' is missing). 
From didier.rano at gmail.com Fri Mar 21 21:11:29 2008
From: didier.rano at gmail.com (didier rano)
Date: Fri, 21 Mar 2008 21:11:29 -0400
Subject: [SciPy-user] Granulate array
In-Reply-To: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com>
References: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com>
Message-ID:

Thank you for your answer. And yes, I forgot a number in my first row! In fact, I think it wasn't a good example. What I really need is more of a sampling algorithm, or something similar. My second row will be a time series, but not always a regular one, i.e. with some holes (in my example, '4' is missing).

Didier Rano

2008/3/21, Robert Kern:
> On Fri, Mar 21, 2008 at 12:10 PM, didier rano wrote:
> > Hi,
> >
> > I am new with numpy and scipy, and I have a problem finding a good way to
> > compute my array. I have an array with 2 rows. Example:
> > [[5, 8, 9, 10], [1, 2, 3, 5, 6]]
>
> Note that this cannot be a valid numpy array since the second row has
> more elements than the first. I will move forward with the assumption
> that you did not intend the "5" element.
>
> > And I need to manage the number of records without touching the "behaviour" of
> > my array. Example: reduce it to 2 columns =>
> > [[5, 10],[1,6]]
> >
> > What sort of algorithm do I need to do that?
>
> In [3]: a = array([[5, 8, 9, 10], [1, 2, 3, 6]])
>
> In [4]: a
> Out[4]:
> array([[ 5,  8,  9, 10],
>        [ 1,  2,  3,  6]])
>
> In [5]: a[:,[0,-1]]
> Out[5]:
> array([[ 5, 10],
>        [ 1,  6]])
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco

--
Didier Rano
didier.rano at gmail.com
http://www.jaxtr.com/didierrano

From pgmdevlist at gmail.com Fri Mar 21 21:30:13 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 21 Mar 2008 21:30:13 -0400
Subject: [SciPy-user] Granulate array
In-Reply-To: References: <3d375d730803211434s7a88d4denea03df8ec9857195@mail.gmail.com>
Message-ID: <200803212130.13787.pgmdevlist@gmail.com>

On Friday 21 March 2008 21:11:29 didier rano wrote:
> Thank you for your answer. And yes, I forgot a number in my first row!
>
> In fact, I think it wasn't a good example. What I really need is more of a
> sampling algorithm, or something similar. My second row will be a time series,
> but not always a regular one, i.e. with some holes (in my example, '4' is missing).

Shameless plug: Try scikits.timeseries. http://scipy.org/scipy/scikits/wiki/TimeSeries It's a package designed to handle series indexed in time with missing data and/or missing dates, which looks like what you need. Send me a message if you need more help. Cheers
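As an aside, Didier's thinning problem can also be attacked with plain numpy before reaching for a dedicated package: put the irregular series onto a regular grid by interpolation, then slice. A rough sketch (the sample numbers are invented to match his description of a missing '4'):

>>> import numpy as np
>>> t = np.array([1., 2., 3., 5., 6.])     # irregular time stamps, '4' missing
>>> v = np.array([5., 8., 9., 10., 12.])   # matching values (made up)
>>> tr = np.arange(1., 7.)                 # regular time grid
>>> vr = np.interp(tr, t, v)               # linear interpolation fills the hole
>>> tr[::2], vr[::2]                       # then keep every 2nd point

scikits.timeseries, as Pierre says, handles the missing-date bookkeeping for you; this is just the bare-hands version.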
On Fri, Mar 21, 2008 at 1:30 PM, Joshua Lippai wrote:
> Hello all,
>
> Last night I tried installing scipy (on OS X 10.5.2) and I was
> surprised to find that nose is now required to run scipy.test. I
> installed it successfully through easy_install, but now it still won't
> run the test, instead producing the following error:
>
> >>> scipy.test(1,10)
> Traceback (most recent call last):
>   File "", line 1, in
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", line 115, in test
>     argv = self._test_argv(label, verbose, extra_argv)
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/nosetester.py", line 98, in _test_argv
>     raise TypeError, 'Selection label should be a string'
> TypeError: Selection label should be a string
>
> Has anyone experienced this as well or know what's going on and how to
> fix it so I can run scipy.test? Much thanks in advance.
>
> Josh

From david at ar.media.kyoto-u.ac.jp Sat Mar 22 04:41:45 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 22 Mar 2008 17:41:45 +0900
Subject: [SciPy-user] Running scipy.test on recent SVN build
In-Reply-To: <9911419a0803220145w47fdc956xad21f81e2694e958@mail.gmail.com>
References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> <9911419a0803220145w47fdc956xad21f81e2694e958@mail.gmail.com>
Message-ID: <47E4C649.1050207@ar.media.kyoto-u.ac.jp>

Joshua Lippai wrote:
> Alright, I rebuilt scipy and managed to whittle it down to 7 errors
> instead of 8, but that failure is still there, and there weren't any
> alarms during the building other than:

AFAIK, the trunk of scipy has had some errors for some time; I am not involved in the changes, but from my own understanding, it looks like some big changes were made in some parts of scipy (nd_image, sparse). I don't have errors outside those modules. I would not worry too much about those errors before the code in those modules stabilizes (of course, if you use those modules, that's another story).

cheers,

David
From nwagner at iam.uni-stuttgart.de Sat Mar 22 05:39:07 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 22 Mar 2008 10:39:07 +0100
Subject: [SciPy-user] Running scipy.test on recent SVN build
In-Reply-To: <47E4C649.1050207@ar.media.kyoto-u.ac.jp>
References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> <9911419a0803220145w47fdc956xad21f81e2694e958@mail.gmail.com> <47E4C649.1050207@ar.media.kyoto-u.ac.jp>
Message-ID:

On Sat, 22 Mar 2008 17:41:45 +0900, David Cournapeau wrote:
> Joshua Lippai wrote:
> > Alright, I rebuilt scipy and managed to whittle it down to 7 errors
> > instead of 8, but that failure is still there, and there weren't any
> > alarms during the building other than:
>
> AFAIK, the trunk of scipy has had some errors for some time; I am not
> involved in the changes, but from my own understanding, it looks like
> some big changes were made in some parts of scipy (nd_image, sparse). I
> don't have errors outside those modules. I would not worry too much
> about those errors before the code in those modules stabilizes (of
> course, if you use those modules, that's another story).
>
> cheers,
>
> David

I have filed tickets for those errors:

http://projects.scipy.org/scipy/scipy/ticket/586
http://projects.scipy.org/scipy/scipy/ticket/587

Nils

From discerptor at gmail.com Sat Mar 22 08:07:47 2008
From: discerptor at gmail.com (Joshua Lippai)
Date: Sat, 22 Mar 2008 05:07:47 -0700
Subject: [SciPy-user] Running scipy.test on recent SVN build
In-Reply-To: References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> <9911419a0803220145w47fdc956xad21f81e2694e958@mail.gmail.com> <47E4C649.1050207@ar.media.kyoto-u.ac.jp>
Message-ID: <9911419a0803220507l59c89822pabd557673443a7b1@mail.gmail.com>

That's a relief to know, and it lets me know of something that I could even try working on! That covers the errors, but what of the failure of test_namespace? It looks like the 'Y' array is coming out as exactly twice the 'X' array. Is this a known problem as well?

Josh

On Sat, Mar 22, 2008 at 2:39 AM, Nils Wagner wrote:
> On Sat, 22 Mar 2008 17:41:45 +0900, David Cournapeau wrote:
> > Joshua Lippai wrote:
> > > Alright, I rebuilt scipy and managed to whittle it down to 7 errors
> > > instead of 8, but that failure is still there, and there weren't any
> > > alarms during the building other than:
> >
> > AFAIK, the trunk of scipy has had some errors for some time; I am not
> > involved in the changes, but from my own understanding, it looks like
> > some big changes were made in some parts of scipy (nd_image, sparse). I
> > don't have errors outside those modules. I would not worry too much
> > about those errors before the code in those modules stabilizes (of
> > course, if you use those modules, that's another story).
> >
> > cheers,
> >
> > David
>
> I have filed tickets for those errors:
>
> http://projects.scipy.org/scipy/scipy/ticket/586
> http://projects.scipy.org/scipy/scipy/ticket/587
>
> Nils
From wnbell at gmail.com Sat Mar 22 08:40:40 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sat, 22 Mar 2008 07:40:40 -0500
Subject: [SciPy-user] Running scipy.test on recent SVN build
In-Reply-To: <9911419a0803220507l59c89822pabd557673443a7b1@mail.gmail.com>
References: <9911419a0803211330u70ab7b87p4ef8724890c48462@mail.gmail.com> <9911419a0803220145w47fdc956xad21f81e2694e958@mail.gmail.com> <47E4C649.1050207@ar.media.kyoto-u.ac.jp> <9911419a0803220507l59c89822pabd557673443a7b1@mail.gmail.com>
Message-ID:

On Sat, Mar 22, 2008 at 7:07 AM, Joshua Lippai wrote:
> That's a relief to know, and it lets me know of something that I could
> even try working on! That covers the errors, but what of the failure
> of test_namespace? It looks like the 'Y' array is coming out as exactly
> twice the 'X' array. Is this a known problem as well?

It's "known" in the sense that it happens to everyone, but I don't think anyone is working on it. Currently scipy.test() gives me FAILED (failures=1, errors=5). I believe these errors have existed for several weeks now, so it would be great if someone would take a look at them. Alternatively, you can try shaming the maintainers of those components into fixing the errors :) AFAIK scipy.sparse is in the clear.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From daniele.dm at gmail.com Sat Mar 22 10:50:49 2008
From: daniele.dm at gmail.com (Daniele Di Mauro)
Date: Sat, 22 Mar 2008 15:50:49 +0100
Subject: [SciPy-user] Sobel filter, magnitude and gradient direction
In-Reply-To: <3d375d730803211337w283bb8dck4dec68636d46ca00@mail.gmail.com>
References: <3d375d730803211337w283bb8dck4dec68636d46ca00@mail.gmail.com>
Message-ID:

Hi Robert, thanks for the answer.

> The default axis is -1, so the Sobel operator is being applied along
> the color axis, not one of the X or Y axes. Use the parameter axis=1
> and axis=0 to get those, respectively.

As you suggest, the right way is:

# where data is an Image object
image_x = scipy.ndimage.filters.sobel(data, 1)
image_xy = scipy.ndimage.filters.sobel(image_x, 0)

Is that right? I tried it and the result is even worse, as you can see:

original: http://img135.imageshack.us/img135/3672/copyaa2.jpg
sobel: http://img135.imageshack.us/img135/2484/imagexyqc4.jpg

Do I have to do some adjustment? Thanks in advance,

Daniele Di Mauro

> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco

From dmitrey.kroshko at scipy.org Sat Mar 22 12:09:46 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 22 Mar 2008 18:09:46 +0200
Subject: [SciPy-user] ANN: constrained scipy.fsolve equivalent
Message-ID: <47E52F4A.2080007@scipy.org>

Hi all, I'm glad to announce that the scikits.openopt NLSP solver nssolve can now handle all types of constraints; user-supplied derivatives to f, c, h can be handled as well. See the NLSP constrained example. Also, a small bugfix for nssolve iprint parameter handling has been made. However, graphic output for constrained NLSP does not work properly yet; it is intended to be fixed in the future. nssolve is intended for non-smooth and noisy funcs (and uses the NSP ralg solver); smooth functions are, of course, handled much better by scipy.optimize fsolve. Also, convergence for non-convex functions is not guaranteed. Regards, D.

From allen.fowler at yahoo.com Sun Mar 23 20:29:20 2008
From: allen.fowler at yahoo.com (Allen Fowler)
Date: Sun, 23 Mar 2008 17:29:20 -0700 (PDT)
Subject: [SciPy-user] iPython: How to reload & run?
Message-ID: <395933.73654.qm@web45616.mail.sp1.yahoo.com>

Hello,

New iPython user here.... Typing %run only reloads the actual file... any of my modules are not reloaded even if I've changed them. Is there a short-cut command to %run a file and forcefully reload any modules that it uses?

Thank you
From fperez.net at gmail.com Sun Mar 23 20:36:57 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 23 Mar 2008 17:36:57 -0700
Subject: [SciPy-user] iPython: How to reload & run
In-Reply-To: <395933.73654.qm@web45616.mail.sp1.yahoo.com>
References: <395933.73654.qm@web45616.mail.sp1.yahoo.com>
Message-ID:

On Sun, Mar 23, 2008 at 5:29 PM, Allen Fowler wrote:
> Hello,
>
> New iPython user here....
>
> Typing %run only reloads the actual file... any of my modules are not reloaded even if I've changed them.
>
> Is there a short-cut command to %run a file and forcefully reload any modules that it uses?

No, unfortunately no. I typically just put a few judiciously chosen

reload(foo)
reload(bar)

atop the script itself. You should keep in mind that generically, reloading is tricky business: it's order-dependent, and what to do with already in-memory objects when their supporting modules change isn't clear. Python still has a way to go before we catch the fabled lisp machines from the 70's in terms of interactive modifications of live code, I'm afraid.

Cheers, f

From peridot.faceted at gmail.com Sun Mar 23 20:53:11 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 23 Mar 2008 20:53:11 -0400
Subject: [SciPy-user] iPython: How to reload & run
In-Reply-To: <395933.73654.qm@web45616.mail.sp1.yahoo.com>
References: <395933.73654.qm@web45616.mail.sp1.yahoo.com>
Message-ID:

On 23/03/2008, Allen Fowler wrote:
> Hello,
>
> New iPython user here....
>
> Typing %run only reloads the actual file... any of my modules are not reloaded even if I've changed them.
>
> Is there a short-cut command to %run a file and forcefully reload any modules that it uses?

Not exactly. But ipython provides dreload() to make reloading recursive. There are also issues with existing objects keeping the same code they were created with; see http://mail.python.org/pipermail/python-list/2004-June/264778.html for a partial solution.

Anne

From fperez.net at gmail.com Sun Mar 23 20:58:37 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 23 Mar 2008 17:58:37 -0700
Subject: [SciPy-user] iPython: How to reload & run
In-Reply-To: References: <395933.73654.qm@web45616.mail.sp1.yahoo.com>
Message-ID:

On Sun, Mar 23, 2008 at 5:53 PM, Anne Archibald wrote:
> Not exactly. But ipython provides dreload() to make reloading
> recursive. There are also issues with existing objects keeping the
> same code they were created with; see
> http://mail.python.org/pipermail/python-list/2004-June/264778.html
> for a partial solution.

I didn't want to mention dreload :) Partly because it's not really 100% robust, but lots of people seem very happy with it, so definitely keep it in mind.

Cheers, f
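To make the reload-at-the-top pattern concrete: a script meant for repeated %run might defensively reload the modules it is developing against, and re-bind any imported names only afterwards. A minimal sketch (the module and function names are placeholders):

import mymodule                 # hypothetical module being edited
reload(mymodule)                # Python 2 builtin; re-executes the module body
from mymodule import analyse    # re-import names *after* the reload

The order matters when the reloaded modules import each other, which is exactly the trickiness Fernando warns about.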
From lilyphysik at gmail.com Mon Mar 24 05:21:05 2008
From: lilyphysik at gmail.com (chengbo duan)
Date: Mon, 24 Mar 2008 17:21:05 +0800
Subject: [SciPy-user] Fwd: Problems with scipy.linalg
In-Reply-To: <554c886f0803240202q456b0acdib5901d866b487f1d@mail.gmail.com>
References: <554c886f0803240202q456b0acdib5901d866b487f1d@mail.gmail.com>
Message-ID: <554c886f0803240221h45784478lccff65370553f084@mail.gmail.com>

Hi, guys, I was frustrated with the function "eig". The matrix looks like this:

[ 0            , 1 , exp(1.0j*k) ]
[ 1            , 0 , 1           ]
[ exp(-1.0j*k) , 1 , 0           ]

Here 1.0j is the complex unit in python. The variable 'k' is an argument. The eigenvalues can be determined exactly by Maxima. They are:

sqrt(3.0) * sin(k/3.0) - cos(k/3.0),
-sqrt(3.0) * sin(k/3.0) - cos(k/3.0),
2.0 * cos(k/3.0)

The above functions are plotted in fig1. I also used scipy.linalg.eig to calculate the eigenvalues, and plotted them in fig2. You will find that there is a discontinuity around pi/2 in fig2. In order to confirm which is right, I used the subroutine "zheev" in lapack, whose result confirms that fig1 is correct. So I think maybe there is a bug when I use "eig". I paste the code at the end of this letter; any suggestion is welcome. Best wishes, Abo. By the way, the version of scipy is 0.6.0.

#!c:/Python25/python.exe
from numpy import *
import scipy
from scipy.linalg import *
from math import *
import pylab

k = arange(-scipy.pi, scipy.pi, 0.001, dtype=float64)
n = len(k)
d1 = zeros(n, dtype=float64)
d2 = zeros(n, dtype=float64)
d3 = zeros(n, dtype=float64)
a = zeros((3,3), dtype=complex128)
t1 = 1.0
t2 = 1.0
t3 = 1.0
j = 0
for i in k:
    a[0][1] = t1
    a[1][0] = t1
    a[0][2] = scipy.exp((1.0j)*i)*t3
    a[2][0] = scipy.exp((-1.0j)*i)*t3
    a[1][2] = t2
    a[2][1] = t2
    d, v = eig(a)
    d1[j] = scipy.real(d[0])
    d2[j] = scipy.real(d[1])
    d3[j] = scipy.real(d[2])
    print i, d1[j]
    j += 1

pylab.plot(k, d1, "r--", k, d2, 'bs', k, d3, 'g^')
pylab.savefig('c:/t2.png')

--
Department of Physics
Wuhan University
P.R.China

[Attachment scrubbed by the archive: fig2.png, image/png, 38568 bytes]

From mcoletti at gmail.com Mon Mar 24 07:22:16 2008
From: mcoletti at gmail.com (Mark Coletti)
Date: Mon, 24 Mar 2008 07:22:16 -0400
Subject: [SciPy-user] ImportError: No module named __config__
Message-ID: <120095080803240422l1cd8ae05kebc41af867a7ca99@mail.gmail.com>

I've just installed scipy 0.6.0 but am unable to use it. When attempting to import scipy, I get the following error:

Python 2.5.2 (r252:60911, Feb 27 2008, 23:06:45)
[GCC 4.1.0 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
Traceback (most recent call last):
  File "", line 1, in
  File "scipy/__init__.py", line 54, in
    from __config__ import show as show_config
ImportError: No module named __config__

I see where a number of people have posted similar errors, but no solutions were forthcoming. (At least from what I can tell.) Any idea on what the problem is and how to resolve it?

Cheers, Mark

--
I'm taking reality in small doses to build immunity.

From matthew.brett at gmail.com Mon Mar 24 07:45:32 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 24 Mar 2008 12:45:32 +0100
Subject: [SciPy-user] ImportError: No module named __config__
In-Reply-To: <120095080803240422l1cd8ae05kebc41af867a7ca99@mail.gmail.com>
References: <120095080803240422l1cd8ae05kebc41af867a7ca99@mail.gmail.com>
Message-ID: <1e2af89e0803240445jaa86aabw41e4dfb6a8a4bedf@mail.gmail.com>

Hi, Did you see this message just sent by David Cournapeau in response to a similar question?

On Mon, Mar 24, 2008 at 12:22 PM, Mark Coletti wrote:
> I've just installed scipy 0.6.0 but am unable to use it.
> When attempting to
> import scipy, I get the following error:
>
> Python 2.5.2 (r252:60911, Feb 27 2008, 23:06:45)
> [GCC 4.1.0 (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import scipy
> Traceback (most recent call last):
>   File "scipy/__init__.py", line 54, in
>     from __config__ import show as show_config
> ImportError: No module named __config__

Mark Coletti wrote:
> > I'm having the same problem with my SuSE 10.1 box. I've had to install python
> > 2.5 in /usr/local/; 2.4 is still in /usr/bin. This *shouldn't* be a problem,
> > but I worry there might be some sort of conflict between the two. I hope
> > there's not some compatibility problem between the latest python and scipy.
>
> This is because you are trying to import numpy while being in the numpy source tree, which does not work. The solution is to simply relaunch your python interpreter from another directory. The last svn has a better error message, because the above error is indeed quite obscure.
...
David

From mforbes at physics.ubc.ca Mon Mar 24 11:16:48 2008
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Mon, 24 Mar 2008 08:16:48 -0700
Subject: [SciPy-user] Fwd: Problems with scipy.linalg
In-Reply-To: <554c886f0803240221h45784478lccff65370553f084@mail.gmail.com>
References: <554c886f0803240202q456b0acdib5901d866b487f1d@mail.gmail.com> <554c886f0803240221h45784478lccff65370553f084@mail.gmail.com>
Message-ID: <7925D851-A1B0-440E-97A2-F4D5F4B8E1BB@physics.ubc.ca>

Hi Abo,

On 24 Mar 2008, at 2:21 AM, chengbo duan wrote:
> Hi, guys, I was frustrated with the function "eig". The matrix looks like this:
>
> [ 0            , 1 , exp(1.0j*k) ]
> [ 1            , 0 , 1           ]
> [ exp(-1.0j*k) , 1 , 0           ]
...
> You will find that there is a discontinuity around pi/2.

A couple of comments:

1) If your matrix is Hermitian, you should use eigh rather than eig. It is much more stable (and returns sorted eigenvalues, so you won't have the same discontinuities).

2) eig() makes no claim about the order of the returned eigenvalues, so there is no "bug" here. If you need the eigenvalues in a particular order, then just sort the results (d.sort()) before use. Then the only discontinuities will be where there are degenerate eigenvalues.

3) While it would be very nice to have numerical routines return eigenvalues in such an order that they are "continuous", there is no way for the algorithm to know how to do this, since it only has the matrix to work with. (You could easily construct simple examples where the eigenvalues "switch" continuously, so that at one point eig would have to return the eigenvalues in one order, while at another it would have to switch the order even though the matrix is the same.)

Michael.
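Following Michael's point (1): for this particular matrix, which is Hermitian, the Hermitian driver also removes the ordering artifact, since it returns real eigenvalues sorted in ascending order. A sketch for a single value of k (numpy's eigh here, per Michael's suggestion):

>>> import numpy as np
>>> k = 1.0
>>> a = np.array([[0, 1, np.exp(1j*k)],
...               [1, 0, 1],
...               [np.exp(-1j*k), 1, 0]])
>>> w, v = np.linalg.eigh(a)
>>> w          # real and ascending, so the curves join up smoothly
array([...])

Looping this over Abo's k grid and plotting the three components of w should reproduce fig1 without the jump near pi/2.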
From alan.mcintyre at gmail.com Mon Mar 24 15:50:37 2008
From: alan.mcintyre at gmail.com (Alan McIntyre)
Date: Mon, 24 Mar 2008 15:50:37 -0400
Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea
In-Reply-To: References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com>
Message-ID: <1d36917a0803241250v48b97548p61a12494378bd84a@mail.gmail.com>

On Fri, Mar 7, 2008 at 4:57 AM, Fernando Perez wrote:
> On Thu, Mar 6, 2008 at 3:18 PM, Alan McIntyre wrote:
> > > A specific project along these lines, that would be very beneficial
> > > for numpy could be:
> > >
> > > - Creating new matrix types in cython that match the cvxopt matrices.
> > > The creation of new numpy array types with efficient code would be
> > > very useful.
> > >
> > > - Rewriting the existing ndarray subclasses that ship with numpy, such
> > > as record arrays, in cython. In doing this, benchmarks of the
> > > relative performance of the new code should be obtained.
> >
> > What level of experience do you think would be necessary for the
> > student for this? I've got a fair amount of Python & C experience, and
> > I've used numpy and Pyrex (but not Cython) in the past. I wouldn't
> > mind putting in some time to become familiar with the particulars
> > before submitting a project proposal.
>
> I don't want to put words in others' mouths, since I wouldn't be the
> one mentoring such a project. I suspect that you'd need to be
> reasonably familiar with some C programming, because at this point in
> the game this project might require debugging generated C code and
> perhaps diving into Cython itself. In the long term we'd like Cython
> to be so python-like and friendly that those who are *not* C experts
> can use it effectively for Numpy programming, but that isn't currently
> the case.
>
> As far as Pyrex/cython, if you know pyrex, you'll be OK with cython.
> It's only better than pyrex, and is, as far as I know, mostly if not
> fully compatible with pyrex.

I don't have a problem doing any of that, so I'll try to re-familiarize myself with numpy and Cython and see how hard it looks. However, if somebody is already planning on submitting a proposal for one or both of these projects, please let me know so we're not competing with each other needlessly. If anybody is interested in mentoring these, and can offer any advice (or suggestions for other things that need doing), I'd be glad to hear from you.

Thanks, Alan
From mcoletti at gmail.com Mon Mar 24 17:35:03 2008
From: mcoletti at gmail.com (Mark Coletti)
Date: Mon, 24 Mar 2008 17:35:03 -0400
Subject: [SciPy-user] ImportError: No module named __config__
In-Reply-To: <1e2af89e0803240445jaa86aabw41e4dfb6a8a4bedf@mail.gmail.com>
References: <120095080803240422l1cd8ae05kebc41af867a7ca99@mail.gmail.com> <1e2af89e0803240445jaa86aabw41e4dfb6a8a4bedf@mail.gmail.com>
Message-ID: <120095080803241435o6803907kfe30d9cca03b2e98@mail.gmail.com>

On Mon, Mar 24, 2008 at 7:45 AM, Matthew Brett wrote:
> Hi,
>
> Did you see this message just sent by David Cournapeau in response to
> a similar question?

I saw, but dismissed the question since I thought this was a scipy, not a numpy, problem.

> On Mon, Mar 24, 2008 at 12:22 PM, Mark Coletti wrote:
> > I've just installed scipy 0.6.0 but am unable to use it. When attempting to
> > import scipy, I get the following error:
> [...]
> >   File "scipy/__init__.py", line 54, in
> >     from __config__ import show as show_config
> > ImportError: No module named __config__
>
> Mark Coletti wrote:
> > I'm having the same problem with my SuSE 10.1 box. [...]
>
> This is because you are trying to import numpy while being in the numpy source tree, which does not work. The solution is to simply relaunch your python interpreter from another directory. The last svn has a better error message, because the above error is indeed quite obscure.

And I also dismissed his message because I wasn't in the numpy source tree. ;) However, I changed the /usr/bin/python symbolic link to refer to the newer 2.5.4 python in /usr/local/bin (since SuSE won't bother to update the 10.1 native python to 2.5, so upgrading via YaST is futile). That *seemed* to help. So it may very well have been some issue with two versions of python conflicting with one another. (I had set the execution paths to prefer the newer python, but forgot that some scripts have the old python hard-coded in their bang line. Urgh.) Of course I've run into another problem, which I'll relate in a separate e-mail. Oh, the joys of legacy code. :)

Cheers! Mark

--
I'm taking reality in small doses to build immunity.

From mcoletti at gmail.com Mon Mar 24 17:56:05 2008
From: mcoletti at gmail.com (Mark Coletti)
Date: Mon, 24 Mar 2008 17:56:05 -0400
Subject: [SciPy-user] How to read legacy scipy.Pickler arrays?
Message-ID: <120095080803241456s2a1447d3m30112194a76fe217@mail.gmail.com>

I have some old python code that used scipy.Pickler to save arrays to disk. After installing the new scipy, I get this error:

    data_unpickler = scipy.Unpickler( data_file )
AttributeError: 'module' object has no attribute 'Unpickler'

So I went looking for it:

mcoletti at sapient:/usr/local/lib/python2.5/site-packages/scipy> find . -name "*.py" | xargs grep -H Pickler
./special/tests/Test.py: p=Numeric.Pickler(f)

Hmm. Ok, so mebbe I can directly use the pickle module instead?

    time_min_data = data_unpickler.load()
  File "/usr/local/lib/python2.5/pickle.py", line 858, in load
    dispatch[key](self)
KeyError: 'A'

Well, ok. Maybe not. Does numpy have a Pickler?

mcoletti at sapient:/usr/local/lib/python2.5/site-packages/numpy> find . -name "*.py" | xargs grep -H Pickler
./oldnumeric/misc.py: 'Pickler', 'dot', 'outerproduct', 'innerproduct', 'insert']
./oldnumeric/misc.py:class Pickler(pickle.Pickler):
./numarray/session.py: p = pickle.Pickler(file, protocol=2)

Well, that has potential. Let's try that:

    data_unpickler = numpy.oldnumeric.Unpickler( data_file )
  File "/usr/local/lib/python2.5/site-packages/numpy/oldnumeric/misc.py", line 34, in __init__
    raise NotImplemented
TypeError: exceptions must be classes, instances, or strings (deprecated), not NotImplementedType

Ok, I give up. What do I need to do to read legacy scipy pickled data?

Cheers, Mark

--
I'm taking reality in small doses to build immunity.

From robert.kern at gmail.com Mon Mar 24 18:16:18 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 24 Mar 2008 17:16:18 -0500
Subject: [SciPy-user] How to read legacy scipy.Pickler arrays?
In-Reply-To: <120095080803241456s2a1447d3m30112194a76fe217@mail.gmail.com>
References: <120095080803241456s2a1447d3m30112194a76fe217@mail.gmail.com>
Message-ID: <3d375d730803241516n8499ab2y5b34927b69a13c72@mail.gmail.com>

On Mon, Mar 24, 2008 at 4:56 PM, Mark Coletti wrote:
> Ok, I give up. What do I need to do to read legacy scipy pickled data?

Here is the relevant code from old Numeric. It should be straightforward to update for numpy.

def DumpArray(m, fp):
    if m.typecode() == 'O':
        raise TypeError, "Numeric Pickler can't pickle arrays of Objects"
    s = m.shape
    if LittleEndian:
        endian = "L"
    else:
        endian = "B"
    fp.write("A%s%s%d " % (m.typecode(), endian, m.itemsize()))
    for d in s:
        fp.write("%d " % d)
    fp.write('\n')
    fp.write(m.tostring())

def LoadArray(fp):
    ln = string.split(fp.readline())
    if ln[0][0] == 'A':
        ln[0] = ln[0][1:]  # Nasty hack showing my ignorance of pickle
    typecode = ln[0][0]
    endian = ln[0][1]
    shape = map(lambda x: string.atoi(x), ln[1:])
    itemsize = string.atoi(ln[0][2:])
    sz = reduce(multiply, shape) * itemsize
    data = fp.read(sz)
    m = fromstring(data, typecode)
    m = reshape(m, shape)
    if (LittleEndian and endian == 'B') or (not LittleEndian and endian == 'L'):
        return m.byteswapped()
    else:
        return m

import pickle, copy

class Unpickler(pickle.Unpickler):
    def load_array(self):
        self.stack.append(LoadArray(self))

    dispatch = copy.copy(pickle.Unpickler.dispatch)
    dispatch['A'] = load_array

class Pickler(pickle.Pickler):
    def save_array(self, object):
        DumpArray(object, self)

    dispatch = copy.copy(pickle.Pickler.dispatch)
    dispatch[ArrayType] = save_array

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
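Updating Robert's LoadArray for numpy mostly amounts to swapping the old Numeric helpers for their numpy equivalents. An untested sketch, assuming the old one-character typecodes map directly onto numpy dtype characters (which holds for the common ones like 'f', 'd', 'F', 'D', 'l'):

import sys
import numpy as np

def load_array_numpy(fp):
    # numpy rendering of LoadArray above
    ln = fp.readline().split()
    code = ln[0][1:]                       # drop the leading 'A'
    typecode, endian = code[0], code[1]
    itemsize = int(code[2:])
    shape = [int(x) for x in ln[1:]]
    sz = int(np.prod(shape)) * itemsize
    m = np.fromstring(fp.read(sz), dtype=typecode).reshape(shape)
    little = (sys.byteorder == 'little')
    if (little and endian == 'B') or (not little and endian == 'L'):
        m = m.byteswap()
    return m

The Unpickler subclass itself can stay as it is, with load_array calling this function instead of the Numeric one.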
From cmutel at gmail.com Mon Mar 24 18:33:37 2008
From: cmutel at gmail.com (Christopher Mutel)
Date: Mon, 24 Mar 2008 23:33:37 +0100
Subject: [SciPy-user] Monte Carlo & sparse.linsolve for medium size arrays - any strategies?
Message-ID: <5e5978e10803241533s45c441b5h6d4774546f06700d@mail.gmail.com>

Hello-

I am working with medium size sparse matrices (4000 by 4000, coverage of ~1%), and need to do uncertainty analysis using a Monte Carlo approach. For each element in the matrix, I have an average value, the distribution (mostly lognormal, some normal and uniform), and the relevant uncertainty parameters. I am solving the classic Ax = B matrix problem, where the large matrix is A; B is a constant array. I anticipate needing to do on the order of 1000 iterations.

SciPy is a fantastic library, but I am not creative enough to come up with an efficient way to store the relevant uncertainty information so that I am not iteratively generating the large matrix for each Monte Carlo run. Is there a clever way to construct or subclass a sparse matrix as a generator, so that each time it is referenced a new matrix is generated? Or is there a better approach (i.e. I am sure there is a better approach that I haven't thought of)? I know that this is a rather general question, but I have been thinking about this off and on for quite a while, and have had great luck in the past getting help on this list.

Thanks in advance!

-Chris

--
############################
Chris Mutel
Ökologisches Systemdesign - Ecological Systems Design
Institut f. Umweltingenieurwissenschaften - Institute for Environmental Engineering
ETH Zürich - HIF C 42 - Schafmattstr. 6
8093 Zürich
Telefon: +41 44 633 71 45 - Fax: +41 44 633 10 61
############################
From stefan at sun.ac.za Mon Mar 24 20:35:14 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 25 Mar 2008 01:35:14 +0100
Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea
In-Reply-To: <1d36917a0803241250v48b97548p61a12494378bd84a@mail.gmail.com>
References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> <1d36917a0803241250v48b97548p61a12494378bd84a@mail.gmail.com>
Message-ID: <9457e7c80803241735m6bd1357ana42cc3ea82b73d9d@mail.gmail.com>

Hi Alan

On Mon, Mar 24, 2008 at 8:50 PM, Alan McIntyre wrote:
> On Fri, Mar 7, 2008 at 4:57 AM, Fernando Perez wrote:
> > As far as Pyrex/cython, if you know pyrex, you'll be OK with cython.
> > It's only better than pyrex, and is, as far as I know, mostly if not
> > fully compatible with pyrex.
>
> I don't have a problem doing any of that, so I'll try to
> re-familiarize myself with numpy and Cython and see how hard it looks.
> However, if somebody is already planning on submitting a proposal for
> one or both of these projects, please let me know so we're not
> competing with each other needlessly. If anybody is interested in
> mentoring these, and can offer any advice (or suggestions for other
> things that need doing), I'd be glad to hear from you.

Cython integration with NumPy would be tremendously useful. I mentioned earlier the Sage GSOC project:

http://wiki.cython.org/DagSverreSeljebotn/soc/details

It would be good to talk to them (they hang out on #sage-devel on freenode.net) to hear what the current status is. As a start, we need

- Multi-dimensional indexing

  print x[0,1]
  print x[:3,1]
  x[0,1] = 1
  x[:,1] = [1,2,3]

- Fancy indexing

  print x[[0,3,5]]
  x[[0,3,5]] = [12,15,18]

- Broadcasting (maybe nothing needs to be done to get this working, I haven't investigated)

  x = array([1,2,3])
  x = x + 3

The SAGE project addresses a fairly high-level abstraction, and, while that sounds like a good plan in the long run, a more numpy-specific solution would benefit us too, and *may* be easier to implement.

Regards, Stéfan
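For list members who have not tried it: Cython already accepts numpy arrays as ordinary Python objects, but element access in a loop still goes through the interpreter's C-API, which is the bottleneck the proposed integration would remove. A toy sketch of the status quo (the file and function names are invented):

# scale.pyx -- compiles with today's Cython, but the indexing is not C-speed
def scale(x, double factor):
    cdef int i, n
    n = len(x)
    for i in range(n):
        x[i] = x[i] * factor   # generic object indexing for now

Stéfan's wishlist above (multi-dimensional and fancy indexing, broadcasting) is about making exactly this kind of code both natural to write and fast.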
From aisaac at american.edu Tue Mar 25 00:51:10 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 25 Mar 2008 00:51:10 -0400
Subject: [SciPy-user] to openopt users
In-Reply-To: <47E3DC4B.9070601@scipy.org>
References: <47E3DC4B.9070601@scipy.org>
Message-ID:

Reminder to OpenOpt users: As a way to support his work, please write Dmitrey at dmitrey.kroshko at scipy.org to give him permission to list you as an OpenOpt user at http://scipy.org/scipy/scikits/wiki/OpenOptUsers Individual names are fine; list your institution only if you wish.

Thank you, Alan Isaac

From haase at msg.ucsf.edu Tue Mar 25 05:23:49 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Tue, 25 Mar 2008 10:23:49 +0100
Subject: [SciPy-user] Sobel filter, magnitude and gradient direction
In-Reply-To: References: <3d375d730803211337w283bb8dck4dec68636d46ca00@mail.gmail.com>
Message-ID:

On Sat, Mar 22, 2008 at 3:50 PM, Daniele Di Mauro wrote:
> Hi Robert, thanks for the answer.
>
> > The default axis is -1, so the Sobel operator is being applied along
> > the color axis, not one of the X or Y axes. Use the parameter axis=1
> > and axis=0 to get those, respectively.
>
> As you suggest, the right way is:
>
> # where data is an Image object
> image_x = scipy.ndimage.filters.sobel(data, 1)
> image_xy = scipy.ndimage.filters.sobel(image_x, 0)
>
> Is that right? I tried it and the result is even worse, as you can see:
> original: http://img135.imageshack.us/img135/3672/copyaa2.jpg
> sobel: http://img135.imageshack.us/img135/2484/imagexyqc4.jpg
> Do I have to do some adjustment? Thanks in advance
>
> Daniele Di Mauro

As a sanity check: Can you do this kind of filtering for grey scale images?

Regards, -Sebastian Haase
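In the same spirit as Sebastian's sanity check, the usual recipe for the quantities in the subject line is to convert to greyscale first and then combine the two directional derivatives; a sketch (img is assumed to be an RGB array loaded elsewhere):

>>> import numpy as np
>>> from scipy import ndimage
>>> grey = img.mean(axis=2)            # collapse the colour axis first
>>> sx = ndimage.sobel(grey, axis=1)   # derivative along x
>>> sy = ndimage.sobel(grey, axis=0)   # derivative along y
>>> magnitude = np.hypot(sx, sy)       # gradient magnitude
>>> direction = np.arctan2(sy, sx)     # gradient direction, in radians

Note that applying sobel twice in sequence, as in the snippet quoted above, computes a mixed second derivative rather than an edge magnitude, which would explain the odd-looking result.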
From Joris.DeRidder at ster.kuleuven.be Tue Mar 25 11:11:03 2008
From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder)
Date: Tue, 25 Mar 2008 16:11:03 +0100
Subject: [SciPy-user] What's the equivalent of list.pop in numpy?
In-Reply-To: References:
Message-ID: <72483F54-302B-45B4-AA23-5BCE6AAF2F08@ster.kuleuven.be>

On 25 Mar 2008, at 14:15, Armando Serrano Lombillo wrote:
> Hello:
>
> >>> a = [[1,2,3],[4,5,6],[7,8,9]]
> >>> print a
> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
> >>> b = a.pop(1)
> >>> print a
> [[1, 2, 3], [7, 8, 9]]
> >>> print b
> [4, 5, 6]
> >>> from numpy import array
> >>> a = array([[1,2,3],[4,5,6],[7,8,9]])
> >>> print a
> [[1 2 3]
>  [4 5 6]
>  [7 8 9]]
> >>> b = a.pop(1)
> Traceback (most recent call last):
>   File "", line 1, in
>     b = a.pop(1)
> AttributeError: 'numpy.ndarray' object has no attribute 'pop'
>
> So, what's the "right" way of doing this in numpy?

The closest thing I can come up with is

In [1]: from numpy import *
In [2]: a = array([[1,2,3],[4,5,6],[7,8,9]])
In [3]: b = a[1]
In [4]: a = delete(a, [1], axis=0)

but this introduces quite some overhead, so, as Alan said, ask yourself whether this is really what you need.

Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From marcotuckner at public-files.de Tue Mar 25 13:46:23 2008
From: marcotuckner at public-files.de (Marco Tuckner)
Date: Tue, 25 Mar 2008 17:46:23 +0000 (UTC)
Subject: [SciPy-user] aggregation of long-term time series
Message-ID:

Hello,

I'd like to aggregate several years of a long-term data set into a year-long data set (365 days). The sample data listed below contains a random value for each month of the years 2005, 2006 and 2007. Although the frequency shouldn't really matter in this case, I use a monthly frequency to simplify the example. I would like a kind of design year, generated by calculating the average over all years for each data point:

month 1; 341.33 = (247+485+292)/3
month 2; 575.67 = (889+655+183)/3
month 3; X
...

Instead of the average one could equally calculate sums, square roots, standard deviation, etc. What would be the most efficient way to generate such a mean-values data set? Which approach would you use? Is such a function already implemented in the timeseries package and I simply overlooked it?

Kind regards and thanks in advance for your help,

Marco Tuckner

###
## sample data:
###
year;month;value
2005;1;247
2005;2;889
2005;3;914
2005;4;292
2005;5;183
2005;6;251
2005;7;953
2005;8;156
2005;9;991
2005;10;557
2005;11;581
2005;12;354
2006;1;485
2006;2;655
2006;3;862
2006;4;399
2006;5;598
2006;6;744
2006;7;445
2006;8;374
2006;9;168
2006;10;995
2006;11;943
2006;12;326
2007;1;292
2007;2;183
2007;3;251
2007;4;953
2007;5;156
2007;6;991
2007;7;557
2007;8;581
2007;9;354
2007;10;399
2007;11;598
2007;12;744
###
From pgmdevlist at gmail.com Tue Mar 25 13:59:34 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Mar 2008 13:59:34 -0400
Subject: [SciPy-user] aggregation of long-term time series
In-Reply-To: References:
Message-ID: <200803251359.35123.pgmdevlist@gmail.com>

On Tuesday 25 March 2008 13:46:23 Marco Tuckner wrote:
> Hello,
> I'd like to aggregate several years of a long-term data set into a
> year-long data set (365 days).

Relatively straightforward with scikits.timeseries:

>>> import numpy as np
>>> import scikits.timeseries as ts
>>> test = ts.time_series(np.arange(36), start_date=ts.now('M'))
timeseries([ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35],
   dates = [Mar-2008 ... Feb-2011],
   freq  = M)
>>> atest = test.convert('A')
timeseries(
 [[-- -- 0 1 2 3 4 5 6 7 8 9]
  [10 11 12 13 14 15 16 17 18 19 20 21]
  [22 23 24 25 26 27 28 29 30 31 32 33]
  [34 35 -- -- -- -- -- -- -- -- -- --]],
   dates = [2008 ... 2011],
   freq  = A-DEC)

Now you can get your result per month using the method you want and axis=0. Monthly mean: atest.mean(0). Monthly variance: atest.varu(0) (that's the unbiased variance), and so forth.

HIH, P.

From allen.fowler at yahoo.com Tue Mar 25 17:36:03 2008
From: allen.fowler at yahoo.com (Allen Fowler)
Date: Tue, 25 Mar 2008 14:36:03 -0700 (PDT)
Subject: [SciPy-user] iPython: How to reload & run
Message-ID: <556395.74662.qm@web45607.mail.sp1.yahoo.com>

> > New iPython user here....
> >
> > Typing %run only reloads the actual file... any of my modules are not
> > reloaded even if I've changed them.
> >
> > Is there a short-cut command to %run a file and forcefully reload any
> > modules that it uses?
>
> No, unfortunately no. I typically just put a few judiciously chosen
>
> reload(foo)
> reload(bar)
>
> atop the script itself. You should keep in mind that generically,
> reloading is tricky business: it's order-dependent, and what to do
> with already in-memory objects when their supporting modules change
> isn't clear.
>
> Python still has a way to go before we catch the fabled lisp machines
> from the 70's in terms of interactive modifications of live code, I'm
> afraid.

Hmm.. This being the case.. is there a way to simply do a quick "reset everything, and then run this file"?

-- Thanks

From marcotuckner at public-files.de Tue Mar 25 19:13:15 2008
From: marcotuckner at public-files.de (Marco Tuckner)
Date: Tue, 25 Mar 2008 23:13:15 +0000 (UTC)
Subject: [SciPy-user] aggregation of long-term time series
References: <200803251359.35123.pgmdevlist@gmail.com>
Message-ID:

Hello,

> Relatively straightforward with scikits.timeseries:

Thanks to the nice and clear answer I have completed the code:

##### START CODE #####
##### aggregate_timeseries.py #####
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import timeseries as ts

# generate a test timeseries object with some (random) data in monthly frequency
test = ts.time_series(np.arange(36), start_date=ts.today('M'))

# create a test report
print "the test time series: 3 years of monthly data:"
ts.Report(test)()

# convert to annual frequency
atest = test.convert('A')
print 'the test timeseries converted to annual frequency:'
ts.Report(atest)()

# store result in a new array
atest_ave = atest.mean(0)

# a test print
print "the result of the aggregation operation: average"
print atest_ave

# create a new timeseries object with the result of the averaging calculation
res = ts.time_series(atest_ave, start_date=ts.Date(freq='M', year=2008, month=1))

# create a report with the date formatted as the abbreviated months only
print "the aggregated yearly data set of 3 years of monthly data:"
ts.Report(res, datefmt='%b')()
##### END CODE #####

Very nice. Thanks to the developers of Scipy/Timeseries ;-)
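For completeness, the same monthly averaging can be done in plain numpy whenever the record covers whole years, by reshaping to (years, months) and reducing along the year axis; a sketch with stand-in numbers:

>>> import numpy as np
>>> values = np.arange(36.)                # 3 years x 12 months, in time order
>>> values.reshape(3, 12).mean(axis=0)     # the 12 'design year' averages
array([ 12.,  13.,  14.,  15.,  16.,  17.,  18.,  19.,  20.,  21.,  22.,  23.])

sum(), std() and friends drop in the same way, covering the other aggregations Marco mentions; what the timeseries package adds on top is the calendar bookkeeping (start dates, missing months).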
From zakaria at aims.ac.za Tue Mar 25 20:09:55 2008
From: zakaria at aims.ac.za (Zakaria Ali)
Date: Wed, 26 Mar 2008 02:09:55 +0200 (SAST)
Subject: [SciPy-user] Monte Carlo & sparse.linsolve for medium size arrays - any strategies?
In-Reply-To: <5e5978e10803241533s45c441b5h6d4774546f06700d@mail.gmail.com>
References: <5e5978e10803241533s45c441b5h6d4774546f06700d@mail.gmail.com>
Message-ID: <38589.192.168.42.180.1206490195.squirrel@webmail.aims.ac.za>

Hello Christopher Mutel,

I am working on the optimal stopping time of a Bermudan construction, and I need a lot of simulation, like Monte Carlo simulation and so on. If you have, or know of, some code about Monte Carlo simulation, could you kindly send it to me? I would like to get ideas from it for simulating my own problem. Thanks, I am looking forward to hearing from you.

> Hello-
>
> I am working with medium size sparse matrices (4000 by 4000, coverage
> of ~1%), and need to do uncertainty analysis using a Monte Carlo
> approach. For each element in the matrix, I have an average value, the
> distribution (mostly lognormal, some normal and uniform), and the
> relevant uncertainty parameters. I am solving the classic Ax = B
> matrix problem, where the large matrix is A; B is a constant array. I
> anticipate needing to do on the order of 1000 iterations.
>
> SciPy is a fantastic library, but I am not creative enough to come up
> with an efficient way to store the relevant uncertainty information so
> that I am not iteratively generating the large matrix for each Monte
> Carlo run. Is there a clever way to construct or subclass a sparse
> matrix as a generator, so that each time it is referenced a new matrix
> is generated? Or is there a better approach (i.e. I am sure there is a
> better approach that I haven't thought of)? I know that this is a
> rather general question, but I have been thinking about this off and
> on for quite a while, and have had great luck in the past getting help
> on this list.
>
> Thanks in advance!
>
> -Chris
>
> --
> ############################
> Chris Mutel
> Ökologisches Systemdesign - Ecological Systems Design
> Institut f. Umweltingenieurwissenschaften - Institute for Environmental
> Engineering
> ETH Zürich - HIF C 42 - Schafmattstr. 6
> 8093 Zürich
> Telefon: +41 44 633 71 45 - Fax: +41 44 633 10 61
> ############################

Ali Zakaria,
AIMS (African Institute for Mathematical Sciences),
6 Melrose Road, Muizenberg, 7945 Cape Town | South Africa
Cell phone: +27 783614549
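Since Zakaria asks for a starting point: for the sampling layer, numpy's random module is usually all that is needed. A toy example (all parameter values invented) estimating the expected value of a European-style payoff max(S-K, 0) under a lognormal price model; Bermudan optimal stopping needs more machinery on top (e.g. least-squares Monte Carlo), but it is built on draws like these:

>>> import numpy as np
>>> S0, K, sigma, T = 100.0, 100.0, 0.2, 1.0
>>> z = np.random.randn(100000)                             # standard normal draws
>>> S = S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
>>> payoff = np.maximum(S - K, 0.0)
>>> payoff.mean()                                           # Monte Carlo estimate
>>> payoff.std() / np.sqrt(payoff.size)                     # its standard error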
From s.collis at bom.gov.au Tue Mar 25 23:51:52 2008
From: s.collis at bom.gov.au (Scott Collis)
Date: Wed, 26 Mar 2008 14:51:52 +1100
Subject: [SciPy-user] interpolate.interp2d why X==Y? [SEC=UNCLASSIFIED]
Message-ID: <1206503512.10050.24.camel@mendenhall.ho.bom.gov.au>

Hello Scipy-users,

I've often scanned the mailing lists for tidbits and answers. But now I have a question of my own. I am regridding data from various sources onto a common grid. These grids are non-square, and interpolate.interp2d requires X==Y to create an interpolation object. I have modified the code and removed the condition:

        self.x, self.y, self.z = map(ravel, map(array, [x, y, z]))

        if not map(rank, [self.x, self.y, self.z]) == [1,1,1]:
            raise ValueError("One of the input arrays is not 1-d.")
-->     #if len(self.x) != len(self.y):
-->     #    raise ValueError("x and y must have equal lengths")
        if len(self.z) == len(self.x) * len(self.y):
            self.x, self.y = meshgrid(x,y)
            self.x, self.y = map(ravel, [self.x, self.y])
        if len(self.z) != len(self.x):
            raise ValueError("Invalid length for input z")

and it -seems- to be working. Is the check there only for the case when X and Y are declared pointwise (i.e. len(x) = len(y) = len(z)) rather than gridwise (len(x)*len(y) = len(z))?

Also... grids over 100x100 seem to kill it... if anyone has any other suggestions for re-gridding, please let me know. (I may just end up writing my own quick nearest-neighbour algorithm instead of using splines like interp2d.)

Cheers, Scott

--
Dr Scott Collis
Meteorologist
National Meteorological & Oceanographic Centre
Australian Bureau of Meteorology
Mb: 0412177550

From amcmorl at gmail.com Wed Mar 26 00:40:00 2008
From: amcmorl at gmail.com (Angus McMorland)
Date: Wed, 26 Mar 2008 00:40:00 -0400
Subject: [SciPy-user] import scipy.io error in debian
Message-ID:

Hi all,

I'm trying to load a .mat file (my new lab is MatLab-crazy) using debian-testing (numpy 1.0.4 and scipy 0.6.0). I'm getting an import error for scipy.io: "No module named numpyio". Is this a known issue with these versions, or a bug? If the former, do I need to update to svn, or is this fixed in the debian-unstable version?

Thanks, Angus.

--
AJC McMorland, PhD candidate
Physiology, University of Auckland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh
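On Angus's question: once scipy.io imports cleanly, reading the file is a single call; a quick sketch (the file and variable names are invented):

>>> from scipy.io import loadmat
>>> contents = loadmat('recording.mat')   # a dict mapping variable names to arrays
>>> trace = contents['trace']             # 'trace' is a made-up variable name

The ImportError itself points to a broken package install rather than to usage, as the follow-up ('SOLVED') message below confirms.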
From cimrman3 at ntc.zcu.cz Wed Mar 26 11:58:08 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 26 Mar 2008 16:58:08 +0100 Subject: [SciPy-user] ANN: SfePy 00.41.03 Message-ID: <47EA7290.7030009@ntc.zcu.cz>

Greetings,

I'm pleased to announce the release 00.41.03 of SfePy (formerly SFE). SfePy is finite element analysis software in Python, based primarily on NumPy and SciPy.

Mailing lists, issue tracking, mercurial repository: http://code.google.com/p/sfepy/
Home page: http://sfepy.kme.zcu.cz

Major improvements:
- works on 64 bits
- support for various mesh formats
- Schroedinger equation solver - see http://code.google.com/p/sfepy/wiki/Examples
- new solvers:
  - generic time-dependent problem solver
  - pysparse, symeig, scipy-based eigenproblem solvers
  - scipy-based iterative solvers
- many new terms

For information on this release, see http://sfepy.googlecode.com/svn/web/releases/004103_RELEASE_NOTES.txt

Best regards,
r.

From massimo.sandal at unibo.it Wed Mar 26 12:55:18 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 26 Mar 2008 17:55:18 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned Message-ID: <47EA7FF6.4030409@unibo.it>

Hi,

I am writing an algorithm that determines the best-fitting polynomial to a data set[1]. The algorithm tries to fit polynomials between 1st and 12th degree, and chooses the polynomial with the least mean square error.

It works really well now; however, when it happens to prefer high-degree polynomials (say, 8th-9th degree and more), I often see this warning:

/usr/lib/python2.5/site-packages/numpy/lib/polynomial.py:305: RankWarning: Polyfit may be poorly conditioned
  warnings.warn(msg, RankWarning)

I have poor knowledge of polynomial fitting, but if I remember correctly, it means that the fit is very sensitive to tiny shifts in the data values (unstable)? Should I worry anyway or, as long as it gives me sensible results, go along?

Thanks,
Massimo

[1] this is needed to "flatten" the data set from systematic, irregular low-frequency interferences.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387
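Massimo's selection scheme is not shown in the thread; a hypothetical reconstruction of "try degrees 1-12 and keep the least mean square error" might look like the following (x and y stand for his reference trace, and everything here is invented for illustration):

    import numpy as N

    def best_polynomial(x, y, max_degree=12):
        best = None
        for deg in range(1, max_degree + 1):
            coeffs = N.polyfit(x, y, deg)
            mse = N.mean((N.polyval(coeffs, x) - y) ** 2)
            if best is None or mse < best[0]:
                best = (mse, coeffs)
        return best[1]

As the replies below point out, the raw mean square error never penalizes the extra coefficients, so a loop like this is biased toward high degrees.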
From xavier.gnata at gmail.com Wed Mar 26 13:17:51 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Wed, 26 Mar 2008 18:17:51 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47EA7FF6.4030409@unibo.it> References: <47EA7FF6.4030409@unibo.it> Message-ID: <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com>

On Wed, Mar 26, 2008 at 5:55 PM, massimo sandal wrote:
> Hi,
>
> I am writing an algorithm that determines the best-fitting polynomial to
> a data set[1]. The algorithm tries to fit polynomials between 1st and
> 12th degree, and chooses the polynomial with the least mean square error.
>
> It works really well now; however, when it happens to prefer high-degree
> polynomials (say, 8th-9th degree and more), I often see this warning:
>
> /usr/lib/python2.5/site-packages/numpy/lib/polynomial.py:305:
> RankWarning: Polyfit may be poorly conditioned
>   warnings.warn(msg, RankWarning)
>
> I have poor knowledge of polynomial fitting, but if I remember
> correctly, it means that the fit is very sensitive to tiny shifts in the
> data values (unstable)? Should I worry anyway or, as long as it gives me
> sensible results, go along?
>
> [...]

Hi,

Well, think about it one second: a fit with a 12th-order polynomial will always have a smaller least mean square error than a fit using a <12th-order polynomial. You must figure this out before using polyfit. Moreover, the "smaller least mean square error" is not the driver in our case, because high-order polynomials (very commonly) show large oscillations in between the data points. That is not what you want.

I'm happy to see your question, because the worth behavior is to use a tool blindly.

Xavier

From amcmorl at gmail.com Wed Mar 26 13:57:35 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 26 Mar 2008 13:57:35 -0400 Subject: [SciPy-user] import scipy.io error in debian - SOLVED Message-ID:

On 26/03/2008, Angus McMorland wrote:
> Hi all,
>
> I'm trying to load a .mat file (my new lab is MatLab-crazy) using
> debian-testing (numpy 1.0.4 and scipy 0.6.0). I'm getting an import
> error for scipy.io: "No module named numpyio". Is this a known issue
> with these versions or a bug? If the former, do I need to update to
> svn, or is this fixed in the debian-unstable version?

Hi all,

Fixed. For some reason, my install of the scipy package was broken, and "apt-get install --reinstall python-scipy" sorted the problem. Sorry for the noise.

A.
--
AJC McMorland, PhD candidate
Physiology, University of Auckland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh

From peridot.faceted at gmail.com Wed Mar 26 20:06:56 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 26 Mar 2008 20:06:56 -0400 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47EA7FF6.4030409@unibo.it> References: <47EA7FF6.4030409@unibo.it> Message-ID:

On 26/03/2008, massimo sandal wrote:
> I am writing an algorithm that determines the best-fitting polynomial to
> a data set[1].
> The algorithm tries to fit polynomials between 1st and
> 12th degree, and chooses the polynomial with the least mean square error.
>
> It works really well now; however, when it happens to prefer high-degree
> polynomials (say, 8th-9th degree and more), I often see this warning:
>
> /usr/lib/python2.5/site-packages/numpy/lib/polynomial.py:305:
> RankWarning: Polyfit may be poorly conditioned
>   warnings.warn(msg, RankWarning)
>
> I have poor knowledge of polynomial fitting, but if I remember
> correctly, it means that the fit is very sensitive to tiny shifts in the
> data values (unstable)? Should I worry anyway or, as long as it gives me
> sensible results, go along?

You should worry. That message indicates that the least-squares fitting is trying to solve a linear problem that is "ill-conditioned", that is, for which the solution depends very sensitively on the input. In combination with numerical errors, this can be a serious problem (though the fact that you're smoothing should reduce the issues). There are two things going on here.

First, high-order polynomials like to thrash wildly. This is basically because polynomials are very "stiff": they are very smooth, and every coefficient depends strongly on all the data. So if there's a kink at one place in your data (say), all your coefficients get bent out of shape trying to accommodate it. This can give polynomials extremely erratic behaviour even before the fit becomes ill-conditioned; it also means that the polynomials are going to become ill-conditioned quite soon.

The second issue is that representing polynomials as a0 + a1*x + a2*x**2 + ... + an*x**n is not necessarily the best approach. If, say, your x value lies between -1 and +1, and your data lies between -1 and +1, then getting a polynomial to fit will require very large coefficients which lead to delicate cancellation. Numerical errors begin to be a problem surprisingly soon. A partial solution is to write your polynomials differently: instead of representing them as a linear combination of 1, x, x**2, ..., x**n, it sometimes helps to represent them as a linear combination of a set of orthogonal polynomials (Chebyshev polynomials, for example). This won't help if, like scipy, the implementation of orthogonal polynomials breaks down, but in fact the Chebyshev polynomials have a nice clean representation as cos(n*arccos(x)) (IIRC; look it up before using!). I have found that using this sort of representation can drastically increase the degree of polynomial I can fit before numerics become a problem. You can't use polyfit any more, unfortunately (though at some point someone might write a version of the polynomial class that used a different basis), but it's not too hard to write the least-squares fitting using scipy.linalg.lstsq (plus or minus a few vowels).

Since you are computing polynomial coefficients (which is unstable) only to go back and compute the polynomial values at the same points you fit, it should be possible to set things up so the numerical instabilities are not really a problem - if some linear combination of polynomials gives nearly zero values, then you might as well just take it to be zero. If you want to look into this, you might try implementing your own polynomial fitting using scipy.linalg.svd, which will give you control over how you deal with this situation.

> [1] this is needed to "flatten" the data set from systematic, irregular
> low-frequency interferences.

You might also think about whether you need to be subtracting polynomials. scipy's smoothing splines are extremely useful, and much more stable, numerically, than fitting polynomials. (This is primarily because they're less affected by distant data points.) You might also consider fitting a few sinusoids of prescribed frequencies (this may have easier-to-understand spectral properties).

I should mention that low-frequency "red" noise can pose serious problems for spectral analysis - if you're planning to do a Fourier transform later it's possible that strong low-frequency components may "leak" into your high frequency bins, contaminating them. Deeter and Boynton 1982 talk about how to analyze data with this kind of contamination, in the context of pulsar timing.

In all I'd say your easiest solution is to use scipy's smoothing splines with a lot of smoothing. If you have to use polynomials, try using (scaled) Chebyshev polynomials as your basis. Only if these both fail is it worth going into some of my more exotic solutions.

Good luck,
Anne
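To make the Chebyshev idea concrete, here is a rough sketch of the least-squares fit Anne describes, using the cos(n*arccos(x)) representation together with scipy.linalg.lstsq; the degree and the test data are invented:

    import numpy as N
    import scipy.linalg

    def chebyshev_fit(x, y, deg):
        # map x onto [-1, 1], where cos(n*arccos(t)) is well-behaved
        t = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
        # design matrix: column n holds the Chebyshev polynomial T_n(t)
        A = N.cos(N.outer(N.arange(deg + 1), N.arccos(t))).T
        coeffs, resid, rank, sv = scipy.linalg.lstsq(A, y)
        return N.dot(A, coeffs)  # fitted values at the input points

    x = N.linspace(0.0, 1000.0, 1001)
    y = 2 * x + 1 + N.random.randn(len(x))
    baseline = chebyshev_fit(x, y, 12)

Every column of the design matrix stays between -1 and 1 whatever the degree, which is what keeps this far better conditioned than the 1, x, x**2, ... basis.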
From massimo.sandal at unibo.it Thu Mar 27 07:57:38 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 27 Mar 2008 12:57:38 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> Message-ID: <47EB8BB2.4020101@unibo.it>

Xavier Gnata ha scritto:
> Well, think about it one second: a fit with a 12th-order polynomial will
> always have a smaller least mean square error than a fit using a
> <12th-order polynomial. You must figure this out before using polyfit.

Well, this is reasonable, but it is not what I see happening. Often the algorithm takes the highest-degree polynomial, but not always.

> Moreover, the "smaller least mean square error" is not the driver in our
> case, because high-order polynomials (very commonly) show large
> oscillations in between the data points. That is not what you want.

Yes, I know. However, it seems to behave extremely well in this case - I tested it on dozens of data sets and it flattens them all practically perfectly for my needs. That's why I asked - should I worry *even if it works*?

> I'm happy to see your question, because the worth behavior is to use a
> tool blindly.

You probably meant "worst", however yes, I agree :)

m.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
[...]

From massimo.sandal at unibo.it Thu Mar 27 08:26:47 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 27 Mar 2008 13:26:47 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: References: <47EA7FF6.4030409@unibo.it> Message-ID: <47EB9287.4000107@unibo.it>

Anne Archibald ha scritto:
> You should worry. That message indicates that the least-squares
> fitting is trying to solve a linear problem that is "ill-conditioned",
> that is, for which the solution depends very sensitively on the input.
> In combination with numerical errors, this can be a serious problem
> (though the fact that you're smoothing should reduce the issues).
> There are two things going on here.
>
> First, high-order polynomials like to thrash wildly.
> This is basically
> because polynomials are very "stiff": they are very smooth, and every
> coefficient depends strongly on all the data. So if there's a kink at
> one place in your data (say), all your coefficients get bent out of
> shape trying to accommodate it. This can give polynomials extremely
> erratic behaviour even before the fit becomes ill-conditioned; it also
> means that the polynomials are going to become ill-conditioned quite
> soon.

I know and understand that polynomial behaviour.

Just to let you understand better what I'm trying to do: the data I'm fitting are basically a straight line, with noise, with a gentle oscillation superimposed. What I want is to remove the oscillation (a well known instrumental artefact). The fit of this line of practically pure noise acts as a "reference" for the true data set, which contains the oscillation AND the signal superimposed. So what I do is fit the reference data, and subtract the fit from the actual data.

(If you know about AFM force spectroscopy, the "reference" is the approaching curve, the data is the retracting curve, and the oscillation is optical interference.)

Since the noise amplitude is very small compared to the large oscillation, spikes in the noise practically seem not to affect the fit. Of course, rare occurrences may happen where the fit goes wild, but I still haven't seen that - and a rare occurrence of that, say 1%, is OK for now.

> The second issue is that representing polynomials as a0 + a1*x +
> a2*x**2 + ... + an*x**n is not necessarily the best approach. [...]
> it's not too hard to write the least-squares
> fitting using scipy.linalg.lstsq (plus or minus a few vowels).

Thanks for the tip! I'll look into it if I meet problems with raw polynomials.

> Since you are computing polynomial coefficients (which is unstable)
> only to go back and compute the polynomial values at the same points
> you fit, it should be possible to set things up so the numerical
> instabilities are not really a problem [...] you might try
> implementing your own polynomial fitting using scipy.linalg.svd,
> which will give you control over how you deal with this situation.

Thanks for this tip, again.

> > [1] this is needed to "flatten" the data set from systematic, irregular
> > low-frequency interferences.
> You might also think about whether you need to be subtracting
> polynomials. scipy's smoothing splines are extremely useful, and much
> more stable, numerically, than fitting polynomials. (This is primarily
> because they're less affected by distant data points.) You might also
> consider fitting a few sinusoids of prescribed frequencies (this may
> have easier-to-understand spectral properties).

Fitting sinusoids is something I've thought about, but having the easy polynomial fit at hand, I tried that and it seems to work, apart from the annoying warning.

As for splines, I've not tried them. I asked people before about splines and they told me that they are even more unstable than polynomials, however! So I didn't even try. Again, however, thank you for telling me that.

> I should mention that low-frequency "red" noise can pose serious
> problems for spectral analysis [...] Deeter and
> Boynton 1982 talk about how to analyze data with this kind of
> contamination, in the context of pulsar timing.

Spectral analysis should not be needed; however, this is surely an interesting read.

> In all I'd say your easiest solution is to use scipy's smoothing
> splines with a lot of smoothing. If you have to use polynomials, try
> using (scaled) Chebyshev polynomials as your basis. Only if these both
> fail is it worth going into some of my more exotic solutions.

Thanks a lot! I'll look more deeply into these if I actually run into trouble with polynomials, and I will surely think about your advice. I learned a lot!

m.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
[...]

From bsouthey at gmail.com Thu Mar 27 09:38:03 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 27 Mar 2008 08:38:03 -0500 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47EB9287.4000107@unibo.it> References: <47EA7FF6.4030409@unibo.it> <47EB9287.4000107@unibo.it> Message-ID: <47EBA33B.8090601@gmail.com>

Hi,

To further Anne's reply, you really should at least rescale your explanatory variable, say x, by dividing by some 'large' constant [instead of a1 + a2*x + a3*x**2, first divide x by, say, 100, so you effectively fit a1 + a2*(x/100) + a3*(x/100)**2]. This 'trick' also works well for nonlinear models that would not or tend not to converge otherwise, and it really does not change anything but the scale.

The next step is using orthogonal polynomials, as these have rather nice properties and usually work well, as Anne indicated.

Splines and loess transformations are extremely flexible, and you really should try these, especially given the polynomial degree you are trying to fit. Also, these are semi-parametric, so the model is not as important as with polynomials, and they provide a 'local fit'. The latter means that you can have a different fit depending on where you are in the data. However, after saying that, much depends on the data and the information content.

I would also suggest somewhat strongly that you use a better model comparison procedure than some mean square error, such as the BIC (Bayesian information criterion) or the Akaike information criterion.
That way you can address model complexity - models with higher polynomial terms will always fit the data better than models with lower polynomial terms.

Regards
Bruce

massimo sandal wrote:
> Anne Archibald ha scritto:
> [...]
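Bruce's bracketed rescaling is one line in practice; a minimal sketch (the data, the scale, and the degree are all made up):

    import numpy as N

    x = N.linspace(0.0, 5000.0, 2000)                  # invented abscissa
    y = 2e-4 * x + 0.3 * N.sin(x / 800.0) + 0.05 * N.random.randn(len(x))
    scale = 100.0                                      # Bruce's 'large' constant
    coeffs = N.polyfit(x / scale, y, 9)                # fit in the rescaled variable
    baseline = N.polyval(coeffs, x / scale)            # evaluate with the same rescaling

The fitted curve is unchanged; only the coefficients move into a saner range.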
From C.J.Lee at tnw.utwente.nl Thu Mar 27 09:52:38 2008 From: C.J.Lee at tnw.utwente.nl (C.J.Lee at tnw.utwente.nl) Date: Thu, 27 Mar 2008 14:52:38 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned References: <47EA7FF6.4030409@unibo.it> <47EB9287.4000107@unibo.it> Message-ID: <9E7F7554631C0047AB598527A4E9569B6F816C@tnx4.dynamic.tnw.utwente.nl>

Although I don't do much fitting, I do a lot with optics, and if the background signal you are trying to remove arises from interference fringes, then I would fit with a low number of periodic functions. In fact, my procedure would be as follows:
- do an FFT on the data and create a power spectrum
- pick out 1-3 spectral components that describe the fringe pattern
- either do a global fit with the frequency components held fixed, or fit piece-wise, allowing the frequency, phase, and amplitude to vary within some restricted range for that piece.

Cheers
Chris
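A rough sketch of the first two steps of Chris's recipe (the trace, its length, and the 50-bin search window are invented for illustration):

    import numpy as N

    x = N.arange(4096.0)
    y = 1e-3 * x + 0.3 * N.sin(2 * N.pi * x / 512.0) + 0.05 * N.random.randn(len(x))
    power = N.abs(N.fft.rfft(y - y.mean())) ** 2  # power spectrum of the trace
    peaks = power[1:50].argsort()[-3:] + 1        # 3 strongest bins, skipping DC
    freqs = peaks / float(len(x))                 # cycles/sample, seeds for the sinusoid fit

The frequencies found here would then seed the global or piece-wise sinusoid fit Chris describes.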
-----Original Message-----
From: scipy-user-bounces at scipy.org on behalf of massimo sandal
Sent: Thu 3/27/2008 1:26 PM
To: SciPy Users List
Subject: Re: [SciPy-user] Polyfit may be poorly conditioned

[...]

From massimo.sandal at unibo.it Thu Mar 27 10:30:18 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 27 Mar 2008 15:30:18 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <9E7F7554631C0047AB598527A4E9569B6F816C@tnx4.dynamic.tnw.utwente.nl> References: <47EA7FF6.4030409@unibo.it> <47EB9287.4000107@unibo.it> <9E7F7554631C0047AB598527A4E9569B6F816C@tnx4.dynamic.tnw.utwente.nl> Message-ID: <47EBAF7A.20702@unibo.it>

C.J.Lee at tnw.utwente.nl ha scritto:
> Although I don't do much fitting, I do a lot with optics, and if the
> background signal you are trying to remove arises from interference
> fringes, then I would fit with a low number of periodic functions.
> In fact, my procedure would be as follows:
> - do an FFT on the data and create a power spectrum
> - pick out 1-3 spectral components that describe the fringe pattern
> - either do a global fit with the frequency components held fixed, or
>   fit piece-wise, allowing the frequency, phase, and amplitude to vary
>   within some restricted range for that piece.

I tried something like that (my first try, in fact), but it does not work as smoothly as it might seem; the periodicity is not always perfect, either. The polynomial is faster, much easier to implement, and works better.

By the way, I tried it on > 400 data curves, from different experiments. It always seems to work as expected, despite the (right) concerns. I'll look into splines, but for now it is quite usable this way.

m.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"
[...]

From chiaracaronna at hotmail.com Thu Mar 27 13:32:16 2008 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Thu, 27 Mar 2008 17:32:16 +0000 Subject: [SciPy-user] matrix multiplication efficiency In-Reply-To: References: Message-ID:

Hi, I hope this is the right mailing list. I need to perform matrix multiplication with big matrices, and I realized that numpy is much slower compared to MATLAB; does anyone know why? For example, for this calculation:

a = matrix 2000x10000
b = matrix 10000x2000
c = a x b (c matrix 2000x2000)

it takes roughly 100-300 sec, depending on the PC, while in MATLAB it is almost instantaneous. What can I do? This is my Python installation:

Python 2.5.1 (r251:54863, Mar 7 2008, 04:10:12)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.0.3'
>>>

From alan.mcintyre at gmail.com Thu Mar 27 13:34:00 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Thu, 27 Mar 2008 13:34:00 -0400 Subject: [SciPy-user] Numpy/Cython Google Summer of Code project idea In-Reply-To: <9457e7c80803241735m6bd1357ana42cc3ea82b73d9d@mail.gmail.com> References: <1d36917a0803061518j6a50b535j2ea0f34a52e9cc34@mail.gmail.com> <1d36917a0803241250v48b97548p61a12494378bd84a@mail.gmail.com> <9457e7c80803241735m6bd1357ana42cc3ea82b73d9d@mail.gmail.com> Message-ID: <1d36917a0803271034u4f89d017ga5520c76f5335634@mail.gmail.com>

On Mon, Mar 24, 2008 at 8:35 PM, Stéfan van der Walt wrote:
> Cython integration with NumPy would be tremendously useful. I
> mentioned earlier the Sage GSOC project:
>
> http://wiki.cython.org/DagSverreSeljebotn/soc/details
>
> It would be good to talk to them (they hang out on #sage-devel on
> freenode.net) to hear what the current status is. As a start, we need

Thanks, Stéfan; I missed your mention of that earlier; I'll have a look and pop into #sage-devel.
From conor.robinson at gmail.com Thu Mar 27 17:10:08 2008 From: conor.robinson at gmail.com (Conor Robinson) Date: Thu, 27 Mar 2008 14:10:08 -0700 Subject: [SciPy-user] matrix multiplication efficiency In-Reply-To: References: Message-ID:

In my experience numpy can be _much_ faster than matlab. Do you have a code snippet? Furthermore, did you compile numpy on your machine, and with what? Do you have a Fortran compiler? Something sounds fishy, because I've multiplied much larger matrices (e.g. your example x100) in no time at all. You may not have linked the BLAS or ATLAS libs at compile time. Even writing a raw Python function to multiply matrices of the size you're dealing with should take less time.

1. Make sure you linked BLAS or ATLAS, as well as compilers; check the config file.
2. A code snippet of how you went about this would help.
3. Pick up a copy of the numpy manual.

HTH,
Conor

On Thu, Mar 27, 2008 at 10:32 AM, Chiara Caronna wrote:
> Hi, I hope this is the right mailing list. I need to perform matrix
> multiplication with big matrices, and I realized that numpy is much
> slower compared to MATLAB; does anyone know why? [...]
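For what it's worth, Conor's "check the config" advice amounts to something like the following, with a small benchmark at the sizes from Chiara's message (on a properly ATLAS-linked build the product should take seconds rather than minutes, though the exact figure obviously depends on the machine):

    import time
    import numpy

    numpy.show_config()  # lists the BLAS/LAPACK numpy was built against

    a = numpy.random.rand(2000, 10000)
    b = numpy.random.rand(10000, 2000)
    t0 = time.time()
    c = numpy.dot(a, b)
    print "dot took", time.time() - t0, "seconds"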
From peridot.faceted at gmail.com Thu Mar 27 22:26:30 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 27 Mar 2008 22:26:30 -0400 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47EB9287.4000107@unibo.it> References: <47EA7FF6.4030409@unibo.it> <47EB9287.4000107@unibo.it> Message-ID:

On 27/03/2008, massimo sandal wrote:
> Anne Archibald ha scritto:
>
> > Since you are computing polynomial coefficients (which is unstable)
> > only to go back and compute the polynomial values at the same points
> > you fit, it should be possible to set things up so the numerical
> > instabilities are not really a problem - if some linear combination of
> > polynomials gives nearly zero values, then you might as well just take
> > it to be zero. If you want to look into this, you might try
> > implementing your own polynomial fitting using scipy.linalg.svd,
> > which will give you control over how you deal with this situation.
>
> Thanks for this tip, again.

Ah, I should have read the docstring on polyfit. In fact it works in just this way, so you can be reasonably confident that it will do the right thing. If you want to suppress the error you're seeing, try using "full=True" in your call to polyfit. This will return a number of values you don't care about, but will suppress the warning (because "rank" and "rcond" contain that information). It also contains the advice that subtracting the mean of your data before starting is a good idea.

> As for splines, I've not tried them. I asked people before about splines
> and they told me that they are even more unstable than polynomials,
> however! So I didn't even try. Again, however, thank you for telling me
> that.

I think they must be talking about splines without smoothing. The most usual kind of spline you see is forced to go through every data point, which is clearly not going to work for noisy data. But with some cleverness, implemented in scipy's splrep, you can find splines that are a least-squares fit in a particular sense (something like the straightest curve passing within one sigma of your data). In my experience they are extremely well-behaved.

It sounds like the answer to your original question should have been "go right ahead, you're fine". For what you're doing, polyfit should be quite robust, and the warning should not be a problem.

Good luck,
Anne
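The smoothing-spline route Anne recommends is only a few lines; a minimal sketch with an invented reference trace (s is the smoothing knob; with unit weights, roughly the number of points times the noise variance is a sane starting value):

    import numpy as N
    from scipy import interpolate

    # invented reference trace: line + gentle oscillation + noise
    x = N.linspace(0.0, 1.0, 2000)
    y = 2 * x + 0.1 * N.sin(12 * x) + 0.02 * N.random.randn(len(x))
    sigma = 0.02  # assumed noise level
    tck = interpolate.splrep(x, y, s=len(x) * sigma ** 2)
    baseline = interpolate.splev(x, tck)  # smooth curve to subtract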
From david at ar.media.kyoto-u.ac.jp Fri Mar 28 02:17:54 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 28 Mar 2008 15:17:54 +0900 Subject: [SciPy-user] matrix multiplication efficiency In-Reply-To: References: Message-ID: <47EC8D92.30308@ar.media.kyoto-u.ac.jp>

Conor Robinson wrote:
> In my experience numpy can be _much_ faster than matlab.

For matrix multiplication, relative speed between matlab and numpy will mostly depend on the BLAS. Up to 6.5 at least, matlab used ATLAS, which means that numpy should be as fast, if not faster (if you compiled ATLAS by yourself).

Here, the problem is that the debian package does not use ATLAS for numpy.dot, I think. If you build numpy by yourself, the problem is easy to fix, and that's what I would recommend if speed really is an issue. But for packages, it is not so easy to fix, because numpy.dot uses the cblas interface. Not all BLAS implementations have a CBLAS implementation, which is problematic since debian packages have to use the lowest common interface between all the available BLAS. It may be possible to use the BLAS interface instead as well in numpy, which is the best solution I think in the mid-term, but this requires someone to step in to do the work.

cheers,

David

From xavier.gnata at gmail.com Fri Mar 28 05:14:42 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Fri, 28 Mar 2008 10:14:42 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47EB8BB2.4020101@unibo.it> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> Message-ID: <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com>

On Thu, Mar 27, 2008 at 12:57 PM, massimo sandal wrote:
> Xavier Gnata ha scritto:
> > Well, think about it one second: a fit with a 12th-order polynomial will
> > always have a smaller least mean square error than a fit using a
> > <12th-order polynomial. You must figure this out before using polyfit.
>
> Well, this is reasonable, but it is not what I see happening. Often the
> algorithm takes the highest-degree polynomial, but not always.

Hum, we have a problem here: polyfit is a linear fit. It can (should?) be implemented using the pseudo-inverse paradigm, and the pseudo-inverse provides you with the solution that minimizes the chi2. As a result, a 12th-order fit cannot be worse than a 5th-order one. It should at least end up with the same chi2.

One trivial example:

In [9]: polyfit(arange(1000), 2*arange(1000)+1, 30)
C:\Python25\lib\site-packages\numpy\lib\polynomial.py:306: RankWarning:
Polyfit may be poorly conditioned
  warnings.warn(msg, RankWarning)
Out[9]:
array([  4.25295067e-93,  -1.90319088e-89,   2.47204987e-86,
         3.64986075e-84,  -2.04058052e-80,  -1.02156478e-77,
         1.38333948e-74,   1.84191780e-71,  -1.04858370e-69,
        -1.93210697e-65,  -1.30453334e-62,   1.07062138e-59,
         2.11173521e-56,   2.00842417e-54,  -2.16681833e-50,
        -1.21709485e-47,   1.99280602e-44,   1.67639356e-41,
        -2.35550438e-38,  -1.10826788e-35,   3.53529389e-32,
        -2.99768374e-29,   1.44456488e-26,  -4.49042306e-24,
         9.31940018e-22,  -1.28887179e-19,   1.15776067e-17,
        -6.43505943e-16,   2.04266284e-14,   2.00000000e+00,
         1.00000000e+00])

Of course, we also have numerical errors, but if they cannot be neglected, then you have found a problem which is really ill-conditioned (a huge range of values? values very close to inf or 0?)... or there is a problem in polyfit, but I cannot see such a bug.

Any comments?

Xavier

> > Moreover, the "smaller least mean square error" is not the driver in our
> > case, because high-order polynomials (very commonly) show large
> > oscillations in between the data points. That is not what you want.
> [...]

From peridot.faceted at gmail.com Fri Mar 28 12:36:25 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 28 Mar 2008 12:36:25 -0400 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> Message-ID:

On 28/03/2008, Xavier Gnata wrote:
> Hum, we have a problem here: polyfit is a linear fit. It can (should?)
> be implemented using the pseudo-inverse paradigm, and the pseudo-inverse
> provides you with the solution that minimizes the chi2. As a result, a
> 12th-order fit cannot be worse than a 5th-order one. It should at least
> end up with the same chi2.
> One trivial example:
>
> In [9]: polyfit(arange(1000), 2*arange(1000)+1, 30)
> C:\Python25\lib\site-packages\numpy\lib\polynomial.py:306: RankWarning:
> Polyfit may be poorly conditioned
>   warnings.warn(msg, RankWarning)
> Out[9]:
> array([  4.25295067e-93,  -1.90319088e-89,   2.47204987e-86,
>          [...]
>         -6.43505943e-16,   2.04266284e-14,   2.00000000e+00,
>          1.00000000e+00])
>
> Of course, we also have numerical errors, but if they cannot be
> neglected, then you have found a problem which is really ill-conditioned
> (a huge range of values? values very close to inf or 0?)... or there is
> a problem in polyfit, but I cannot see such a bug.
>
> Any comments?

What exactly is the problem here? This is the best fit that could be hoped for - the linear and constant terms are right, and the rest are zero to the best accuracy we can plausibly expect. If you are using a goodness-of-fit measure that takes into account the number of parameters fitted (Bayesian model comparison, say, or even a chi-squared that takes into account "degrees of freedom") this should be reported as a much worse fit than using only a linear polynomial.

The problem is ill-conditioned because 1000**30 is so much bigger than 999**30, as is 1000**29 compared to 999**29, that the vectors arange(1000)**30 and arange(1000)**29 are nearly identical. As the polyfit docstring says, this is a really horrible basis to use for polynomial fitting.

Since polyfit is quite sensibly based on the SVD, when it encounters some linear combination of basis vectors that nearly cancels, rather than introduce enormous coefficients to try to use this combination for fitting, it just discards it. This results in slightly poorer fits sometimes, but drastically reduces the numerical headaches that come with ill-conditioned matrices. Thus even if a higher-degree polynomial would fit the data better in a world of exact arithmetic, occasionally numerical issues force the discarding of some troublesome combinations of coefficients.

I do think it would be useful to have some mechanism for fitting and working with polynomials represented in other bases, so that these numerical issues would be reduced. This could go through at the same time as an improvement (by specialization) of some of scipy's orthogonal polynomial functions. But for now I can see no real problem with polyfit.

Anne
From xavier.gnata at gmail.com Fri Mar 28 17:06:04 2008 From: xavier.gnata at gmail.com (Gnata Xavier) Date: Fri, 28 Mar 2008 22:06:04 +0100 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> Message-ID: <47ED5DBC.8080508@gmail.com>

Anne Archibald wrote:
> What exactly is the problem here? This is the best fit that could be
> hoped for - the linear and constant terms are right, and the rest are
> zero to the best accuracy we can plausibly expect.
> [...]
> But for now I can see no real problem with polyfit.
>
> Anne

Anne: polyfit is fine. Quoting my earlier message: "a fit with a 12th-order polynomial will always have a smaller least mean square error than a fit using a <12th-order polynomial", to which Massimo replied: "Well, this is reasonable, but it is not what I see happening. Often the algorithm takes the highest-degree polynomial, but not always."

This simply cannot be. Polyfit is just fine. The goal of my trivial example is to show that polyfit is fine. My goal is only to tell Massimo that there must be a problem somewhere in his code.

Xavier
From keflavich at gmail.com Fri Mar 28 20:28:23 2008 From: keflavich at gmail.com (Keflavich) Date: Fri, 28 Mar 2008 17:28:23 -0700 (PDT) Subject: [SciPy-user] FITS images with header-supplied axes? Message-ID:

Is there any plotting routine in scipy / matplotlib that can plot a fits image with correct WCS coordinates on the axes? I know pyfits can load fits files, astLib has routines to interpret header coordinates, and I think you can make the axes different using matplotlib transforms, but is there anything that puts all three together currently available?

Thanks,
Adam

From pebarrett at gmail.com Sat Mar 29 11:15:47 2008 From: pebarrett at gmail.com (Paul Barrett) Date: Sat, 29 Mar 2008 11:15:47 -0400 Subject: [SciPy-user] FITS images with header-supplied axes? In-Reply-To: References: Message-ID: <40e64fa20803290815y6e1fa38au949e3017a5ced901@mail.gmail.com>

pyfits should have this capability. I would suggest contacting the people at the Space Telescope Science Institute, who are the ones that currently maintain pyfits, and asking them to include this feature. The other option is to add it to your own version of pyfits.

Cheers,
Paul

On Fri, Mar 28, 2008 at 8:28 PM, Keflavich wrote:
> Is there any plotting routine in scipy / matplotlib that can plot a
> fits image with correct WCS coordinates on the axes? [...]

From keflavich at gmail.com Sat Mar 29 11:35:18 2008 From: keflavich at gmail.com (Keflavich) Date: Sat, 29 Mar 2008 08:35:18 -0700 (PDT) Subject: [SciPy-user] FITS images with header-supplied axes? In-Reply-To: <40e64fa20803290815y6e1fa38au949e3017a5ced901@mail.gmail.com> References: <40e64fa20803290815y6e1fa38au949e3017a5ced901@mail.gmail.com> Message-ID:

Thanks, that sounds pretty reasonable. I'll e-mail stsci. I would try to implement something along those lines myself, but I got lost in the matplotlib transforms documentation. Even for something simple like making the x-axis go from 10 to 20 instead of 1 to 10 it seems like a difficult process, while with, say, contour, you can just specify the x- and y-axes in the function call. Nothing similar exists for imshow / figimage?

Thanks,
Adam

On Mar 29, 8:15 am, "Paul Barrett" wrote:
> pyfits should have this capability. I would suggest contacting the
> people at the Space Telescope Science Institute, who are the ones that
> currently maintain pyfits, and asking them to include this feature.
> The other option is to add it to your own version of pyfits.
> [...]
at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From emsellem at obs.univ-lyon1.fr Sat Mar 29 14:35:05 2008 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Sat, 29 Mar 2008 19:35:05 +0100 Subject: [SciPy-user] FITS images with header-supplied axes? Message-ID: <47EE8BD9.9020007@obs.univ-lyon1.fr> Hi, if you find a solution, please let me know. On my side I had to rewrite some dummy routines to use the CRVAL/CDELT/NAXIS etc headers to do the axes plotting. I am using "extent=" to have direct access to the coordinates in the plotting (imshow). But of course you first have to set the figure size, and then adapt the format (with some basic calculation) of your image/plot so that the pixels keep the right axis ratio. Eric > Is there any plotting routine in scipy / matplotlib that can plot a > fits image with correct WCS coordinates on the axes? I know pyfits > can load fits files, astLib has routines to interpret header > coordinates, and I think you can make the axes different using > matplotlib transforms, but is there anything that puts all three > together currently available? > > Thanks, > Adam From perry at stsci.edu Sun Mar 30 10:54:04 2008 From: perry at stsci.edu (Perry Greenfield) Date: Sun, 30 Mar 2008 10:54:04 -0400 (EDT) Subject: [SciPy-user] FITS images with header-supplied axes? Message-ID: <20080330105404.CRP80397@comet.stsci.edu> On Mar 28, 2008, at 8:28 PM, Keflavich wrote: > Is there any plotting routine in scipy / matplotlib that can plot a > fits image with correct WCS coordinates on the axes? I know pyfits > can load fits files, astLib has routines to interpret header coordinates, and I think you can make the axes different using > matplotlib transforms, but is there anything that puts all three > together currently available? > Thanks, > Adam Well, we (STScI) recently wrapped WCSLIB to obtain a mapping function between pixel and sky coordinates for python (you can find it as pywcs in astrolib on scipy; that may have been what you were referring to). But I'm not sure you understand what you are asking for with regard to matplotlib. The new transforms stuff should make it much easier to display the sky coordinates in the interactive display. The axis labeling is a different matter. Suppose your image (let's say it's 1Kx1K for the sake of discussion) is rotated 45 degrees with regard to north (either way, it doesn't really matter). What would you expect to see for axis labels? I don't think it is at all obvious how people would want labeling to be done along the edges of the image. I can imagine someone wanting axes or grids superimposed on the image itself, but that's not quite the same thing. Do you want the image rotated so that it is resampled onto RA and Dec and displayed that way? In any event, no, we haven't yet done anything to try to integrate all three things. Among other things we wanted to make sure that the api for the wcs info was suitable before doing a lot with it (and in the meantime, Mike is working on rewriting drizzle which is taking a lot of his time). Perry From keflavich at gmail.com Sun Mar 30 12:38:03 2008 From: keflavich at gmail.com (Keflavich) Date: Sun, 30 Mar 2008 09:38:03 -0700 (PDT) Subject: [SciPy-user] FITS images with header-supplied axes?
In-Reply-To: <20080330105404.CRP80397@comet.stsci.edu> References: <20080330105404.CRP80397@comet.stsci.edu> Message-ID: <1e83f451-f01d-4413-95ee-8af77a78d42c@s19g2000prg.googlegroups.com> > But I'm not sure you understand what you are asking for with regard to matplotlib. The new transforms stuff should make it much easier to display the sky coordinates in the interactive display. The axis labeling is a different matter. Suppose your image (let's say it's 1Kx1K for the sake of discussion) is rotated 45 degrees with regard to north (either way, it doesn't really matter). What would you expect to see for axis labels? I don't think it is at all obvious how people would want labeling to be done along the edges of the image. I can imagine someone wanting axes or grids superimposed on the image itself, but that's not quite the same thing. Do you want the image rotated so that it is resampled on to RA and Dec and displayed that way? I was thinking no resampling, just put an RA/DEC grid and fit the image into it as well as it can be. I don't know if it's possible to display rotated pixels, but that would be the most useful behavior in this case. I'm not sure I understand how the transforms would make it easier to display sky coordinates without labeling axes, though. Are you saying that if the image is already sampled in RA/DEC (or whatever coordinate system) space, then it should be easy to display the RA/DEC coordinates on the axes? Thanks, Adam From emsellem at obs.univ-lyon1.fr Sun Mar 30 16:26:02 2008 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Sun, 30 Mar 2008 22:26:02 +0200 Subject: [SciPy-user] FITS images with header-supplied axes? Message-ID: <47EFF75A.70702@obs.univ-lyon1.fr> > I'm not sure I understand how the transforms would make it easier to > display sky coordinates without labeling axes, though. Are you saying > that if the image is already sampled in RA/DEC (or whatever coordinate > system) space, then it should be easy to display the RA/DEC > coordinates on the axes? yes it is possible of course: I do it by using ax.pcolormesh and first creating the corners of each pixel with the shading='flat' option. I have been trying to make a "clean" automatic routine to do this in a very general way but so far I have only ended up with a simple dirty one. If you are interested I can always send it to you, but this does not really make use of the nice wcs functionalities that people at StSci have written up. Eric From akumar at iitm.ac.in Sun Mar 30 23:22:57 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Mon, 31 Mar 2008 08:52:57 +0530 Subject: [SciPy-user] Impulse/Step response troubles Message-ID: <20080331032257.GA7083@debian.akumar.iitm.ac.in> Dear SciPy users, (Long mail) I am having several problems in working out the impulse response of a simple function in SciPy. I would request you to please point out whether the fault lies with SciPy or me. The question is to determine (plot) the step and ramp responses of K / (s^2 + 8s + K) for K = 7, 16 and 80 So, the following code is used, and I'll keep changing a line: from scipy import * from pylab import * K = 16.0 # should be redone with 7.0, 16.0 and 80.0 r = signal.impulse(([K],[1.0,8.0,K]), T=r_[0:5:0.001]) plot(r[0], r[1], linewidth=2) show() The above code plots the impulse response of the given function. Now, on to the action: 1. Running the above code with K = 7.0 and K = 80.0 gives expected results.
However, running the code with K = 16.0 doesn't work; if K = 16.0, the response should be 16 * t * exp(-4t) * u(t), which it isn't. Changing K to 16 + or - .00000000001 fixes it. There surely is some problem when the value of K is such that a double root is hit. 2. For the step response, I change "impulse" in the above code to "step", and things seem all right. Again, for K = 16, it doesn't work. I then switch back to impulse, but make the second (denominator) array [1.0, 8.0, K, 0], and still the same thing is observed. 3. To get the ramp response, I now have the following code in line 8 of the above code. r = signal.step(([K],[1.0,8.0,K, 0]), T=r_[0:5:0.001]) But this causes the error: Traceback (most recent call last): File "", line 8, in ? File "/usr/lib/python2.4/site-packages/scipy/signal/ltisys.py", line 498, in step vals = lsim(sys, U, T, X0=X0) File "/usr/lib/python2.4/site-packages/scipy/signal/ltisys.py", line 403, in lsim ATm1 = linalg.inv(AT) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 306, in inv if info>0: raise LinAlgError, "singular matrix" numpy.linalg.linalg.LinAlgError: singular matrix So let's switch back to impulse: r = signal.impulse(([K],[1.0,8.0,K, 0.0, 0.0]), T=r_[0:5:0.001]) Now, it seems to work, but I have yet to verify the correctness of the plots. Naturally, for K = 16.0, it doesn't work; I have to use a value close to it. I would really appreciate help in getting this debugged. Thanks a lot! Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From dmitrey.kroshko at scipy.org Mon Mar 31 05:16:44 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 31 Mar 2008 12:16:44 +0300 Subject: [SciPy-user] Is solving sparse systems of linear equations possible w/o scipy? Message-ID: <47F0ABFC.5060504@scipy.org> hi all, is solving sparse systems of linear equations possible w/o scipy, via numpy tools only? For example, I have a 3 x n table (i, j, val) that defines a sparse matrix A, and I want to solve Ax=b. So does numpy/numpy.linalg have tools to solve the sparse problem? (It would allow using d2f and maybe d2c, d2h in the ralg solver.) I don't want to have a scipy dependence for this issue. Thank you in advance, D. From robert.kern at gmail.com Mon Mar 31 05:19:31 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 31 Mar 2008 04:19:31 -0500 Subject: [SciPy-user] Is solving sparse systems of linear equations possible w/o scipy? In-Reply-To: <47F0ABFC.5060504@scipy.org> References: <47F0ABFC.5060504@scipy.org> Message-ID: <3d375d730803310219x593e36cj11b4812a544eb2fa@mail.gmail.com> On Mon, Mar 31, 2008 at 4:16 AM, dmitrey wrote: > hi all, > is solving sparse systems of linear equations possible w/o scipy, via > numpy tools only? No. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From massimo.sandal at unibo.it Mon Mar 31 08:47:39 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 31 Mar 2008 14:47:39 +0200 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47ED5DBC.8080508@gmail.com> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> <47ED5DBC.8080508@gmail.com> Message-ID: <47F0DD6B.5010708@unibo.it> Gnata Xavier ha scritto: > Quoting : > > "Well, think about it one second: a fit with a 12th-order polynomial will > always have a smaller least-mean-square error than a fit using a <12th- > order polynomial. You must figure out why before using polyfit. > > >> Well, this is reasonable but it is not what I see happening. Often > the algorithm takes the highest-degree polynomial, but not always." > > This simply cannot be. Polyfit is just fine. The goal of my trivial > example is to show that polyfit is fine. > My goal is only to tell Massimo that there must be a problem somewhere in > his code. Pardon my utter ignorance, but imagine my data are just a straight line. How can a parabola, or a 3rd order polynomial, fit them better than a straight line itself? It is possible that when I pick <12-order polynomials, the error is the same for the higher degrees. Since I use index() to find the "best" one in a vector of values, probably it picks the first of them. Anyway my code, as silly as it is :) , seems to work. I have tested hundreds of data sets without a single failure. I'll make sure to use smoothed splines in the future, but it's not a priority now. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From rmay31 at gmail.com Mon Mar 31 11:35:40 2008 From: rmay31 at gmail.com (Ryan May) Date: Mon, 31 Mar 2008 10:35:40 -0500 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47F0DD6B.5010708@unibo.it> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> <47ED5DBC.8080508@gmail.com> <47F0DD6B.5010708@unibo.it> Message-ID: <47F104CC.2090004@gmail.com> massimo sandal wrote: > Gnata Xavier ha scritto: >> Quoting : >> >> "Well, think about it one second: a fit with a 12th-order polynomial >> will always have a smaller least-mean-square error than a fit using a >> <12th-order polynomial. You must figure out why before using polyfit. >> >> >> Well, this is reasonable but it is not what I see happening. Often >> the algorithm takes the highest-degree polynomial, but not always." >> >> This simply cannot be. Polyfit is just fine. The goal of my trivial >> example is to show that polyfit is fine. >> My goal is only to tell Massimo that there must be a problem somewhere >> in his code. > > Pardon my utter ignorance, but imagine my data are just a straight line. > How can a parabola, or a 3rd order polynomial, fit them better than a > straight line itself? > Just as it takes 2 points to define a line, it takes 3 points to define a parabola, and n+1 points to define an nth order polynomial.
Thus, you can define a polynomial that passes through all of your data points. This will produce an RMS error of 0. However, if you know the points lie along a line, this is clearly not optimal and you are simply fitting the curve to noise in the data. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From massimo.sandal at unibo.it Mon Mar 31 12:08:16 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 31 Mar 2008 18:08:16 +0200 Subject: [SciPy-user] Polyfit may be poorly conditioned In-Reply-To: <47F104CC.2090004@gmail.com> References: <47EA7FF6.4030409@unibo.it> <2a1f8a930803261017g73aa2f74kbcb2fbae3398a96a@mail.gmail.com> <47EB8BB2.4020101@unibo.it> <2a1f8a930803280214r6919d463w93c6927e8f5eceda@mail.gmail.com> <47ED5DBC.8080508@gmail.com> <47F0DD6B.5010708@unibo.it> <47F104CC.2090004@gmail.com> Message-ID: <47F10C70.7050906@unibo.it> Ryan May ha scritto: > massimo sandal wrote: >> Gnata Xavier ha scritto: >>> Quoting : > Just as it takes 2 points to define a line, it takes 3 points to define > a parabola, and n+1 points to define an nth order polynomial. Of course :) > Thus, you > can define a polynomial that passes through all of your data points. > This will produce an RMS error of 0. However, if you know the points > lie along a line, this is clearly not optimal and you are simply fitting > the curve to noise in the data. Yes. What I meant is that if the polynomial algorithm is trying to fit a straight line (let's forget about noise) with a high-order polynomial, with strictly non-zero coefficients, I wondered if it could be worse, numerically, since the best fit requires zero coefficients for anything with order > 1. Of course if we introduce noise, the polyfit will try to intersect the noise points, so it could not fit worse. I understand. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From akumar at iitm.ac.in Mon Mar 31 12:50:02 2008 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Mon, 31 Mar 2008 22:20:02 +0530 Subject: [SciPy-user] Impulse/Step response troubles In-Reply-To: <20080331032257.GA7083@debian.akumar.iitm.ac.in> References: <20080331032257.GA7083@debian.akumar.iitm.ac.in> Message-ID: <20080331165002.GA1241@debian.akumar.iitm.ac.in> Reply below quote. On Mon, Mar 31, 2008 at 08:52:57AM +0530, Kumar Appaiah wrote: > The question is to determine (plot) the step and ramp responses of > K / (s^2 + 8s + K) for K = 7, 16 and 80 > > So, the following code is used, and I'll keep changing a line: > > from scipy import * > from pylab import * > > K = 16.0 # should be redone with 7.0, 16.0 and 80.0 > > r = signal.impulse(([K],[1.0,8.0,K]), T=r_[0:5:0.001]) > plot(r[0], r[1], linewidth=2) > show() > > > The above code plots the impulse response of the given function. Now, > on to the action: > > 1. Running the above code with K = 7.0 and K = 80.0 gives expected > results. However, running the code with K = 16.0 doesn't work; if K = > 16.0, the response should be 16 * t * exp(-4t) * u(t), which it > isn't. Changing K to 16 + or - .00000000001 fixes it. There surely is > some problem when the value of K is such that a double root is hit. Well, I've zeroed in on the issue.
Running the code of signal.lti.impulse, we get here: s,v = linalg.eig(sys.A) vi = linalg.inv(v) Now, v is a matrix which holds the eigenvectors, which are EQUAL in this case: print v [[-0.9701425 -0.9701425 ] [ 0.24253563 0.24253563]] which is singular. However, the code goes on to invert this happily, resulting in a bad matrix and horrendous values. I would be surprised if nobody has encountered this yet (didn't find this on the Tickets). What would be the best way to handle repeated roots in the transfer function's denominator? Thanks. Kumar -- Kumar Appaiah, 458, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From asolway at seas.upenn.edu Mon Mar 31 14:19:51 2008 From: asolway at seas.upenn.edu (Alec Solway) Date: Mon, 31 Mar 2008 14:19:51 -0400 Subject: [SciPy-user] kmeans2 random initialization Message-ID: <20080331141951.3pkufn1h8gswgcsg@webmail.seas.upenn.edu> Hello, In using kmeans2 with the "random" initialization method, I often get: numpy.linalg.linalg.LinAlgError: Matrix is not positive definite - Cholesky decomposition cannot be computed I haven't dug too deeply into the implementation; I'm wondering if there's a gotcha someone can point out quickly here. What exactly is the "random" initialization method doing with the data set, and what constraints must the data have so as to keep whatever intermediate matrix it computes to be positive definite? Thanks. From doreen at aims.ac.za Mon Mar 31 14:53:39 2008 From: doreen at aims.ac.za (Doreen Mbabazi) Date: Mon, 31 Mar 2008 20:53:39 +0200 (SAST) Subject: [SciPy-user] Estimation of parameters while fitting data Message-ID: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> Hullo, I have as my fitting function a system of three differential equations and I am supposed to get the optimal parameters to use by fitting data to this system. I tried to use scipy.optimize.leastsq. The difficulty is with the function that calculates the difference between the data and the values from the fitting function. The result of the fitting function is a list with three values and yet I only need one of the values to be subtracted from the data value. I have tried to get something from PyDSTool but all in vain. If you can propose something for me to use or suggest a better method, I will be very grateful. Regards, Doreen. From ggellner at uoguelph.ca Mon Mar 31 15:07:12 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 31 Mar 2008 15:07:12 -0400 Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> Message-ID: <20080331190712.GA6796@encolpuis> > I have as my fitting function a system of three differential equations and I > am supposed to get the optimal parameters to use by fitting data to this > system. I tried to use scipy.optimize.leastsq. The difficulty is with the > function that calculates the difference between the data and the values > from the fitting function. The result of the fitting function is a list > with three values and yet I only need one of the values to be subtracted > from the data value. Could you not just index the returned value and subtract that? Say it is the first value you need; then just use value[0]. If not, can you give a small example? It will make it easier to help.
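For concreteness, a minimal sketch of this indexing approach (not from the thread; f, initial_y, p and t are the names from Doreen's code, while V_data stands for the measured trace and is hypothetical here). odeint returns one row of the state per requested time, so the third variable is the third column:

from scipy.integrate import odeint

# Integrate once over all measurement times; y has shape (len(t), 3).
y = odeint(f, initial_y, t, args=(p,))

# V is the third state variable, i.e. the third column of y.
V_model = y[:, 2]

# Residuals against the measured trace, suitable for optimize.leastsq.
err = V_data - V_model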
Gabriel From doreen at aims.ac.za Mon Mar 31 16:09:01 2008 From: doreen at aims.ac.za (Doreen Mbabazi) Date: Mon, 31 Mar 2008 22:09:01 +0200 (SAST) Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: <20080331190712.GA6796@encolpuis> References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> Message-ID: <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> Hi, Thanks, I tried to do that(by taking err = V-f(y,t,p)[2]) while defining the function residuals but the trouble is that actually f(y,t,p) calculates value of y at t0 so it cannot help me. What I want are the third values from y(y[i][2]). Below I have tried to do that but that gives particular values of y so my parameters are not optimized. def residuals(p, V, t): """The function is used to calculate the residuals """ for i in range(len(t)): err = V-y[i][2] return err #Function defined with y[0]=T,y[1]=T*,y[2] = V,lamda = p[0],d = p[1], k=p[2],delta=p[3], pi = p[4], c = p[5] initial_y = [10,0,10e-6] # initial conditions T(0)= 10cells , T*(0)=0, V(0)=10e-6 p is the list of parameters that are being estimated (lamda,d,k,delta,pi,c) def f(y,t,p): y_dot = [0,0,0] y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2] y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1] y_dot[2] = p[4]*y[1] - p[5]*y[2] return y_dot y = odeint(f,initial_y,t,args=(p,)) Doreen Gabriel Gellner >> I have as my fitting function a system of differential equations(3) and >> I >> am supposed to get the optimal parameters to use by fitting data to this >> system. I tried to use scipy.optimize.leastsq. The difficulty is with >> the >> function that calculates the difference between the data and the values >> from the fitting function. The result of the fitting function is a list >> with three values and yet I only need one of the values to be subtracted >> from the data value. > Could you not just subtract the index of the returned value? > Say it is the first value you need then just use value[0]. > > If not can you give a small example, it will make it easier to help. > > Gabriel > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rob.clewley at gmail.com Mon Mar 31 16:19:48 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 31 Mar 2008 16:19:48 -0400 Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> Message-ID: Hi Doreen, If you send me your code which you tried using PyDSTool, I will happily advise you on how to fix it. I'm sure it will be possible for me to fix it! -Rob On Mon, Mar 31, 2008 at 4:09 PM, Doreen Mbabazi wrote: > Hi, > > Thanks, I tried to do that(by taking err = V-f(y,t,p)[2]) while defining > the function residuals but the trouble is that actually f(y,t,p) > calculates value of y at t0 so it cannot help me. What I want are the > third values from y(y[i][2]). Below I have tried to do that but that gives > particular values of y so my parameters are not optimized. 
> > def residuals(p, V, t): > """The function is used to calculate the residuals > """ > for i in range(len(t)): > err = V-y[i][2] > return err > > #Function defined with y[0]=T,y[1]=T*,y[2] = V,lamda = p[0],d = p[1], > k=p[2],delta=p[3], pi = p[4], c = p[5] > initial_y = [10,0,10e-6] # initial conditions T(0)= 10cells , T*(0)=0, > V(0)=10e-6 > > p is the list of parameters that are being estimated (lamda,d,k,delta,pi,c) > def f(y,t,p): > y_dot = [0,0,0] > y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2] > y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1] > y_dot[2] = p[4]*y[1] - p[5]*y[2] > return y_dot > > y = odeint(f,initial_y,t,args=(p,)) > > Doreen > > Gabriel Gellner > > > >> I have as my fitting function a system of differential equations(3) and > >> I > >> am supposed to get the optimal parameters to use by fitting data to this > >> system. I tried to use scipy.optimize.leastsq. The difficulty is with > >> the > >> function that calculates the difference between the data and the values > >> from the fitting function. The result of the fitting function is a list > >> with three values and yet I only need one of the values to be subtracted > >> from the data value. > > Could you not just subtract the index of the returned value? > > Say it is the first value you need then just use value[0]. > > > > If not can you give a small example, it will make it easier to help. > > > > Gabriel > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From doreen at aims.ac.za Mon Mar 31 16:44:52 2008 From: doreen at aims.ac.za (Doreen Mbabazi) Date: Mon, 31 Mar 2008 22:44:52 +0200 (SAST) Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> Message-ID: <60321.192.168.42.175.1206996292.squirrel@webmail.aims.ac.za> Hi, Thanks, unfortunately I didn't write any code in PyDSTool because I am not conversant with it at all. I was just reading something about it. Doreen. Rob Clewley > Hi Doreen, > > If you send me your code which you tried using PyDSTool, I will > happily advise you on how to fix it. I'm sure it will be possible for > me to fix it! > -Rob > > On Mon, Mar 31, 2008 at 4:09 PM, Doreen Mbabazi wrote: >> Hi, >> >> Thanks, I tried to do that(by taking err = V-f(y,t,p)[2]) while >> defining >> the function residuals but the trouble is that actually f(y,t,p) >> calculates value of y at t0 so it cannot help me. What I want are the >> third values from y(y[i][2]). Below I have tried to do that but that >> gives >> particular values of y so my parameters are not optimized. 
>> >> def residuals(p, V, t): >> """The function is used to calculate the residuals >> """ >> for i in range(len(t)): >> err = V-y[i][2] >> return err >> >> #Function defined with y[0]=T,y[1]=T*,y[2] = V,lamda = p[0],d = p[1], >> k=p[2],delta=p[3], pi = p[4], c = p[5] >> initial_y = [10,0,10e-6] # initial conditions T(0)= 10cells , T*(0)=0, >> V(0)=10e-6 >> >> p is the list of parameters that are being estimated >> (lamda,d,k,delta,pi,c) >> def f(y,t,p): >> y_dot = [0,0,0] >> y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2] >> y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1] >> y_dot[2] = p[4]*y[1] - p[5]*y[2] >> return y_dot >> >> y = odeint(f,initial_y,t,args=(p,)) >> >> Doreen >> >> Gabriel Gellner >> >> >> >> I have as my fitting function a system of differential equations(3) >> and >> >> I >> >> am supposed to get the optimal parameters to use by fitting data to >> this >> >> system. I tried to use scipy.optimize.leastsq. The difficulty is >> with >> >> the >> >> function that calculates the difference between the data and the >> values >> >> from the fitting function. The result of the fitting function is a >> list >> >> with three values and yet I only need one of the values to be >> subtracted >> >> from the data value. >> > Could you not just subtract the index of the returned value? >> > Say it is the first value you need then just use value[0]. >> > >> > If not can you give a small example, it will make it easier to help. >> > >> > Gabriel >> > _______________________________________________ >> > SciPy-user mailing list >> > SciPy-user at scipy.org >> > http://projects.scipy.org/mailman/listinfo/scipy-user >> > >> >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Robert H. Clewley, Ph. D. > Assistant Professor > Department of Mathematics and Statistics > Georgia State University > 720 COE, 30 Pryor St > Atlanta, GA 30303, USA > > tel: 404-413-6420 fax: 404-651-2246 > http://www.mathstat.gsu.edu/~matrhc > http://brainsbehavior.gsu.edu/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Mbabazi Doreen, African Institute for Mathematical Sciences, 6 Melrose Road, Muizeinberg, 7945 Cape Town | South Africa Email:doreen at aims.ac.za 'It is not when we are strong but when we are weak that God's power is shown in our lives.' From peridot.faceted at gmail.com Mon Mar 31 17:51:42 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 31 Mar 2008 17:51:42 -0400 Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> Message-ID: On 31/03/2008, Doreen Mbabazi wrote: > Thanks, I tried to do that(by taking err = V-f(y,t,p)[2]) while defining > the function residuals but the trouble is that actually f(y,t,p) > calculates value of y at t0 so it cannot help me. What I want are the > third values from y(y[i][2]). Below I have tried to do that but that gives > particular values of y so my parameters are not optimized. 
> > def residuals(p, V, t): > """The function is used to calculate the residuals > """ > for i in range(len(t)): > err = V-y[i][2] > return err > > #Function defined with y[0]=T,y[1]=T*,y[2] = V,lamda = p[0],d = p[1], > k=p[2],delta=p[3], pi = p[4], c = p[5] > initial_y = [10,0,10e-6] # initial conditions T(0)= 10cells , T*(0)=0, > V(0)=10e-6 > > p is the list of parameters that are being estimated (lamda,d,k,delta,pi,c) > def f(y,t,p): > y_dot = [0,0,0] > y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2] > y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1] > y_dot[2] = p[4]*y[1] - p[5]*y[2] > return y_dot > > y = odeint(f,initial_y,t,args=(p,)) First of all, I'm not totally sure I have correctly understood your problem. Let me state it as I understand it: You have a collection of points, (x[i], y[i]). You also have a model y = S(x, P[j]) giving y as a function of x and some parameters P. This function is not given explicitly, instead you know that it is the solution to a certain differential equation, with initial values and parameters given by P. You want to find the values of P that minimize sum((y[i] - S(x[i],P))**2). Is that about right? (The differential equation is actually expressed as three coupled ODEs, but that's not really a problem.) The easiest-to-understand way to solve the problem is probably to start by writing a python function S that behaves like S does. Of course, it has to be computed by solving the ODE, which means we're going to have to solve the ODE a zillion times, but that's okay, that's what computers are for. def S(x, P): ys = odeint(f, initial_y, [0, x], args=(P,)) return ys[1, 2] Now check that this function looks vaguely right (perhaps by plotting it, or checking that the values that come out are sensible). Now you can do quite ordinary least-squares fitting: def residuals(P,x,y): return [y[i] - S(x[i],P) for i in xrange(len(x))] Pbest, ier = scipy.optimize.leastsq(residuals, Pguess, args=(x,y)) This should work, and be understandable. But it is not very efficient, since for every set of parameters, we solve the ODE len(x) times. We can improve things by using the fact that odeint can return the integrated function evaluated at a list of places. So we'd modify S to accept a list of xs, and return the S values at all those places. This would even simplify our residuals function: def S(xs, P): ys = odeint(f, initial_y, numpy.concatenate(([0],xs)), args=(P,)) return ys[1:, 2] def residuals(P,xs,ys): return ys - S(xs, P) Is this the problem you were trying to solve? I suggest getting used to how odeint and leastsq work first, then combining them. Their arguments can be weird, in particular the way odeint expects the initial x to be the first element of the xs you want your ode integrated to. Good luck, Anne From justus.schwabedal at gmx.de Mon Mar 31 20:07:36 2008 From: justus.schwabedal at gmx.de (Justus Schwabedal) Date: Tue, 1 Apr 2008 02:07:36 +0200 Subject: [SciPy-user] Estimation of parameters while fitting data In-Reply-To: References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> Message-ID: <9EA1086B-5881-4E35-93D5-291F26DB8D88@gmx.de> The problem you are dealing with is not easily solved, especially when you are dealing with a nonlinear differential equation. However, there are methods available, though they involve a little background knowledge. I can give you some literature: Baake, Baake, Bock, Briggs: Fitting ordinary differential equations to chaotic data, Phys. Rev.
A (1990) These authors use a special kind of least-squares fitting, namely a multiple shooting method. In their work it seems to perform reasonably well; however, I think it is very unstable under the influence of any noise. Parlitz, Junge, Kocarev: Synchronization-based estimation of parameters from time series, PRE (1996) I wrote my diploma thesis on this method and was able to estimate parameters from EEG time series of epileptic patients. The idea is that the model will synchronize with the data when the model parameters match the (suspected) parameters of the system best. If your model fits the data well, you should stick to this. I would send you my thesis, but it's written in German so it probably won't be of any help. If you have questions, please ask. Yours Justus On Mar 31, 2008, at 11:51 PM, Anne Archibald wrote: > On 31/03/2008, Doreen Mbabazi wrote: > >> Thanks, I tried to do that(by taking err = V-f(y,t,p)[2]) while >> defining >> the function residuals but the trouble is that actually f(y,t,p) >> calculates value of y at t0 so it cannot help me. What I want are the >> third values from y(y[i][2]). Below I have tried to do that but >> that gives >> particular values of y so my parameters are not optimized. >> >> def residuals(p, V, t): >> """The function is used to calculate the residuals >> """ >> for i in range(len(t)): >> err = V-y[i][2] >> return err >> >> #Function defined with y[0]=T,y[1]=T*,y[2] = V,lamda = p[0],d = p[1], >> k=p[2],delta=p[3], pi = p[4], c = p[5] >> initial_y = [10,0,10e-6] # initial conditions T(0)= 10cells , >> T*(0)=0, >> V(0)=10e-6 >> >> p is the list of parameters that are being estimated >> (lamda,d,k,delta,pi,c) >> def f(y,t,p): >> y_dot = [0,0,0] >> y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2] >> y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1] >> y_dot[2] = p[4]*y[1] - p[5]*y[2] >> return y_dot >> >> y = odeint(f,initial_y,t,args=(p,)) > > First of all, I'm not totally sure I have correctly understood your > problem. Let me state it as I understand it: > > You have a collection of points, (x[i], y[i]). You also have a model y > = S(x, P[j]) giving y as a function of x and some parameters P. This > function is not given explicitly, instead you know that it is the > solution to a certain differential equation, with initial values and > parameters given by P. You want to find the values of P that minimize > sum((y[i] - S(x[i],P))**2). > > Is that about right? (The differential equation is actually expressed > as three coupled ODEs, but that's not really a problem.) > > The easiest-to-understand way to solve the problem is probably to > start by writing a python function S that behaves like S does. Of > course, it has to be computed by solving the ODE, which means we're > going to have to solve the ODE a zillion times, but that's okay, > that's what computers are for. > > def S(x, P): > ys = odeint(f, initial_y, [0, x], args=(P,)) > return ys[1, 2] > > Now check that this function looks vaguely right (perhaps by plotting > it, or checking that the values that come out are sensible). > > Now you can do quite ordinary least-squares fitting: > > def residuals(P,x,y): > return [y[i] - S(x[i],P) for i in xrange(len(x))] > > Pbest, ier = scipy.optimize.leastsq(residuals, Pguess, args=(x,y)) > > This should work, and be understandable. But it is not very efficient, > since for every set of parameters, we solve the ODE len(x) times. We > can improve things by using the fact that odeint can return the > integrated function evaluated at a list of places.
So we'd modify S to > accept a list of xs, and return the S values at all those places. This > would even simplify our residuals function: > > def S(xs, P): > ys = odeint(f, initial_y, numpy.concatenate(([0],xs)), args=(P,)) > return ys[1:, 2] > > def residuals(P,xs,ys): > return ys - S(xs, P) > > Is this the problem you were trying to solve? I suggest getting > used to how odeint and leastsq work first, then combining them. Their > arguments can be weird, in particular the way odeint expects the > initial x to be the first element of the xs you want your ode integrated to. > > Good luck, > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From perry at stsci.edu Mon Mar 31 21:59:39 2008 From: perry at stsci.edu (Perry Greenfield) Date: Mon, 31 Mar 2008 21:59:39 -0400 Subject: [SciPy-user] FITS images with header-supplied axes? In-Reply-To: <1e83f451-f01d-4413-95ee-8af77a78d42c@s19g2000prg.googlegroups.com> References: <20080330105404.CRP80397@comet.stsci.edu> <1e83f451-f01d-4413-95ee-8af77a78d42c@s19g2000prg.googlegroups.com> Message-ID: <9AA7B346-74BF-4A77-AC67-29CE8A576695@stsci.edu> On Mar 30, 2008, at 12:38 PM, Keflavich wrote: > > I was thinking no resampling, just put an RA/DEC grid and fit the > image into it as well as it can be. I don't know if it's possible to > display rotated pixels, but that would be the most useful behavior in > this case. > Just to clarify, do you mean nearest neighbor regridding? It does sound like you mean that you would like to see the image rotated to have north up. Right? > I'm not sure I understand how the transforms would make it easier to > display sky coordinates without labeling axes, though. Are you saying > that if the image is already sampled in RA/DEC (or whatever coordinate > system) space, then it should be easy to display the RA/DEC > coordinates on the axes? > The transforms machinery would allow displaying the image in its original orientation and then, when the cursor was moved over the image, the displayed x, y coordinates could display RA, DEC instead of pixel coordinates. But if you desire to rotate the image and have it aligned with north, that capability isn't really important. Doing the rotated image needs some tool to do the rotation (quickly, presumably, rather than a precise mapping) and then something to display that as part of a larger tool. That would be much more straightforward. But we haven't made such a tool yet. If someone does it first, it sure would be nice to add, at least as part of an astronomy toolkit. Perry From robert.kern at gmail.com Mon Mar 31 23:17:12 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 31 Mar 2008 22:17:12 -0500 Subject: [SciPy-user] kmeans2 random initialization In-Reply-To: <20080331141951.3pkufn1h8gswgcsg@webmail.seas.upenn.edu> References: <20080331141951.3pkufn1h8gswgcsg@webmail.seas.upenn.edu> Message-ID: <3d375d730803312017kb600709n8231dd9669f52414@mail.gmail.com> On Mon, Mar 31, 2008 at 1:19 PM, Alec Solway wrote: > Hello, > > In using kmeans2 with the "random" initialization method, I often get: > > numpy.linalg.linalg.LinAlgError: Matrix is not positive definite - > Cholesky decomposition cannot be computed > > I haven't dug too deeply into the implementation; I'm wondering if > there's a gotcha someone can point out quickly here.
What exactly is > the "random" initialization method doing with the data set, and what > constraints must the data have so as to keep whatever intermediate > matrix it computes to be positive definite? The relevant function is scipy/cluster/vq.py:_krandinit(). It is finding the covariance matrix and manually doing a multivariate normal sampling. Your data is most likely degenerate and not of full rank. It's arguable whether or not this should fail, but numpy.random.multivariate_normal() uses the SVD instead of a Cholesky decomposition to find the matrix square root, so it sort of ignores non-positive definiteness. Try replacing the last 2 lines of _krandinit() with x = N.random.multivariate_normal(mu, cov, k) and see if that helps you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
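To see why the suggestion sidesteps the error, here is a small standalone illustration (not from the thread; it assumes only numpy, imported as np): Cholesky factorization fails on a rank-deficient covariance matrix, while the SVD-based square root used by multivariate_normal tolerates it.

import numpy as np

# A degenerate (rank-1) covariance matrix, as can arise from data that
# is not of full rank.
cov = np.array([[1.0, 2.0],
                [2.0, 4.0]])
mu = np.zeros(2)

# Cholesky requires strict positive definiteness, so it raises here;
# this is the failure that kmeans2's manual sampling runs into.
try:
    np.linalg.cholesky(cov)
except np.linalg.LinAlgError:
    pass

# multivariate_normal builds its matrix square root from the SVD, so it
# ignores the non-positive definiteness and still draws k samples.
x = np.random.multivariate_normal(mu, cov, 5)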