From cournape at gmail.com  Mon Sep  1 03:13:51 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 1 Sep 2008 16:13:51 +0900
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <86f16dc10808311352k32bebbf7qd67b77b936b6973b@mail.gmail.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com>
	<20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
	<86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com>
	<86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>
	<86f16dc10808311352k32bebbf7qd67b77b936b6973b@mail.gmail.com>
Message-ID: <5b8d13220809010013k5e54ceb0sfcd5460b059ecb0c@mail.gmail.com>

On Mon, Sep 1, 2008 at 5:52 AM, Ed McCaffrey wrote:
> Thanks for the reply. I had not heard of audiolab before, but I just tried
> using it.
>
> Looking at audiolab made me realize that I had forgotten how a .wav stores
> the data for multiple channels, so that was why the spectrogram I generated
> before looked so odd.

You should not have to care how it is stored, normally. audiolab gives
you one column per channel; audiolab is just a wrapper around sndfile,
which handles interleaving/deinterleaving internally if necessary.

Also, this is not well documented, unfortunately, but if you don't have
advanced needs, you can use the high-level API a la matlab:

from scikits.audiolab import wavread

If you want to compute the spectrogram without matplotlib, this is not
too difficult: a spectrogram is a short-time Fourier transform, that is,
a Fourier transform computed on windowed parts of your signal:

http://en.wikipedia.org/wiki/Short-time_Fourier_transform

I hope to include in numpy, after the 1.2 release, a tool (implemented
by A. Archibald) for automatically segmenting a signal into a matrix of
overlapping windows. With this, a spectrogram is 2-3 lines away in pure
numpy.
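Something along these lines, for instance (a minimal sketch only, to
show the idea; the frame length and hop size below are arbitrary
example parameters, not an existing numpy API):

import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    # slice the signal into overlapping frames and window each frame
    win = np.hanning(frame_len)
    nframes = 1 + (len(x) - frame_len) // hop
    frames = np.array([x[i * hop:i * hop + frame_len] * win
                       for i in range(nframes)])
    # one FFT per frame; keep the positive frequencies, return power
    return np.abs(np.fft.rfft(frames, axis=-1)) ** 2

Each row of the result is the power spectrum of one windowed segment,
so plotting its transpose against time gives the usual picture.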
cheers,

David

From luca.ciciriello at email.it  Mon Sep  1 05:56:28 2008
From: luca.ciciriello at email.it (luca.ciciriello at email.it)
Date: Mon, 1 Sep 2008 11:56:28 +0200
Subject: [SciPy-user] install SciPy on Mac OS X 10.4
Message-ID: <8687ec0214b346df50502c9af4a8f9b4@85.18.140.153>

Hi All.
I've installed NumPy 1.1.1 and now I want to install SciPy 0.6.0 on my
Mac OS X (10.4 PPC). I've got scipy-0.6.0.tar.gz.tar from the SciPy
site. On my system Xcode 2.5 is installed, so my default compiler is
gcc 4.0.1. I've read in INSTALL.TXT in the SciPy distribution that
SciPy is not so compatible with this compiler, and that I should try
typing gcc_select 3.3. My question is: where do I type this
gcc_select 3.3?
Thanks in advance for any answer.

Luca

From ed at edmccaffrey.net  Mon Sep  1 07:20:12 2008
From: ed at edmccaffrey.net (Ed McCaffrey)
Date: Mon, 1 Sep 2008 07:20:12 -0400
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <5b8d13220809010013k5e54ceb0sfcd5460b059ecb0c@mail.gmail.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com>
	<20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
	<86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com>
	<86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>
	<86f16dc10808311352k32bebbf7qd67b77b936b6973b@mail.gmail.com>
	<5b8d13220809010013k5e54ceb0sfcd5460b059ecb0c@mail.gmail.com>
Message-ID: <86f16dc10809010420p4df752eer9f487ad83bb552de@mail.gmail.com>

On Mon, Sep 1, 2008 at 3:13 AM, David Cournapeau wrote:

> You should not have to care how it is stored, normally. audiolab gives
> you one column per channel; audiolab is just a wrapper around sndfile,
> which handles interleaving/deinterleaving internally if necessary.

I was referring to earlier, before I used audiolab and got an odd
spectrogram.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Mon Sep  1 10:23:22 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 1 Sep 2008 23:23:22 +0900
Subject: [SciPy-user] install SciPy on Mac OS X 10.4
In-Reply-To: <8687ec0214b346df50502c9af4a8f9b4@85.18.140.153>
References: <8687ec0214b346df50502c9af4a8f9b4@85.18.140.153>
Message-ID: <5b8d13220809010723h47d34accgb9dedeade20348a4@mail.gmail.com>

On Mon, Sep 1, 2008 at 6:56 PM,  wrote:
>
> Hi All.
> I've installed NumPy 1.1.1 and now I want to install SciPy 0.6.0 on my
> Mac OS X (10.4 PPC). I've got scipy-0.6.0.tar.gz.tar from the SciPy
> site. On my system Xcode 2.5 is installed, so my default compiler is
> gcc 4.0.1. I've read in INSTALL.TXT in the SciPy distribution that
> SciPy is not so compatible with this compiler, and that I should try
> typing gcc_select 3.3.

Hm, that information is really obsolete. Please ignore it; we should
update it.

cheers,

David

From c.j.lee at tnw.utwente.nl  Mon Sep  1 10:40:00 2008
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Mon, 1 Sep 2008 16:40:00 +0200
Subject: [SciPy-user] install SciPy on Mac OS X 10.4
In-Reply-To: <5b8d13220809010723h47d34accgb9dedeade20348a4@mail.gmail.com>
References: <8687ec0214b346df50502c9af4a8f9b4@85.18.140.153>
	<5b8d13220809010723h47d34accgb9dedeade20348a4@mail.gmail.com>
Message-ID: 

The enthought distro will do all this for you as well.
(http://www.enthought.com/products/epdacademic.php?ver=MacOSX )

Cheers
Chris

On Sep 1, 2008, at 4:23 PM, David Cournapeau wrote:

> On Mon, Sep 1, 2008 at 6:56 PM,  wrote:
>>
>> Hi All.
>> I've installed NumPy 1.1.1 and now I want to install SciPy 0.6.0 on
>> my Mac OS X (10.4 PPC). I've got scipy-0.6.0.tar.gz.tar from the
>> SciPy site. On my system Xcode 2.5 is installed, so my default
>> compiler is gcc 4.0.1. I've read in INSTALL.TXT in the SciPy
>> distribution that SciPy is not so compatible with this compiler, and
>> that I should try typing gcc_select 3.3.
>
> Hm, that information is really obsolete. Please ignore it; we should
> update it.
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

***************************************************
Chris Lee
Laser Physics and Nonlinear Optics Group
MESA+ Research Institute for Nanotechnology
University of Twente
Phone: ++31 (0)53 489 3968
fax: ++31 (0)53 489 1102
***************************************************

From jeff.lyon at cox.net  Mon Sep  1 11:41:52 2008
From: jeff.lyon at cox.net (Jeff Lyon)
Date: Mon, 1 Sep 2008 08:41:52 -0700
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com>
	<20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
Message-ID: 

Hello,

I tried to run the spectrogram.py example and I appear to be having
configuration problems. I have installed the latest enthought distro,
but the enable module can't seem to find its api component. Any
thoughts?

~ jeff$ python
Enthought Python Distribution (2.5.2001) -- http://code.enthought.com
Python 2.5.2 |EPD 2.5.2001| (r252:60911, Jul  1 2008, 19:18:12)
[GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import spectrum.py
Traceback (most recent call last):
  File "", line 1, in 
  File "spectrum.py", line 19, in 
    from enthought.enable.api import Window
ImportError: No module named api

Thanks,

Jeff Lyon

On Aug 30, 2008, at 6:28 PM, Peter Wang wrote:

> Quoting Ed McCaffrey :
>
>> I wrote a program in C# that creates a spectrogram from the waveform
>> of a .wav music file. I now want to port it to Python, and I want to
>> try to use SciPy instead of a direct port of the existing code,
>> because I am not sure that it is perfectly accurate, and it is
>> probably slow.
>>
>> I am having a hard time finding out how to do this with SciPy. With
>> my code, I had a FFT function that took an array of real and
>> imaginary components for each sample, and a second function taking
>> both that produced the amplitude. The FFT function in SciPy just
>> takes one array.
>>
>> Has anyone done this task in SciPy?
>
> We have a realtime spectrogram plot in the Audio Spectrum example for
> Chaco. (See the very last screenshot on the gallery page here:
> http://code.enthought.com/projects/chaco/gallery.php)
>
> You can see the full source code of the example here:
> https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py
>
> The lines you would be interested in are the last few:
>
>     def get_audio_data():
>         pa = PyAudio()
>         stream = pa.open(format=paInt16, channels=1, rate=SAMPLING_RATE,
>                          input=True, frames_per_buffer=NUM_SAMPLES)
>         string_audio_data = stream.read(NUM_SAMPLES)
>         audio_data = fromstring(string_audio_data, dtype=short)
>         normalized_data = audio_data / 32768.0
>         return (abs(fft(normalized_data))[:NUM_SAMPLES/2],
>                 normalized_data)
>
> Here we are using the PyAudio library to directly read from the sound
> card, normalize the 16-bit data, and perform an FFT on it.
>
> In your case, since you are reading a WAV file, you might be
> interested in the zoomed_plot example:
> http://code.enthought.com/projects/chaco/pu-zooming-plot.html
>
> This displays the time-space signal but can easily be modified to show
> the FFT.
> Here is the relevant code that uses the built-in python 'wave' module
> to read the data:
> https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/zoomed_plot/wav_to_numeric.py
>
> You should be able to take the 'data' array in the wav_to_numeric
> function and hand that in to the fft function.
>
>
> -Peter
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bernardo.rocha at meduni-graz.at  Tue Sep  2 05:47:51 2008
From: bernardo.rocha at meduni-graz.at (bernardo martins rocha)
Date: Tue, 02 Sep 2008 11:47:51 +0200
Subject: [SciPy-user] matplotlib - axis and zoom
Message-ID: <48BD0BC7.9080502@meduni-graz.at>

Hi Guys,

how can I zoom in on the horizontal axis while keeping the vertical
axis unchanged, using the zoom tool provided by matplotlib?

Bernardo M. Rocha

From cimrman3 at ntc.zcu.cz  Tue Sep  2 08:14:14 2008
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 02 Sep 2008 14:14:14 +0200
Subject: [SciPy-user] ANN: SfePy-00.50.00
Message-ID: <48BD2E16.4080307@ntc.zcu.cz>

I am pleased to announce the release of SfePy 00.50.00.

SfePy is a finite element analysis software in Python, based primarily
on Numpy and SciPy.

Mailing lists, issue tracking, mercurial repository: http://sfepy.org
Home page: http://sfepy.kme.zcu.cz

People who contributed to this release: Ondrej Certik, Ryan Krauss,
Vladimir Lukes.

Major improvements:
- finite strain elasticity: neo-Hookean, Mooney-Rivlin materials in the
  total Lagrangian (TL) formulation
- solving problems in complex numbers
- generalized equations to allow linear combination of terms
- run-time type of state term arguments
- refactoring to follow Python coding style guidelines
- new terms

For more information on this release, see
http://sfepy.googlecode.com/svn/web/releases/005000_RELEASE_NOTES.txt

Best regards,
Robert Cimrman

From christophe.grimault at novagrid.com  Tue Sep  2 13:11:22 2008
From: christophe.grimault at novagrid.com (christophe grimault)
Date: Tue, 02 Sep 2008 19:11:22 +0200
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <48BD2E16.4080307@ntc.zcu.cz>
References: <48BD2E16.4080307@ntc.zcu.cz>
Message-ID: <1220375482.3005.13.camel@pandora.novagrid.com>

Hi,

I have an application that is very demanding in memory resources. So I
started to look more closely at python + numpy/scipy as far as memory
is concerned.

I can't explain the following :

I start my python, + import scipy. A 'top' in the console shows that :

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python

Now after typing :

z = scipy.arange(1000000)

I get :
14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python

So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the data
type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++ at
least).

It gets even worse with complex float. I tried :
z = arange(1000000) + 1j*arange(1000000)

Expecting 8 Mb, since z.dtype gives "complex64", the "top" shows an
increase of 31 Mb.

This is very annoying. Can someone explain this ? Is there a way to
create numpy arrays with the same (approximately ! I know the array
class adds some overhead...) memory footprint as in C/C++ ?
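(For reference, the size of the raw array buffer itself can be checked
directly; the snippet below is only an illustration -- nbytes is just
itemsize * size, so it counts the data buffer only, not any interpreter
overhead:)

import numpy as np

z = np.arange(1000000)
print(z.nbytes)   # raw data buffer size in bytes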
Thanks in advance

From robert.kern at gmail.com  Tue Sep  2 15:10:47 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 2 Sep 2008 14:10:47 -0500
Subject: [SciPy-user] matplotlib - axis and zoom
In-Reply-To: <48BD0BC7.9080502@meduni-graz.at>
References: <48BD0BC7.9080502@meduni-graz.at>
Message-ID: <3d375d730809021210p15f996ebq225228063f3a9450@mail.gmail.com>

On Tue, Sep 2, 2008 at 04:47, bernardo martins rocha wrote:
> Hi Guys,
>
> how can I zoom in on the horizontal axis while keeping the vertical
> axis unchanged, using the zoom tool provided by matplotlib?

You will want to ask on the matplotlib list:
https://lists.sourceforge.net/lists/listinfo/matplotlib-users

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cournape at gmail.com  Tue Sep  2 19:19:10 2008
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 3 Sep 2008 08:19:10 +0900
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <1220375482.3005.13.camel@pandora.novagrid.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
Message-ID: <5b8d13220809021619g3927dc47y6bddb687e19e1ce8@mail.gmail.com>

On Wed, Sep 3, 2008 at 2:11 AM, christophe grimault wrote:
> Hi,
>
> I have an application that is very demanding in memory resources. So I
> started to look more closely at python + numpy/scipy as far as memory
> is concerned.

If you are really tight on memory, you will have problems with python
and most programming languages, which do not let you control memory in
a fine-grained manner. Now, it depends on what you mean by memory
demanding: if you have barely enough memory for holding your data, it
will be extremely difficult to do it in python, and difficult to do in
any language, including C and other manually managed languages.

>
> I can't explain the following :
>
> I start my python, + import scipy. A 'top' in the console shows that :
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
> 14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python
>
> Now after typing :
>
> z = scipy.arange(1000000)
>
> I get :
> 14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python
>
> So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the data
> type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++ at
> least).

a = scipy.arange(1e6)
a.itemsize * a.size

gives me 8e6 bytes. arange is float64 by default, and I get a similar
memory increase (~ 8Mb).

>
> It gets even worse with complex float. I tried :
> z = arange(1000000) + 1j*arange(1000000)
>
> Expecting 8 Mb,

Again, this is strange, it should default to float128. Which version
of numpy/scipy are you using ? I do not get unexpected results on my
machine; results may vary because the memory allocator in python tends
to overcommit to avoid reallocating all the time, but IIRC, data are
allocated with malloc and not the python allocator in numpy. More
importantly, though, that's not really representative of a typical
numpy program, and it would depend on what you are doing anyway.

> This is very annoying. Can someone explain this ? Is there a way to
> create numpy arrays with the same (approximately ! I know the array
> class adds some overhead...) memory footprint as in C/C++ ?

Arrays themselves have a similar footprint as in C/C++ (for big arrays,
where data >> array structure overhead).
But you will quickly find that, depending on what you are doing (linear
algebra, for example), you will need copies. Note that the same problem
exists in C/C++; it is very difficult to avoid (you need things like
expression templates and co).

cheers,

David

From robert.kern at gmail.com  Tue Sep  2 19:44:14 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 2 Sep 2008 18:44:14 -0500
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <5b8d13220809021619g3927dc47y6bddb687e19e1ce8@mail.gmail.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
	<5b8d13220809021619g3927dc47y6bddb687e19e1ce8@mail.gmail.com>
Message-ID: <3d375d730809021644k4232061ak327841959e0153a0@mail.gmail.com>

On Tue, Sep 2, 2008 at 18:19, David Cournapeau wrote:
> On Wed, Sep 3, 2008 at 2:11 AM, christophe grimault wrote:
>> Hi,
>>
>> I have an application that is very demanding in memory resources. So I
>> started to look more closely at python + numpy/scipy as far as memory
>> is concerned.
>
> If you are really tight on memory, you will have problems with python
> and most programming languages, which do not let you control memory in
> a fine-grained manner. Now, it depends on what you mean by memory
> demanding: if you have barely enough memory for holding your data, it
> will be extremely difficult to do it in python, and difficult to do in
> any language, including C and other manually managed languages.
>
>>
>> I can't explain the following :
>>
>> I start my python, + import scipy. A 'top' in the console shows that :
>>
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
>> 14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python
>>
>> Now after typing :
>>
>> z = scipy.arange(1000000)
>>
>> I get :
>> 14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python
>>
>> So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the data
>> type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++ at
>> least).
>
> a = scipy.arange(1e6)
> a.itemsize * a.size
>
> gives me 8e6 bytes. arange is float64 by default, and I get a similar
> memory increase (~ 8Mb).

No, the default is int (int32 on 32-bit systems, int64 on most 64-bit
systems) if you give it integer arguments and float64 if you give it
float arguments.

>> It gets even worse with complex float. I tried :
>> z = arange(1000000) + 1j*arange(1000000)
>>
>> Expecting 8 Mb,
>
> Again, this is strange, it should default to float128. Which version
> of numpy/scipy are you using ?

You mean complex128.

One thing to be aware of is that there are temporaries involved.
1j*arange(1000000) will allocate almost 16 MB of memory just by itself
and then allocate another 16 MB for the result of the addition. The
memory may not get returned to the OS when an object gets deallocated
although it will be reused by Python.

FWIW, here is what I get with SVN numpy on OS X:

>>> import numpy
45564 Python  0.0%  0:00.49  1  16  127  5172K  1292K  7960K  28M
>>> a = numpy.arange(1000000)
45564 Python  0.0%  0:00.50  1  16  128  9092K  1292K    12M  32M
>>> z = numpy.arange(1000000) + 1j * numpy.arange(1000000)
45564 Python  0.0%  0:00.60  1  16  129    24M  1292K    27M  47M
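If the temporaries are the concern, one way to keep peak usage lower
(a sketch of the general technique, not the only option) is to allocate
the complex array once and fill its real and imaginary parts in place:

>>> z = numpy.empty(1000000, dtype=numpy.complex128)  # 16 MB, allocated once
>>> z.real = numpy.arange(1000000)  # written in place; only an 8 MB temporary
>>> z.imag = numpy.arange(1000000)

This never materializes the extra 16 MB complex temporaries that
arange(1000000) + 1j*arange(1000000) goes through.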
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp  Tue Sep  2 21:19:22 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 03 Sep 2008 10:19:22 +0900
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <3d375d730809021644k4232061ak327841959e0153a0@mail.gmail.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
	<5b8d13220809021619g3927dc47y6bddb687e19e1ce8@mail.gmail.com>
	<3d375d730809021644k4232061ak327841959e0153a0@mail.gmail.com>
Message-ID: <48BDE61A.2000805@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
>
> No, the default is int (int32 on 32-bit systems, int64 on most 64-bit
> systems) if you give it integer arguments and float64 if you give it
> float arguments.
>

Ah, my bad, I should have thought about the difference between 1e6 and
1000000.

>
>>> It gets even worse with complex float. I tried :
>>> z = arange(1000000) + 1j*arange(1000000)
>>>
>>> Expecting 8 Mb,
>>>
>> Again, this is strange, it should default to float128. Which version
>> of numpy/scipy are you using ?
>>
>
> You mean complex128.
>

Yes; I just wanted to point out that 1j*arange(1000000) is expected to
take ~16Mb, not 8.

> One thing to be aware of is that there are temporaries involved.
> 1j*arange(1000000) will allocate almost 16 MB of memory just by itself
> and then allocate another 16 MB for the result of the addition. The
> memory may not get returned to the OS when an object gets deallocated
> although it will be reused by Python.
>

I think on linux, for those sizes, the memory is given back to the OS
right away, because it is above the mmap threshold and free returns the
memory right away in those cases. Since I see the exact same behavior
as you in top (b = np.arange(1e6) + 1.j * np.arange(1e6) only adding
16 Mb), maybe the Mac OS X malloc does something similar.

cheers,

David

From falted at pytables.org  Wed Sep  3 04:49:23 2008
From: falted at pytables.org (Francesc Alted)
Date: Wed, 3 Sep 2008 10:49:23 +0200
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <1220375482.3005.13.camel@pandora.novagrid.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
Message-ID: <200809031049.23777.falted@pytables.org>

On Tuesday 02 September 2008, christophe grimault wrote:
> Hi,
>
> I have an application that is very demanding in memory resources. So
> I started to look more closely at python + numpy/scipy as far as
> memory is concerned.
>
> I can't explain the following :
>
> I start my python, + import scipy. A 'top' in the console shows that
> :
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
> 14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python
>
> Now after typing :
>
> z = scipy.arange(1000000)
>
> I get :
> 14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python
>
> So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the
> data type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++
> at least).

You should look at the "RES" column instead of the "VIRT" one. The
"RES" column shows the *real* memory that you are consuming. So, in
this case, you have consumed 11MB - 8044KB ~ 3 MB. However, you are
also seeing the effects of number representation truncation here.
Your consumed memory should rather be 8044KB + 3906KB = 11950KB, but
as this is converted to MB (the scale change happens automatically in
'top' when the figures need more than 4 digits to be represented),
11950KB is truncated and 950KB seem to be gone, so this is why the
final figure you are seeing is 11MB.
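Spelled out, with the same numbers as above (just restating the
arithmetic):

8044 KB (before) + 3906 KB (the ~4 MB array) = 11950 KB
11950 KB -> shown by 'top' as "11m", so ~950 KB simply disappear from
the display in the truncation.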
This can be a bit misleading at first sight, but be sure that your
machine (and NumPy) is doing fine and works as expected.

Cheers,

--
Francesc Alted

From christophe.grimault at novagrid.com  Wed Sep  3 05:49:50 2008
From: christophe.grimault at novagrid.com (christophe grimault)
Date: Wed, 03 Sep 2008 11:49:50 +0200
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <200809031049.23777.falted@pytables.org>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
	<200809031049.23777.falted@pytables.org>
Message-ID: <1220435390.2984.1.camel@pandora.novagrid.com>

OK, I understand now !
Thanks very much for the explanation.

Chris

On Wed, 2008-09-03 at 10:49 +0200, Francesc Alted wrote:
> On Tuesday 02 September 2008, christophe grimault wrote:
> > Hi,
> >
> > I have an application that is very demanding in memory resources. So
> > I started to look more closely at python + numpy/scipy as far as
> > memory is concerned.
> >
> > I can't explain the following :
> >
> > I start my python, + import scipy. A 'top' in the console shows that
> > :
> >
> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
> > 14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python
> >
> > Now after typing :
> >
> > z = scipy.arange(1000000)
> >
> > I get :
> > 14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python
> >
> > So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the
> > data type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++
> > at least).
>
> You should look at the "RES" column instead of the "VIRT" one. The
> "RES" column shows the *real* memory that you are consuming. So, in
> this case, you have consumed 11MB - 8044KB ~ 3 MB. However, you are
> also seeing the effects of number representation truncation here.
> Your consumed memory should rather be 8044KB + 3906KB = 11950KB, but
> as this is converted to MB (the scale change happens automatically in
> 'top' when the figures need more than 4 digits to be represented),
> 11950KB is truncated and 950KB seem to be gone, so this is why the
> final figure you are seeing is 11MB. This can be a bit misleading at
> first sight, but be sure that your machine (and NumPy) is doing fine
> and works as expected.
>
> Cheers,
>

From faltet at pytables.org  Wed Sep  3 06:36:44 2008
From: faltet at pytables.org (Francesc Alted)
Date: Wed, 3 Sep 2008 12:36:44 +0200
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <1220435390.2984.1.camel@pandora.novagrid.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<200809031049.23777.falted@pytables.org>
	<1220435390.2984.1.camel@pandora.novagrid.com>
Message-ID: <200809031236.44488.faltet@pytables.org>

On Wednesday 03 September 2008, christophe grimault wrote:
> OK, I understand now !
> Thanks very much for the explanation.

You are welcome. For the sort of situation you are facing, I normally
query the Linux kernel directly so as to get a finer view of the
resources used.
Here is the function that I use (note that it assumes these imports):

import os
import subprocess
from time import time

def show_stats(explain, tref):
    "Show the used memory"
    # Build the command to obtain memory info (only for Linux 2.6.x)
    cmd = "cat /proc/%s/status" % os.getpid()
    sout = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE).stdout
    for line in sout:
        if line.startswith("VmSize:"):
            vmsize = int(line.split()[1])
        elif line.startswith("VmRSS:"):
            vmrss = int(line.split()[1])
        elif line.startswith("VmData:"):
            vmdata = int(line.split()[1])
        elif line.startswith("VmStk:"):
            vmstk = int(line.split()[1])
        elif line.startswith("VmExe:"):
            vmexe = int(line.split()[1])
        elif line.startswith("VmLib:"):
            vmlib = int(line.split()[1])
    sout.close()
    print "Memory usage: ******* %s *******" % explain
    print "VmSize: %7s kB\tVmRSS: %7s kB" % (vmsize, vmrss)
    print "VmData: %7s kB\tVmStk: %7s kB" % (vmdata, vmstk)
    print "VmExe:  %7s kB\tVmLib: %7s kB" % (vmexe, vmlib)
    tnow = time()
    print "WallClock time:", round(tnow - tref, 3)
    return tnow

And here is an example of use:

# declare this at the beginning of your module
profile = True
[clip]
if profile: tref = time()
if profile: show_stats("Entering initial_append", tref)
[your statements here...]
if profile: show_stats("Before creating idx", tref)
[more statements...]
if profile: show_stats("After creating idx", tref)

I hope you get the idea.

--
Francesc Alted

From rocksportrocker at googlemail.com  Wed Sep  3 07:57:09 2008
From: rocksportrocker at googlemail.com (Uwe Schmitt)
Date: Wed, 3 Sep 2008 04:57:09 -0700 (PDT)
Subject: [SciPy-user] Arrays and strange memory usage ...
In-Reply-To: <1220375482.3005.13.camel@pandora.novagrid.com>
References: <48BD2E16.4080307@ntc.zcu.cz>
	<1220375482.3005.13.camel@pandora.novagrid.com>
Message-ID: <4da8b46e-00c6-4748-8c0d-fb035088e72a@m73g2000hsh.googlegroups.com>

On 2 Sep., 19:11, christophe grimault wrote:
> Hi,
>
> I have an application that is very demanding in memory resources. So I
> started to look more closely at python + numpy/scipy as far as memory
> is concerned.
>
> I can't explain the following :
>
> I start my python, + import scipy. A 'top' in the console shows that :
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME COMMAND
> 14791 grimault  20   0 21624 8044 3200 S    0  0.4   0:00.43 python
>
> Now after typing :
>
> z = scipy.arange(1000000)
>
> I get :
> 14791 grimault  20   0 25532  11m 3204 S    0  0.6   0:00.44 python
>
> So the memory increased by ~ 7 Mb. I was expecting 4 Mb since the data
> type is int32, giving 4*1000000 = 4 Mb of memory chunk (in C/C++ at
> least).

I do not see the 7MB. Virtual memory increased by 3.9 MB, and RES
(which is the number you are looking for) differs by 3+X MB; I do not
know how RES is rounded. But this is not a contradiction.

Greetings, Uwe

From emanuele at relativita.com  Fri Sep  5 11:23:24 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Fri, 05 Sep 2008 17:23:24 +0200
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
Message-ID: <48C14EEC.8060300@relativita.com>

Dear all and Dmitrey,

I've just updated to the latest openopt (SVN). When using numpy 1.0.3
and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) openopt says
that the "ralg" (NLP) algorithm is missing! With more recent numpy
and scipy it seems to work reliably. But what happened with respect
to older numpy+scipy?
In that case even running examples/nlp_1.py returns:
----
$ python nlp_1.py
OpenOpt checks user-supplied gradient df (shape: (150,) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
OpenOpt checks user-supplied gradient dc (shape: (2, 150) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
OpenOpt checks user-supplied gradient dh (shape: (2, 150) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
OO Error:incorrect solver is called, maybe the solver "ralg" is not
installed. Maybe setting p.debug=1 could specify the matter more precisely
Traceback (most recent call last):
  File "nlp_1.py", line 110, in 
    r = p.solve('ralg')
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py",
line 185, in solve
    return runProbSolver(self, solvers, *args, **kwargs)
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py",
line 48, in runProbSolver
    p.err('incorrect solver is called, maybe the solver "' + solver_str
+'" is not installed. Maybe setting p.debug=1 could specify the matter
more precisely')
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py",
line 16, in ooerr
    raise OpenOptException(msg)
scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is
called, maybe the solver "ralg" is not installed. Maybe setting
p.debug=1 could specify the matter more precisely
----

This did not happen before, so I guess it is due to a recent
commit. Is it possible to solve the problem?

Kind Regards,

Emanuele

From cohen at slac.stanford.edu  Fri Sep  5 11:36:53 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Fri, 05 Sep 2008 17:36:53 +0200
Subject: [SciPy-user] how to plot the result of histogram2d
Message-ID: <48C15215.3060605@slac.stanford.edu>

hi, I hope someone can quickly point me to some doc.
I can do imshow(histogram2d(x,y)[0]), but then I lose the correct x and
y bin labels.
If I do imshow(histogram2d(x,y)) I get:

ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (115, 0))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/home/cohen/data1/WORK/GRB/data/GRB080904886/ in ()
/data1/GLAST/GLAST_EXT/python/2.5.1/lib/python2.5/site-packages/matplotlib/pyplot.pyc
in imshow(*args, **kwargs)
   1673     hold(h)
   1674     try:
-> 1675         ret = gca().imshow(*args, **kwargs)
   1676         draw_if_interactive()
   1677     except:
/data1/GLAST/GLAST_EXT/python/2.5.1/lib/python2.5/site-packages/matplotlib/axes.pyc
in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax,
origin, extent, shape, filternorm, filterrad, imlim, **kwargs)
   4432         filterrad=filterrad, **kwargs)
   4433
-> 4434         im.set_data(X)
   4435         im.set_alpha(alpha)
   4436         self._set_artist_props(im)
/data1/GLAST/GLAST_EXT/python/2.5.1/lib/python2.5/site-packages/matplotlib/image.pyc
in set_data(self, A, shape)
    232             X = pil_to_array(A)
    233         else:
--> 234             X = ma.asarray(A) # assume array
    235         self._A = X
    236
/data1/GLAST/GLAST_EXT/python/2.5.1/lib/python2.5/site-packages/numpy/core/ma.pyc
in asarray(data, dtype)
   2121             (dtype is None or dtype == data.dtype):
   2122         return data
-> 2123     return array(data, dtype=dtype, copy=0)
   2124
   2125 # Add methods to support ndarray interface
/data1/GLAST/GLAST_EXT/python/2.5.1/lib/python2.5/site-packages/numpy/core/ma.pyc
in __init__(self, data, dtype, copy, order, mask, fill_value)
    565             else:
    566                 need_data_copied = False #because I'll do it now
--> 567                 c = numeric.array(data, dtype=tc, copy=True, order=order)
    568                 tc = c.dtype
    569
ValueError: setting an array element with a sequence.

so something goes awry.
thanks in advance

From jdh2358 at gmail.com  Fri Sep  5 11:49:32 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Fri, 5 Sep 2008 10:49:32 -0500
Subject: [SciPy-user] how to plot the result of histogram2d
In-Reply-To: <48C15215.3060605@slac.stanford.edu>
References: <48C15215.3060605@slac.stanford.edu>
Message-ID: <88e473830809050849n1bbc6beawde24bc19ad3b038c@mail.gmail.com>

On Fri, Sep 5, 2008 at 10:36 AM, Johann Cohen-Tanugi wrote:
> hi, I hope someone can quickly point me to some doc.
> I can do imshow(histogram2d(x,y)[0]), but then I lose the correct x and
> y bin labels.
> If I do imshow(histogram2d(x,y)) I get:
> ERROR: An unexpected error occurred while tokenizing input
> The following traceback may be corrupted or invalid
> The error message is: ('EOF in multi-line statement', (115, 0))

matplotlib questions are best addressed to the matplotlib-users
mailing list at
http://lists.sourceforge.net/mailman/listinfo/matplotlib-users

histogram2d returns H, xedges and yedges. The first argument should be
passed to imshow, and the second two can be used to get the extents:

In [26]: x, y = np.random.randn(2, 100000)

In [27]: H, xedges, yedges = np.histogram2d(x, y, bins=50)

In [28]: extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]

In [29]: imshow(H, extent=extent)
Out[29]: I

From emanuele at relativita.com  Fri Sep  5 11:52:31 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Fri, 05 Sep 2008 17:52:31 +0200
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
In-Reply-To: <48C14EEC.8060300@relativita.com>
References: <48C14EEC.8060300@relativita.com>
Message-ID: <48C155BF.7080008@relativita.com>

Same problem with numpy 1.0.4 + scipy 0.6.0
(shipped with ubuntu 8.04 hardy heron).

E.
Emanuele Olivetti wrote:
> Dear all and Dmitrey,
>
> I've just updated to the latest openopt (SVN). When using numpy 1.0.3
> and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) openopt says
> that the "ralg" (NLP) algorithm is missing! With more recent numpy
> and scipy it seems to work reliably. But what happened with respect
> to older numpy+scipy? In that case even running examples/nlp_1.py
> returns:
> ----
> $ python nlp_1.py
> OpenOpt checks user-supplied gradient df (shape: (150,) )
> according to:
> prob.diffInt = [ 1.00000000e-07]
> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
> derivatives are equal
> ========================
> OpenOpt checks user-supplied gradient dc (shape: (2, 150) )
> according to:
> prob.diffInt = [ 1.00000000e-07]
> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
> derivatives are equal
> ========================
> OpenOpt checks user-supplied gradient dh (shape: (2, 150) )
> according to:
> prob.diffInt = [ 1.00000000e-07]
> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
> derivatives are equal
> ========================
> OO Error:incorrect solver is called, maybe the solver "ralg" is not
> installed. Maybe setting p.debug=1 could specify the matter more precisely
> Traceback (most recent call last):
>   File "nlp_1.py", line 110, in 
>     r = p.solve('ralg')
>   File
> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py",
> line 185, in solve
>     return runProbSolver(self, solvers, *args, **kwargs)
>   File
> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py",
> line 48, in runProbSolver
>     p.err('incorrect solver is called, maybe the solver "' + solver_str
> +'" is not installed. Maybe setting p.debug=1 could specify the matter
> more precisely')
>   File
> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py",
> line 16, in ooerr
>     raise OpenOptException(msg)
> scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is
> called, maybe the solver "ralg" is not installed. Maybe setting
> p.debug=1 could specify the matter more precisely
> ----
>
> This did not happen before, so I guess it is due to a recent
> commit. Is it possible to solve the problem?
>
> Kind Regards,
>
> Emanuele
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From dmitrey.kroshko at scipy.org  Fri Sep  5 13:28:50 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Fri, 05 Sep 2008 20:28:50 +0300
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
In-Reply-To: <48C155BF.7080008@relativita.com>
References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com>
Message-ID: <48C16C52.1040101@scipy.org>

Hi Emanuele,
as mentioned on the openopt install webpage and in README.txt, numpy
version >= 1.1.0 is recommended. Some other oo users have reported bugs
due to older versions.

Could you tell us what is output if you set p.debug = 1? (either
directly or via p = NLP(..., debug=1, ...))

If the problem with numpy versions is critical for users of your soft,
you'd better put a more recent numpy into the Debian software channel.

Regards, D.

Emanuele Olivetti wrote:
> Same problem with numpy 1.0.4 + scipy 0.6.0
> (shipped with ubuntu 8.04 hardy heron).
>
> E.
>
> Emanuele Olivetti wrote:
>
>> Dear all and Dmitrey,
>>
>> I've just updated to the latest openopt (SVN).
When using numpy 1.0.3 >> and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) openopt says >> that "ralg" (NLP) algorithm is missing! With more recent numpy >> and scipy it seems to work reliably. But what happened with respect >> to older numpy+scipy? In that case even running examples/nlp_1.py >> returns: >> ---- >> $ python nlp_1.py >> OpenOpt checks user-supplied gradient df (shape: (150,) ) >> according to: >> prob.diffInt = [ 1.00000000e-07] >> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >> derivatives are equal >> ======================== >> OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) >> according to: >> prob.diffInt = [ 1.00000000e-07] >> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >> derivatives are equal >> ======================== >> OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) >> according to: >> prob.diffInt = [ 1.00000000e-07] >> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >> derivatives are equal >> ======================== >> OO Error:incorrect solver is called, maybe the solver "ralg" is not >> installed. Maybe setting p.debug=1 could specify the matter more precisely >> Traceback (most recent call last): >> File "nlp_1.py", line 110, in >> r = p.solve('ralg') >> File >> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", >> line 185, in solve >> return runProbSolver(self, solvers, *args, **kwargs) >> File >> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", >> line 48, in runProbSolver >> p.err('incorrect solver is called, maybe the solver "' + solver_str >> +'" is not installed. Maybe setting p.debug=1 could specify the matter >> more precisely') >> File >> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py", >> line 16, in ooerr >> raise OpenOptException(msg) >> scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is >> called, maybe the solver "ralg" is not installed. Maybe setting >> p.debug=1 could specify the matter more precisely >> ---- >> >> This did not happen before so I guess it is due to a recent >> commit. It is possible to solve the problem? >> >> Kind Regards, >> >> Emanuele >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From nwagner at iam.uni-stuttgart.de Fri Sep 5 14:12:57 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 05 Sep 2008 20:12:57 +0200 Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN) In-Reply-To: <48C16C52.1040101@scipy.org> References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com> <48C16C52.1040101@scipy.org> Message-ID: On Fri, 05 Sep 2008 20:28:50 +0300 dmitrey wrote: > Hi Emanuele, > as it is mentioned in openopt install webpage and >README.txt numpy v > >= 1.1.0 is recommended. Some other oo users informed of >bugs due to > older versions. > > Could you inform what will be outputed if you set >p.debug = 1? (either > directly or via p = NLP(..., debug=1,...)) > > If the problem with numpy versions is critical for users >of your soft, > you'd better to put more recent numpy into Debian soft >channel. > > Regards, D. > > Emanuele Olivetti wrote: >> Same problem with numpy 1.0.4 + scipy 0.6.0 >> (shipped with ubuntu 8.04 hardy heron). 
>> >> E. >> >> Emanuele Olivetti wrote: >> >>> Dear all and Dmitrey, >>> >>> I've just updated to latest openopt (SVN). When using >>>numpy 1.0.3 >>> and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) >>>openopt says >>> that "ralg" (NLP) algorithm is missing! With more recent >>>numpy >>> and scipy it seems to work reliably. But what happened >>>with respect >>> to older numpy+scipy? In that case even running >>>examples/nlp_1.py >>> returns: >>> ---- >>> $ python nlp_1.py >>> OpenOpt checks user-supplied gradient df (shape: (150,) >>>) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation >>>= 0.01 >>> derivatives are equal >>> ======================== >>> OpenOpt checks user-supplied gradient dc (shape: (2, >>>150) ) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation >>>= 0.01 >>> derivatives are equal >>> ======================== >>> OpenOpt checks user-supplied gradient dh (shape: (2, >>>150) ) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation >>>= 0.01 >>> derivatives are equal >>> ======================== >>> OO Error:incorrect solver is called, maybe the solver >>>"ralg" is not >>> installed. Maybe setting p.debug=1 could specify the >>>matter more precisely >>> Traceback (most recent call last): >>> File "nlp_1.py", line 110, in >>> r = p.solve('ralg') >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", >>> line 185, in solve >>> return runProbSolver(self, solvers, *args, **kwargs) >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", >>> line 48, in runProbSolver >>> p.err('incorrect solver is called, maybe the solver >>>"' + solver_str >>> +'" is not installed. Maybe setting p.debug=1 could >>>specify the matter >>> more precisely') >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py", >>> line 16, in ooerr >>> raise OpenOptException(msg) >>> scikits.openopt.Kernel.oologfcn.OpenOptException: >>>incorrect solver is >>> called, maybe the solver "ralg" is not installed. Maybe >>>setting >>> p.debug=1 could specify the matter more precisely >>> ---- >>> >>> This did not happen before so I guess it is due to a >>>recent >>> commit. It is possible to solve the problem? 
>>> >>> Kind Regards, >>> >>> Emanuele >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Dmitrey, I am using >>> numpy.__version__ '1.3.0.dev5790' Cheers, Nils Here comes the output of nlp_1.py: OpenOpt checks user-supplied gradient df (shape: (150,) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== ----------------------------------------------------- solver: ralg problem: unnamed goal: minimum iter objFunVal log10(maxResidual) 0 8.596e+03 3.91 OpenOpt debug msg: hs: 4.0 OpenOpt debug msg: ls: 2 50 2.800e+03 0.79 100 1.754e+03 0.52 150 9.075e+02 0.31 200 4.455e+02 -0.03 250 3.682e+02 -0.48 300 3.465e+02 -1.15 350 3.409e+02 -1.81 400 1.911e+02 -3.14 450 1.373e+02 -3.07 OO info: debug msg: matrix B restoration in ralg solver 500 1.065e+03 1.20 550 2.224e+03 1.21 600 1.822e+03 0.43 650 2.178e+03 0.45 700 2.576e+03 0.48 750 2.840e+03 0.53 800 3.068e+03 0.59 850 7.958e+03 1.37 900 2.174e+04 1.54 950 3.341e+04 1.37 1000 7.463e+04 2.17 1050 3.692e+05 2.50 1100 1.940e+05 2.16 1150 1.482e+05 1.77 1200 1.719e+05 1.86 1250 2.963e+05 2.52 1300 1.603e+05 2.27 1350 2.299e+05 2.56 1400 3.243e+05 2.63 1450 2.663e+05 2.51 1500 3.064e+05 2.55 1550 4.297e+05 2.74 1600 1.629e+05 2.80 1650 2.379e+05 2.33 1700 2.086e+05 2.28 1750 1.214e+05 2.22 1800 4.913e+04 1.58 1850 3.862e+04 1.65 1900 1.610e+05 2.53 1950 3.576e+04 1.44 OO info: debug msg: matrix B restoration in ralg solver 2000 7.286e+05 2.42 2050 5.268e+05 2.50 2100 1.403e+05 2.01 2150 1.029e+05 1.96 2200 9.997e+04 2.15 2250 7.424e+05 2.92 2300 5.514e+04 1.55 2350 2.518e+05 2.66 2400 5.051e+04 1.78 2450 5.006e+04 2.05 2500 4.279e+04 1.44 2550 4.509e+04 1.62 2600 1.331e+05 2.45 2650 4.061e+04 1.41 2700 5.198e+04 1.90 2750 3.489e+09 4.77 2800 6.938e+04 2.22 2850 2.474e+10 5.20 2900 4.259e+07 3.81 2950 1.044e+05 2.40 3000 6.411e+10 5.40 3050 6.232e+07 3.89 3100 1.830e+06 3.13 3150 4.635e+04 1.45 3200 1.770e+09 4.62 OO info: debug msg: matrix B restoration in ralg solver 3250 1.764e+11 5.57 3300 3.792e+09 4.01 3350 1.554e+10 5.05 3400 7.294e+09 4.81 3450 7.227e+09 4.81 OO info: debug msg: matrix B restoration in ralg solver 3500 1.415e+11 5.56 3550 1.795e+10 6.16 3600 5.205e+09 4.40 3650 1.641e+10 5.04 3700 1.408e+10 5.01 OO info: debug msg: matrix B restoration in ralg solver 3750 1.277e+10 4.96 3800 5.576e+09 3.89 3850 5.008e+09 3.97 3900 4.475e+09 4.04 3950 3.993e+09 4.11 4000 3.558e+09 4.17 4050 3.237e+09 4.24 4100 2.844e+09 4.24 4150 1.077e+10 4.83 4200 9.891e+09 4.82 OO info: debug msg: matrix B restoration in ralg solver 4250 4.720e+09 4.12 4300 3.411e+09 4.02 4350 
1.768e+09 6.43 4400 1.851e+09 4.31 4450 1.448e+09 3.99 4500 1.248e+09 3.84 4550 1.099e+09 3.80 4600 6.053e+09 4.85 4650 8.905e+08 3.86 4700 1.446e+09 4.43 OO info: debug msg: matrix B restoration in ralg solver 4750 6.292e+09 4.14 4800 2.558e+09 3.96 4850 2.898e+09 4.53 4900 1.581e+09 4.21 4950 1.272e+09 4.28 5000 5.860e+09 6.34 5050 4.163e+09 4.56 5100 3.478e+09 4.22 5150 3.238e+09 4.31 5200 2.862e+09 3.92 OO info: debug msg: matrix B restoration in ralg solver 5250 3.259e+09 4.36 5300 2.207e+09 3.91 5350 1.760e+09 3.74 5400 1.560e+09 3.93 5450 1.925e+09 4.41 5500 1.739e+09 4.41 5550 1.640e+09 4.42 5600 8.408e+10 4.93 5650 9.792e+10 4.69 5700 1.303e+11 4.75 5750 2.450e+11 5.44 5800 4.913e+11 5.33 5850 2.536e+11 6.00 5900 3.098e+11 5.70 5950 8.987e+10 5.37 OO info: debug msg: matrix B restoration in ralg solver 6000 1.037e+12 6.00 6050 3.448e+11 8.99 6100 8.307e+12 6.40 6150 1.589e+12 5.87 6200 1.213e+12 5.27 OO info: debug msg: matrix B restoration in ralg solver 6250 1.224e+12 5.45 6300 7.495e+11 5.00 6350 3.998e+11 15.67 6400 3.987e+11 5.00 6450 3.127e+11 5.02 6500 2.419e+11 5.27 6550 3.691e+11 5.13 6600 6.414e+11 5.74 6650 1.329e+12 5.92 6700 3.528e+11 5.18 6750 2.981e+11 4.78 6800 5.060e+11 5.51 6850 4.760e+11 5.09 6900 4.499e+11 5.10 6950 1.069e+12 5.86 7000 6.326e+11 5.26 7050 5.217e+11 5.18 7100 5.029e+11 5.16 7150 8.043e+12 6.43 7200 1.073e+13 6.51 7250 2.658e+12 6.18 7300 2.053e+11 4.81 7350 1.040e+12 5.45 7400 2.030e+12 6.08 7450 2.131e+12 6.11 7500 3.493e+11 5.17 7550 2.420e+11 5.04 7600 2.344e+12 6.17 7650 3.515e+11 5.62 7700 2.135e+11 5.35 7750 1.411e+11 4.78 7800 8.295e+12 6.46 7850 7.406e+12 6.39 7900 9.030e+12 6.45 7950 1.677e+12 6.04 OO info: debug msg: matrix B restoration in ralg solver 8000 3.579e+12 6.23 8050 1.109e+12 10.92 8100 5.111e+12 5.80 8150 7.521e+12 6.08 8200 7.199e+12 5.85 OO info: debug msg: matrix B restoration in ralg solver 8250 7.812e+12 6.05 8300 5.366e+12 8.57 8350 5.689e+12 5.97 8400 5.140e+12 5.97 8450 3.909e+12 5.38 OO info: debug msg: matrix B restoration in ralg solver 8500 5.130e+12 6.12 8550 3.753e+12 6.36 8600 2.963e+12 5.43 8650 2.528e+12 5.44 8700 2.134e+12 5.46 OO info: debug msg: matrix B restoration in ralg solver 8750 1.760e+12 5.46 8800 1.467e+12 5.27 8850 2.764e+12 12.53 8900 2.152e+12 5.63 8950 2.532e+12 5.86 OO info: debug msg: matrix B restoration in ralg solver 9000 1.884e+12 5.67 9050 4.073e+12 12.35 9100 1.709e+12 5.38 9150 1.398e+12 5.57 9200 1.248e+12 5.60 OO info: debug msg: matrix B restoration in ralg solver 9250 1.044e+12 5.14 9300 7.844e+11 5.21 9350 6.360e+11 5.47 9400 6.253e+11 5.67 9450 3.557e+11 4.91 9500 3.400e+11 5.29 9550 3.160e+11 5.30 9600 2.601e+11 4.94 9650 2.199e+11 4.85 9700 5.335e+12 13.48 OO info: debug msg: matrix B restoration in ralg solver 9750 5.933e+12 6.24 9800 4.174e+12 8.76 9850 3.803e+12 5.52 9900 2.854e+12 5.50 9950 2.014e+12 5.47 10000 3.285e+12 6.13 10001 3.285e+12 6.13 istop: -7 (Max Iter has been reached) Solver: Time Elapsed = 56.05 CPU Time Elapsed = 31.82 Plotting: Time Elapsed = 62.35 CPU Time Elapsed = 32.57 NO FEASIBLE SOLUTION is obtained (max residual = 1.4e+06, objFunc = 3.2852899e+12) From dmitrey.kroshko at scipy.org Fri Sep 5 15:40:12 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 05 Sep 2008 22:40:12 +0300 Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN) In-Reply-To: References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com> <48C16C52.1040101@scipy.org> Message-ID: <48C18B1C.4030801@scipy.org> Hi Nils, after some my modifications 
the file nlp_1.py became hard to solve for any solver (I mean, any
connected to OO) - you can try solving it with algencan or ipopt and
see the results (output).

So I have committed some changes to nlp_1.py. As for other tests (like
nlp_bench_1, nlp_3) they work ok (nlp_2 for ralg requires p.maxIter =
2000).

Regards, D.

Nils Wagner wrote:
> On Fri, 05 Sep 2008 20:28:50 +0300
>  dmitrey wrote:
>> Hi Emanuele,
>> as mentioned on the openopt install webpage and in README.txt, numpy
>> version >= 1.1.0 is recommended. Some other oo users have reported
>> bugs due to older versions.
>>
>> Could you tell us what is output if you set p.debug = 1? (either
>> directly or via p = NLP(..., debug=1, ...))
>>
>> If the problem with numpy versions is critical for users of your
>> soft, you'd better put a more recent numpy into the Debian software
>> channel.
>>
>> Regards, D.
>>
>> Emanuele Olivetti wrote:
>>
>>> Same problem with numpy 1.0.4 + scipy 0.6.0
>>> (shipped with ubuntu 8.04 hardy heron).
>>>
>>> E.
>>>
>>> Emanuele Olivetti wrote:
>>>
>>>
>>>> Dear all and Dmitrey,
>>>>
>>>> I've just updated to the latest openopt (SVN). When using numpy
>>>> 1.0.3 and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon)
>>>> openopt says that the "ralg" (NLP) algorithm is missing! With more
>>>> recent numpy and scipy it seems to work reliably. But what happened
>>>> with respect to older numpy+scipy? In that case even running
>>>> examples/nlp_1.py returns:
>>>> ----
>>>> $ python nlp_1.py
>>>> OpenOpt checks user-supplied gradient df (shape: (150,) )
>>>> according to:
>>>> prob.diffInt = [ 1.00000000e-07]
>>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
>>>> derivatives are equal
>>>> ========================
>>>> OpenOpt checks user-supplied gradient dc (shape: (2, 150) )
>>>> according to:
>>>> prob.diffInt = [ 1.00000000e-07]
>>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
>>>> derivatives are equal
>>>> ========================
>>>> OpenOpt checks user-supplied gradient dh (shape: (2, 150) )
>>>> according to:
>>>> prob.diffInt = [ 1.00000000e-07]
>>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01
>>>> derivatives are equal
>>>> ========================
>>>> OO Error:incorrect solver is called, maybe the solver "ralg" is not
>>>> installed. Maybe setting p.debug=1 could specify the matter more
>>>> precisely
>>>> Traceback (most recent call last):
>>>>   File "nlp_1.py", line 110, in 
>>>>     r = p.solve('ralg')
>>>>   File
>>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py",
>>>> line 185, in solve
>>>>     return runProbSolver(self, solvers, *args, **kwargs)
>>>>   File
>>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py",
>>>> line 48, in runProbSolver
>>>>     p.err('incorrect solver is called, maybe the solver "' + solver_str
>>>> +'" is not installed. Maybe setting p.debug=1 could specify the matter
>>>> more precisely')
>>>>   File
>>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py",
>>>> line 16, in ooerr
>>>>     raise OpenOptException(msg)
>>>> scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is
>>>> called, maybe the solver "ralg" is not installed. Maybe setting
>>>> p.debug=1 could specify the matter more precisely
>>>> ----
>>>>
>>>> This did not happen before, so I guess it is due to a recent
>>>> commit. Is it possible to solve the problem?
>>>> >>>> Kind Regards, >>>> >>>> Emanuele >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > Dmitrey, > > I am using > >>>> numpy.__version__ >>>> > '1.3.0.dev5790' > > Cheers, > Nils > > Here comes the output of nlp_1.py: > > OpenOpt checks user-supplied gradient df (shape: (150,) ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = > 0.01 > derivatives are equal > ======================== > OpenOpt checks user-supplied gradient dc (shape: (2, 150) > ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = > 0.01 > derivatives are equal > ======================== > OpenOpt checks user-supplied gradient dh (shape: (2, 150) > ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = > 0.01 > derivatives are equal > ======================== > ----------------------------------------------------- > solver: ralg problem: unnamed goal: minimum > iter objFunVal log10(maxResidual) > 0 8.596e+03 3.91 > OpenOpt debug msg: hs: 4.0 > OpenOpt debug msg: ls: 2 > 50 2.800e+03 0.79 > 100 1.754e+03 0.52 > 150 9.075e+02 0.31 > 200 4.455e+02 -0.03 > 250 3.682e+02 -0.48 > 300 3.465e+02 -1.15 > 350 3.409e+02 -1.81 > 400 1.911e+02 -3.14 > 450 1.373e+02 -3.07 > OO info: debug msg: matrix B restoration in ralg solver > 500 1.065e+03 1.20 > 550 2.224e+03 1.21 > 600 1.822e+03 0.43 > 650 2.178e+03 0.45 > 700 2.576e+03 0.48 > 750 2.840e+03 0.53 > 800 3.068e+03 0.59 > 850 7.958e+03 1.37 > 900 2.174e+04 1.54 > 950 3.341e+04 1.37 > 1000 7.463e+04 2.17 > 1050 3.692e+05 2.50 > 1100 1.940e+05 2.16 > 1150 1.482e+05 1.77 > 1200 1.719e+05 1.86 > 1250 2.963e+05 2.52 > 1300 1.603e+05 2.27 > 1350 2.299e+05 2.56 > 1400 3.243e+05 2.63 > 1450 2.663e+05 2.51 > 1500 3.064e+05 2.55 > 1550 4.297e+05 2.74 > 1600 1.629e+05 2.80 > 1650 2.379e+05 2.33 > 1700 2.086e+05 2.28 > 1750 1.214e+05 2.22 > 1800 4.913e+04 1.58 > 1850 3.862e+04 1.65 > 1900 1.610e+05 2.53 > 1950 3.576e+04 1.44 > OO info: debug msg: matrix B restoration in ralg solver > 2000 7.286e+05 2.42 > 2050 5.268e+05 2.50 > 2100 1.403e+05 2.01 > 2150 1.029e+05 1.96 > 2200 9.997e+04 2.15 > 2250 7.424e+05 2.92 > 2300 5.514e+04 1.55 > 2350 2.518e+05 2.66 > 2400 5.051e+04 1.78 > 2450 5.006e+04 2.05 > 2500 4.279e+04 1.44 > 2550 4.509e+04 1.62 > 2600 1.331e+05 2.45 > 2650 4.061e+04 1.41 > 2700 5.198e+04 1.90 > 2750 3.489e+09 4.77 > 2800 6.938e+04 2.22 > 2850 2.474e+10 5.20 > 2900 4.259e+07 3.81 > 2950 1.044e+05 2.40 > 3000 6.411e+10 5.40 > 3050 6.232e+07 3.89 > 3100 1.830e+06 3.13 > 3150 4.635e+04 1.45 > 3200 1.770e+09 4.62 > OO info: debug msg: matrix B restoration in ralg solver > 3250 1.764e+11 5.57 > 3300 3.792e+09 4.01 > 3350 1.554e+10 5.05 > 3400 7.294e+09 4.81 > 3450 7.227e+09 4.81 > OO info: debug msg: matrix B restoration in ralg solver > 3500 1.415e+11 5.56 > 3550 1.795e+10 6.16 > 3600 5.205e+09 4.40 > 3650 1.641e+10 5.04 > 3700 1.408e+10 5.01 > OO info: debug msg: matrix B restoration in ralg solver > 3750 1.277e+10 4.96 > 
3800 5.576e+09 3.89 > 3850 5.008e+09 3.97 > 3900 4.475e+09 4.04 > 3950 3.993e+09 4.11 > 4000 3.558e+09 4.17 > 4050 3.237e+09 4.24 > 4100 2.844e+09 4.24 > 4150 1.077e+10 4.83 > 4200 9.891e+09 4.82 > OO info: debug msg: matrix B restoration in ralg solver > 4250 4.720e+09 4.12 > 4300 3.411e+09 4.02 > 4350 1.768e+09 6.43 > 4400 1.851e+09 4.31 > 4450 1.448e+09 3.99 > 4500 1.248e+09 3.84 > 4550 1.099e+09 3.80 > 4600 6.053e+09 4.85 > 4650 8.905e+08 3.86 > 4700 1.446e+09 4.43 > OO info: debug msg: matrix B restoration in ralg solver > 4750 6.292e+09 4.14 > 4800 2.558e+09 3.96 > 4850 2.898e+09 4.53 > 4900 1.581e+09 4.21 > 4950 1.272e+09 4.28 > 5000 5.860e+09 6.34 > 5050 4.163e+09 4.56 > 5100 3.478e+09 4.22 > 5150 3.238e+09 4.31 > 5200 2.862e+09 3.92 > OO info: debug msg: matrix B restoration in ralg solver > 5250 3.259e+09 4.36 > 5300 2.207e+09 3.91 > 5350 1.760e+09 3.74 > 5400 1.560e+09 3.93 > 5450 1.925e+09 4.41 > 5500 1.739e+09 4.41 > 5550 1.640e+09 4.42 > 5600 8.408e+10 4.93 > 5650 9.792e+10 4.69 > 5700 1.303e+11 4.75 > 5750 2.450e+11 5.44 > 5800 4.913e+11 5.33 > 5850 2.536e+11 6.00 > 5900 3.098e+11 5.70 > 5950 8.987e+10 5.37 > OO info: debug msg: matrix B restoration in ralg solver > 6000 1.037e+12 6.00 > 6050 3.448e+11 8.99 > 6100 8.307e+12 6.40 > 6150 1.589e+12 5.87 > 6200 1.213e+12 5.27 > OO info: debug msg: matrix B restoration in ralg solver > 6250 1.224e+12 5.45 > 6300 7.495e+11 5.00 > 6350 3.998e+11 15.67 > 6400 3.987e+11 5.00 > 6450 3.127e+11 5.02 > 6500 2.419e+11 5.27 > 6550 3.691e+11 5.13 > 6600 6.414e+11 5.74 > 6650 1.329e+12 5.92 > 6700 3.528e+11 5.18 > 6750 2.981e+11 4.78 > 6800 5.060e+11 5.51 > 6850 4.760e+11 5.09 > 6900 4.499e+11 5.10 > 6950 1.069e+12 5.86 > 7000 6.326e+11 5.26 > 7050 5.217e+11 5.18 > 7100 5.029e+11 5.16 > 7150 8.043e+12 6.43 > 7200 1.073e+13 6.51 > 7250 2.658e+12 6.18 > 7300 2.053e+11 4.81 > 7350 1.040e+12 5.45 > 7400 2.030e+12 6.08 > 7450 2.131e+12 6.11 > 7500 3.493e+11 5.17 > 7550 2.420e+11 5.04 > 7600 2.344e+12 6.17 > 7650 3.515e+11 5.62 > 7700 2.135e+11 5.35 > 7750 1.411e+11 4.78 > 7800 8.295e+12 6.46 > 7850 7.406e+12 6.39 > 7900 9.030e+12 6.45 > 7950 1.677e+12 6.04 > OO info: debug msg: matrix B restoration in ralg solver > 8000 3.579e+12 6.23 > 8050 1.109e+12 10.92 > 8100 5.111e+12 5.80 > 8150 7.521e+12 6.08 > 8200 7.199e+12 5.85 > OO info: debug msg: matrix B restoration in ralg solver > 8250 7.812e+12 6.05 > 8300 5.366e+12 8.57 > 8350 5.689e+12 5.97 > 8400 5.140e+12 5.97 > 8450 3.909e+12 5.38 > OO info: debug msg: matrix B restoration in ralg solver > 8500 5.130e+12 6.12 > 8550 3.753e+12 6.36 > 8600 2.963e+12 5.43 > 8650 2.528e+12 5.44 > 8700 2.134e+12 5.46 > OO info: debug msg: matrix B restoration in ralg solver > 8750 1.760e+12 5.46 > 8800 1.467e+12 5.27 > 8850 2.764e+12 12.53 > 8900 2.152e+12 5.63 > 8950 2.532e+12 5.86 > OO info: debug msg: matrix B restoration in ralg solver > 9000 1.884e+12 5.67 > 9050 4.073e+12 12.35 > 9100 1.709e+12 5.38 > 9150 1.398e+12 5.57 > 9200 1.248e+12 5.60 > OO info: debug msg: matrix B restoration in ralg solver > 9250 1.044e+12 5.14 > 9300 7.844e+11 5.21 > 9350 6.360e+11 5.47 > 9400 6.253e+11 5.67 > 9450 3.557e+11 4.91 > 9500 3.400e+11 5.29 > 9550 3.160e+11 5.30 > 9600 2.601e+11 4.94 > 9650 2.199e+11 4.85 > 9700 5.335e+12 13.48 > OO info: debug msg: matrix B restoration in ralg solver > 9750 5.933e+12 6.24 > 9800 4.174e+12 8.76 > 9850 3.803e+12 5.52 > 9900 2.854e+12 5.50 > 9950 2.014e+12 5.47 > 10000 3.285e+12 6.13 > 10001 3.285e+12 6.13 > istop: -7 (Max Iter has been reached) > Solver: Time Elapsed = 56.05 CPU Time 
Elapsed = 31.82
> Plotting: Time Elapsed = 62.35 CPU Time Elapsed = 32.57
> NO FEASIBLE SOLUTION is obtained (max residual = 1.4e+06,
> objFunc = 3.2852899e+12)

From nwagner at iam.uni-stuttgart.de  Fri Sep  5 15:53:05 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 05 Sep 2008 21:53:05 +0200
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
In-Reply-To: <48C18B1C.4030801@scipy.org>
References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com>
	<48C16C52.1040101@scipy.org> <48C18B1C.4030801@scipy.org>
Message-ID: 

On Fri, 05 Sep 2008 22:40:12 +0300
  dmitrey wrote:
> Hi Nils,
> after some my modifications the file nlp_1.py become hard to be solved
> by any solver (I mean connected to OO) - you can try solving it by
> algencan or ipopt and see results (output).
>
> So I have committed some changes to nlp_1.py. As for other tests (like
> nlp_bench_1, nlp_3) they work ok (nlp_2 for ralg requires p.maxIter = 2000).
>
> Regards, D.
>
> [...]

Now it works for me

------------------------------------------------------------------------
r1270 | dmitrey.kroshko | 2008-09-05 21:33:49 +0200 (Fri, 05 Sep 2008) | 1 line

some changes in nlp_1.py

OpenOpt checks user-supplied gradient df (shape: (150,) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
OpenOpt checks user-supplied gradient dc (shape: (2, 150) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
OpenOpt checks user-supplied gradient dh (shape: (2, 150) )
according to:
prob.diffInt = [ 1.00000000e-07]
|1 - info_user/info_numerical| <= prob.maxViolation = 0.01
derivatives are equal
========================
-----------------------------------------------------
solver: ralg   problem: unnamed   goal: minimum
 iter    objFunVal   log10(maxResidual)
    0  8.596e+03            5.73
OpenOpt debug msg: hs: 16.0
OpenOpt debug msg: ls: 4
   50  5.237e+03            1.08
  100  7.347e+03            1.04
  150  2.248e+04            1.24
  200  7.588e+03            1.24
  250  3.281e+03            0.74
  300  2.780e+03            0.59
  350  2.328e+03            0.52
  400  1.748e+03            0.39
  450  1.433e+03            0.27
  500  9.347e+02            0.10
  550  5.696e+02           -0.17
  600  4.870e+02           -0.46
  650  3.879e+02           -0.84
  700  3.319e+02           -1.35
  750  1.433e+02           -1.42
  800  1.444e+02           -1.46
  850  1.380e+02           -3.10
  900  1.337e+02           -3.03
  950  1.294e+02           -3.10
OO info: debug msg: matrix B restoration in ralg solver
 1000  1.282e+02           -3.10
 1050  1.281e+02           -3.10
 1100  1.281e+02           -2.91
 1135  1.281e+02           -3.10
/usr/local/lib64/python2.5/site-packages/matplotlib/axes.py:4827: DeprecationWarning: replace "faceted=False" with "edgecolors='none'" DeprecationWarning) #2008/04/18 istop: 3 (|| X[k] - X[k-1] || < xtol) Solver: Time Elapsed = 7.71 CPU Time Elapsed = 5.33 Plotting: Time Elapsed = 13.29 CPU Time Elapsed = 7.63 objFunValue: 128.08949 (feasible, max constraint = 0.0008) Cheers, Nils From emanuele at relativita.com Fri Sep 5 18:51:01 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 06 Sep 2008 00:51:01 +0200 Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN) In-Reply-To: <48C16C52.1040101@scipy.org> References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com> <48C16C52.1040101@scipy.org> Message-ID: <48C1B7D5.9060006@relativita.com> Thanks for help. Unfortunately after updating from SVN again I'm not able to reproduce the same error, but instead I get this, about failing to import "cond" from numpy.linalg. It seems that "cond" is not available until the very latest numpy (and even autogenerated NumPy API on scipy.org have no "cond"): ---- $ python openopt/scikits/openopt/examples/nlp_1.py OpenOpt checks user-supplied gradient df (shape: (150,) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) according to: prob.diffInt = [ 1.00000000e-07] |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 derivatives are equal ======================== Traceback (most recent call last): File "openopt/scikits/openopt/examples/nlp_1.py", line 108, in r = p.solve('ralg', debug = 1) File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 185, in solve return runProbSolver(self, solvers, *args, **kwargs) File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 43, in runProbSolver solverClass = getattr(my_import(__solverPaths__[solver_str]), solver_str) File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 268, in my_import mod = __import__(name) File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg_oo.py", line 2, in from numpy.linalg import norm, cond ImportError: cannot import name cond ---- Any suggestion on how to solve this? Sorry for the mess. Emanuele Note: this error pops out using numpy+scipy shipped with ubuntu. When using recent SVN version of numpy+scipy everything works well. dmitrey wrote: > Hi Emanuele, > as it is mentioned in openopt install webpage and README.txt numpy v > >= 1.1.0 is recommended. Some other oo users informed of bugs due to > older versions. > > Could you inform what will be outputed if you set p.debug = 1? (either > directly or via p = NLP(..., debug=1,...)) > > If the problem with numpy versions is critical for users of your soft, > you'd better to put more recent numpy into Debian soft channel. > > Regards, D. > > Emanuele Olivetti wrote: > >> Same problem with numpy 1.0.4 + scipy 0.6.0 >> (shipped with ubuntu 8.04 hardy heron). >> >> E. >> >> Emanuele Olivetti wrote: >> >> >>> Dear all and Dmitrey, >>> >>> I've just updated to latest openopt (SVN). 
When using numpy 1.0.3 >>> and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) openopt says >>> that "ralg" (NLP) algorithm is missing! With more recent numpy >>> and scipy it seems to work reliably. But what happened with respect >>> to older numpy+scipy? In that case even running examples/nlp_1.py >>> returns: >>> ---- >>> $ python nlp_1.py >>> OpenOpt checks user-supplied gradient df (shape: (150,) ) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>> derivatives are equal >>> ======================== >>> OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>> derivatives are equal >>> ======================== >>> OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) >>> according to: >>> prob.diffInt = [ 1.00000000e-07] >>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>> derivatives are equal >>> ======================== >>> OO Error:incorrect solver is called, maybe the solver "ralg" is not >>> installed. Maybe setting p.debug=1 could specify the matter more precisely >>> Traceback (most recent call last): >>> File "nlp_1.py", line 110, in >>> r = p.solve('ralg') >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", >>> line 185, in solve >>> return runProbSolver(self, solvers, *args, **kwargs) >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", >>> line 48, in runProbSolver >>> p.err('incorrect solver is called, maybe the solver "' + solver_str >>> +'" is not installed. Maybe setting p.debug=1 could specify the matter >>> more precisely') >>> File >>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py", >>> line 16, in ooerr >>> raise OpenOptException(msg) >>> scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is >>> called, maybe the solver "ralg" is not installed. Maybe setting >>> p.debug=1 could specify the matter more precisely >>> ---- >>> >>> This did not happen before so I guess it is due to a recent >>> commit. It is possible to solve the problem? >>> >>> Kind Regards, >>> >>> Emanuele >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From emanuele at relativita.com Fri Sep 5 19:00:17 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 06 Sep 2008 01:00:17 +0200 Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN) In-Reply-To: <48C1B7D5.9060006@relativita.com> References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com> <48C16C52.1040101@scipy.org> <48C1B7D5.9060006@relativita.com> Message-ID: <48C1BA01.4000501@relativita.com> OK. Running another custom example I got again the initial "ralg missing" error message. Increasing verbosity as you suggested (problem.debug = 1) shows the same error message shown before, i.e. "cond" is not available in numpy.linalg, so import fails: ---- ...... 
in solve result = self.problem.solve(self.optimization_algorithm) # perform optimization! File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 185, in solve return runProbSolver(self, solvers, *args, **kwargs) File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 43, in runProbSolver solverClass = getattr(my_import(__solverPaths__[solver_str]), solver_str) File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 268, in my_import mod = __import__(name) File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg_oo.py", line 2, in from numpy.linalg import norm, cond ImportError: cannot import name cond ---- Hope this helps, Emanuele Emanuele Olivetti wrote: > Thanks for help. > > Unfortunately after updating from SVN again I'm not able to reproduce > the same error, but instead I get this, about failing to import "cond" from > numpy.linalg. It seems that "cond" is not available until the very latest > numpy (and even autogenerated NumPy API on scipy.org have no "cond"): > ---- > $ python openopt/scikits/openopt/examples/nlp_1.py > OpenOpt checks user-supplied gradient df (shape: (150,) ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 > derivatives are equal > ======================== > OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 > derivatives are equal > ======================== > OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) > according to: > prob.diffInt = [ 1.00000000e-07] > |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 > derivatives are equal > ======================== > Traceback (most recent call last): > File "openopt/scikits/openopt/examples/nlp_1.py", line 108, in > r = p.solve('ralg', debug = 1) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 185, in solve > return runProbSolver(self, solvers, *args, **kwargs) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", > line 43, in runProbSolver > solverClass = getattr(my_import(__solverPaths__[solver_str]), > solver_str) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", > line 268, in my_import > mod = __import__(name) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg_oo.py", > line 2, in > from numpy.linalg import norm, cond > ImportError: cannot import name cond > ---- > > Any suggestion on how to solve this? > > Sorry for the mess. > > Emanuele > > Note: this error pops out using numpy+scipy shipped with > ubuntu. When using recent SVN version of numpy+scipy > everything works well. > > dmitrey wrote: > >> Hi Emanuele, >> as it is mentioned in openopt install webpage and README.txt numpy v >> >= 1.1.0 is recommended. Some other oo users informed of bugs due to >> older versions. >> >> Could you inform what will be outputed if you set p.debug = 1? (either >> directly or via p = NLP(..., debug=1,...)) >> >> If the problem with numpy versions is critical for users of your soft, >> you'd better to put more recent numpy into Debian soft channel. >> >> Regards, D. >> >> Emanuele Olivetti wrote: >> >> >>> Same problem with numpy 1.0.4 + scipy 0.6.0 >>> (shipped with ubuntu 8.04 hardy heron). >>> >>> E. 
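(In the meantime, the missing name can be papered over on the older numpy
releases discussed here; this is an illustrative monkey-patch, not the fix
that was later committed, and it assumes only that numpy.linalg.svd is
available:

    import numpy.linalg
    if not hasattr(numpy.linalg, 'cond'):
        from numpy.linalg import svd
        def cond(x):
            # 2-norm condition number: largest over smallest singular value
            s = svd(x, compute_uv=0)
            return s[0] / s[-1]
        numpy.linalg.cond = cond

Run before importing openopt, this makes the failing
"from numpy.linalg import norm, cond" in ralg_oo.py succeed.)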
>>> >>> Emanuele Olivetti wrote: >>> >>> >>> >>>> Dear all and Dmitrey, >>>> >>>> I've just updated to latest openopt (SVN). When using numpy 1.0.3 >>>> and scipy 0.5.2 (standard in Ubuntu 7.10 gutsy gibbon) openopt says >>>> that "ralg" (NLP) algorithm is missing! With more recent numpy >>>> and scipy it seems to work reliably. But what happened with respect >>>> to older numpy+scipy? In that case even running examples/nlp_1.py >>>> returns: >>>> ---- >>>> $ python nlp_1.py >>>> OpenOpt checks user-supplied gradient df (shape: (150,) ) >>>> according to: >>>> prob.diffInt = [ 1.00000000e-07] >>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>>> derivatives are equal >>>> ======================== >>>> OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) >>>> according to: >>>> prob.diffInt = [ 1.00000000e-07] >>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>>> derivatives are equal >>>> ======================== >>>> OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) >>>> according to: >>>> prob.diffInt = [ 1.00000000e-07] >>>> |1 - info_user/info_numerical| <= prob.maxViolation = 0.01 >>>> derivatives are equal >>>> ======================== >>>> OO Error:incorrect solver is called, maybe the solver "ralg" is not >>>> installed. Maybe setting p.debug=1 could specify the matter more precisely >>>> Traceback (most recent call last): >>>> File "nlp_1.py", line 110, in >>>> r = p.solve('ralg') >>>> File >>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", >>>> line 185, in solve >>>> return runProbSolver(self, solvers, *args, **kwargs) >>>> File >>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", >>>> line 48, in runProbSolver >>>> p.err('incorrect solver is called, maybe the solver "' + solver_str >>>> +'" is not installed. Maybe setting p.debug=1 could specify the matter >>>> more precisely') >>>> File >>>> "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/oologfcn.py", >>>> line 16, in ooerr >>>> raise OpenOptException(msg) >>>> scikits.openopt.Kernel.oologfcn.OpenOptException: incorrect solver is >>>> called, maybe the solver "ralg" is not installed. Maybe setting >>>> p.debug=1 could specify the matter more precisely >>>> ---- >>>> >>>> This did not happen before so I guess it is due to a recent >>>> commit. It is possible to solve the problem? 
>>>> >>>> Kind Regards, >>>> >>>> Emanuele >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>> >>>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From dmitrey.kroshko at scipy.org Sat Sep 6 03:16:41 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 06 Sep 2008 10:16:41 +0300 Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN) In-Reply-To: <48C1BA01.4000501@relativita.com> References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com> <48C16C52.1040101@scipy.org> <48C1B7D5.9060006@relativita.com> <48C1BA01.4000501@relativita.com> Message-ID: <48C22E59.8030208@scipy.org> Hi Emanuele, update svn and try now, HTH, D. P.S. IIRC you are deal with box-bounded problems, let me remember you once again, that ralg (especially current implementation) handles it very badly (especially when lots of active constraints in optim point) in comparison to scipy_lbfgsb, scipy_tnc or algencan (all are available from oo, requires scipy or algencan installed), these ones have very appropriate specialized box-bound solvers. Regards, D Emanuele Olivetti wrote: > OK. Running another custom example I got again the initial "ralg missing" > error message. Increasing verbosity as you suggested (problem.debug = 1) > shows the same error message shown before, i.e. "cond" is not available > in numpy.linalg, so import fails: > ---- > ...... > in solve > result = self.problem.solve(self.optimization_algorithm) # perform > optimization! > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 185, in solve > return runProbSolver(self, solvers, *args, **kwargs) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", > line 43, in runProbSolver > solverClass = getattr(my_import(__solverPaths__[solver_str]), > solver_str) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", > line 268, in my_import > mod = __import__(name) > File > "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg_oo.py", > line 2, in > from numpy.linalg import norm, cond > ImportError: cannot import name cond > ---- > > Hope this helps, > > Emanuele > > Emanuele Olivetti wrote: > >> Thanks for help. >> >> Unfortunately after updating from SVN again I'm not able to reproduce >> the same error, but instead I get this, about failing to import "cond" from >> numpy.linalg. 
It seems that "cond" is not available until the very latest
>> numpy (and even autogenerated NumPy API on scipy.org have no "cond"):
>> ----
>> [...]
>> from numpy.linalg import norm, cond
>> ImportError: cannot import name cond
>> ----
>>
>> [...]
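(A hedged illustration of the advice above, with made-up problem data; the
attribute names, solver strings and the contol-lifted lower bound all come
from this thread, everything else is assumption:

    import numpy as N
    from scikits.openopt import NLP

    n, contol = 150, 1e-6
    f  = lambda x: ((x - 1.0) ** 2).sum()
    df = lambda x: 2.0 * (x - 1.0)

    p = NLP(f, N.zeros(n) + 0.5, df=df)
    p.lb = N.zeros(n) + contol     # lower bound lifted by contol
    p.ub = N.ones(n) * 10.0
    r = p.solve('scipy_lbfgsb')    # dedicated box-bound solver instead of ralg
    print r.ff

scipy_tnc and algencan slot in the same way, where the installed scipy or
algencan is recent enough.)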
From emanuele at relativita.com  Sun Sep  7 03:47:40 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Sun, 07 Sep 2008 09:47:40 +0200
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
In-Reply-To: <48C22E59.8030208@scipy.org>
References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com>
	<48C16C52.1040101@scipy.org> <48C1B7D5.9060006@relativita.com>
	<48C1BA01.4000501@relativita.com> <48C22E59.8030208@scipy.org>
Message-ID: <48C3871C.3000109@relativita.com>

Thanks Dmitrey,

I've just tried to update, install and test the latest OpenOpt (SVN), and it
seems to work well with numpy 1.0.3 + scipy 0.5.2 shipped with ubuntu 7.10
(or rather, nlp_1.py now runs some iterations and then the Python process
gives "segmentation fault", but that is a known Python/NumPy issue as far as
I remember). In any case it no longer stops immediately. Thanks for the fix.

About your suggestion to use solvers other than ralg: I have had some good
preliminary experience with scipy_lbfgsb; scipy_tnc also seems to work well
but requires a scipy that is too recent. This is what happens with scipy
0.5.2 bundled with ubuntu 7.10 when calling scipy_tnc:
---
    result = self.problem.solve(self.optimization_algorithm) # perform
optimization!
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py",
line 185, in solve
    return runProbSolver(self, solvers, *args, **kwargs)
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py",
line 167, in runProbSolver
    solver(p)
  File
"/usr/lib/python2.5/site-packages/scikits/openopt/solvers/scipy_optim/scipy_tnc_oo.py",
line 38, in __solver__
    xf, nfeval, rc = fmin_tnc(p.f, x0 = p.x0, fprime=p.df, args=(),
approx_grad=0, bounds=bounds, messages=messages, maxfun=maxfun,
ftol=p.ftol, xtol=p.xtol, pgtol=p.gradtol)
TypeError: fmin_tnc() got an unexpected keyword argument 'xtol'
----

About algencan, I haven't tried it yet but I'm really interested; it is just
my lack of time and the external dependencies that slow down my attempt.
Anyway, I've added an optional log scale to my code, which is similar to
setting the lower bound I need:

problem.lb = N.zeros(problem.n) + contol

Best,

Emanuele

dmitrey wrote:
> Hi Emanuele,
> update svn and try now,
> HTH, D.
>
> P.S.
> IIRC you are deal with box-bounded problems, let me remember you
> once again, that ralg (especially current implementation) handles it
> very badly (especially when lots of active constraints in optim point)
> in comparison to scipy_lbfgsb, scipy_tnc or algencan (all are available
> from oo, requires scipy or algencan installed), these ones have very
> appropriate specialized box-bound solvers.
> Regards, D
>
> Emanuele Olivetti wrote:
>> [...]
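(One way to survive the older fmin_tnc signature seen in the TypeError above
is to filter keyword arguments against what the installed function actually
accepts; a hypothetical wrapper, not OpenOpt code:

    import inspect
    from scipy.optimize import fmin_tnc

    def fmin_tnc_compat(func, x0, **kwargs):
        # silently drop keywords, e.g. xtol on scipy 0.5.2, that the
        # installed fmin_tnc does not recognise
        accepted = inspect.getargspec(fmin_tnc)[0]
        kwargs = dict((k, v) for k, v in kwargs.items() if k in accepted)
        return fmin_tnc(func, x0, **kwargs)

Dropping a tolerance silently changes the stopping behaviour, so this is a
stop-gap rather than a fix.)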
From dmitrey.kroshko at scipy.org  Sun Sep  7 04:10:04 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sun, 07 Sep 2008 11:10:04 +0300
Subject: [SciPy-user] [OpenOpt] problem with ralg (latest SVN)
In-Reply-To: <48C3871C.3000109@relativita.com>
References: <48C14EEC.8060300@relativita.com> <48C155BF.7080008@relativita.com>
	<48C16C52.1040101@scipy.org> <48C1B7D5.9060006@relativita.com>
	<48C1BA01.4000501@relativita.com> <48C22E59.8030208@scipy.org>
	<48C3871C.3000109@relativita.com>
Message-ID: <48C38C5C.1080202@scipy.org>

Emanuele Olivetti wrote:
> Thanks Dmitrey,
>
> I've just tried to update, install and test the latest OpenOpt (SVN),
> and it seems to work well with numpy 1.0.3 + scipy 0.5.2 shipped
> with ubuntu 7.10. [...]
>
> About algencan, I haven't tried it yet but I'm really interested. [...]
> Anyway, I've added an optional log scale to my code, which is similar
> to setting the lower bound I need:
> problem.lb = N.zeros(problem.n) + contol
>
> Best,
>
> Emanuele

AFAIK currently (latest subversion) there are no OO solvers where adding
contol to box bounds matters. As for the next OO release, I intend to have
it done by *September 15*.

Regards, D.

From gyromagnetic at gmail.com  Sun Sep  7 11:37:28 2008
From: gyromagnetic at gmail.com (gfunch)
Date: Sun, 7 Sep 2008 09:37:28 -0600
Subject: [SciPy-user] segmentation fault in scipy.test()
Message-ID: 

Hi,
I have tried to build SciPy (svn) on an x86_64 system running Linux
CentOS5, but ran into a segmentation fault while running the tests.

I first compiled and installed Lapack and ATLAS per some instructions
I found on the web. I then built and installed numpy (which tested fine)
and then built scipy.
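(For a from-source build like the one described here, numpy.distutils can be
pointed at a custom ATLAS explicitly via a site.cfg next to setup.py; the
section and key names follow the numpy conventions of the time, and the path
and library names are the ones that appear in this report's build output:

    [atlas]
    library_dirs = /home/gf/local/lib/atlas
    atlas_libs = lapack, ptf77blas, ptcblas, atlas

The build log later in this thread shows exactly these libraries being found,
so detection itself was not the problem in this case.)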
What commands should I run, and output should I generate, to best diagnose the problem, and perhaps seek your kind help? Thanks. -gyro From david at ar.media.kyoto-u.ac.jp Sun Sep 7 11:27:37 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 08 Sep 2008 00:27:37 +0900 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: References: Message-ID: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> gfunch wrote: > Hi, > I have tried to build SciPy (svn) on an x86_64 system running Linux > CentOS5, but ran into a segmentation fault while running the tests. > > I first compiled and installed Lapack and ATLAS per some instructions > I found the web. I then built and installed numpy (which tested fine) > and then built scipy. > > What commands should I run, and output should I generate, to best > diagnose the problem, and perhaps seek your kind help? > Hi Gyro, Sorry for the bug in scipy. Could you give us the output when you run the tests (e.g. which test failed and segfaulted ?). Something useful would be the build log (output when run python setup.py build/install; we need the output when build from scratch, e.g. after having removed the build directory). If you are familiar with gdb, something useful may be a backtrace, but first, I would like to check whether it is not a problem related to ATLAS/Lapack, cheers, David From gyromagnetic at gmail.com Sun Sep 7 12:15:22 2008 From: gyromagnetic at gmail.com (gfunch) Date: Sun, 7 Sep 2008 10:15:22 -0600 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> Message-ID: On Sun, Sep 7, 2008 at 9:27 AM, David Cournapeau wrote: > gfunch wrote: >> Hi, >> I have tried to build SciPy (svn) on an x86_64 system running Linux >> CentOS5, but ran into a segmentation fault while running the tests. >> >> I first compiled and installed Lapack and ATLAS per some instructions >> I found the web. I then built and installed numpy (which tested fine) >> and then built scipy. >> >> What commands should I run, and output should I generate, to best >> diagnose the problem, and perhaps seek your kind help? >> > > Hi Gyro, > > Sorry for the bug in scipy. Could you give us the output when you > run the tests (e.g. which test failed and segfaulted ?). Something > useful would be the build log (output when run python setup.py > build/install; we need the output when build from scratch, e.g. after > having removed the build directory). > If you are familiar with gdb, something useful may be a backtrace, > but first, I would like to check whether it is not a problem related to > ATLAS/Lapack, > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi David, Thanks for the very prompt reply. Below is the output you requested. -gyro ----- Here is the output of the tests: $ python Python 2.5.2 (r252:60911, Sep 5 2008, 07:14:57) [GCC 4.1.2 20070626 (Red Hat 4.1.2-14)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.2.0.dev5741 NumPy is installed in /home/gf/local/lib/python2.5/site-packages/numpy SciPy version 0.7.0.dev4692 SciPy is installed in /home/gf/local/lib/python2.5/site-packages/scipy Python version 2.5.2 (r252:60911, Sep 5 2008, 07:14:57) [GCC 4.1.2 20070626 (Red Hat 4.1.2-14)] nose version 0.10.3 ... [snip: a bunch of test results] ... warnings.warn(str1, DeprecationWarning) .........E...........................FF.............ATLAS version 3.8.2 built by gf on Fri Sep 5 11:20:02 MDT 2008: UNAME : Linux 2.6.18-53.el5xen #1 SMP Mon Nov 12 02:46:57 EST 2007 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PIII -DATL_CPUMHZ=2493 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 0 F77 : gfortran, version GNU Fortran (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14) F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 SMC : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14) SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 SKC : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14) SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64 .............................................F.Segmentation fault ----- Here is the top of the output from $ python setup.py build mkl_info: libraries mkl,vml,guide not found in /home/gf/local/lib/ libraries mkl,vml,guide not found in /home/gf/local/lib64/ libraries mkl,vml,guide not found in /home/gf/local/lib/atlas libraries mkl,vml,guide not found in /usr/local/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /home/gf/local/lib/ libraries fftw3 not found in /home/gf/local/lib64/ libraries fftw3 not found in /home/gf/local/lib/atlas libraries fftw3 not found in /usr/local/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /home/gf/local/lib/ libraries rfftw,fftw not found in /home/gf/local/lib64/ libraries rfftw,fftw not found in /home/gf/local/lib/atlas libraries rfftw,fftw not found in /usr/local/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /home/gf/local/lib/ libraries drfftw,dfftw not found in /home/gf/local/lib64/ libraries drfftw,dfftw not found in /home/gf/local/lib/atlas libraries drfftw,dfftw not found in /usr/local/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /home/gf/local/lib/ libraries mkl,vml,guide not found in /home/gf/local/lib64/ libraries mkl,vml,guide not found in /home/gf/local/lib/atlas libraries mkl,vml,guide not found in /usr/local/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLASATLAS Setting PTATLASATLAS Setting PTATLASATLAS FOUND: libraries ['ptf77blas', 'ptcblas', 'atlas'] library_dirs ['/home/gf/local/lib/atlas'] language c customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o 
-L/home/gf/local/lib/atlas -lptf77blas -lptcblas -latlas -o _configtest
ATLAS version 3.8.2 built by gf on Fri Sep 5 11:20:02 MDT 2008:
   UNAME    : Linux 2.6.18-53.el5xen #1 SMP Mon Nov 12 02:46:57 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
   INSTFLG  : -1 0 -a 1
   ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PIII -DATL_CPUMHZ=2493 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
   F2CDEFS  : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
   CACHEEDGE: 0
   F77      : gfortran, version GNU Fortran (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
   SMC      : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
   SKC      : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
success!
removing: _configtest.c _configtest.o _configtest
  FOUND:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/gf/local/lib/atlas']
    language = c
    define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')]

  ATLAS version 3.8.2
lapack_opt_info:
lapack_mkl_info:
  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries lapack_atlas not found in /home/gf/local/lib/atlas
numpy.distutils.system_info.atlas_threads_info
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
  FOUND:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/gf/local/lib/atlas']
    language = f77

gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':

/* This file is generated from numpy/distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
  ATL_buildinfo();
  return 0;
}

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/home/gf/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -o _configtest
ATLAS version 3.8.2 built by gf on Fri Sep 5 11:20:02 MDT 2008:
   UNAME    : Linux 2.6.18-53.el5xen #1 SMP Mon Nov 12 02:46:57 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
   INSTFLG  : -1 0 -a 1
   ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_PIII -DATL_CPUMHZ=2493 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
   F2CDEFS  : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
   CACHEEDGE: 0
   F77      : gfortran, version GNU Fortran (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
   SMC      : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
   SKC      : gcc, version gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)
   SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m64
success!
removing: _configtest.c _configtest.o _configtest
  FOUND:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/gf/local/lib/atlas']
    language = f77
    define_macros = [('ATLAS_INFO', '"\\"3.8.2\\""')]

  ATLAS version 3.8.2
  ATLAS version 3.8.2

umfpack_info:
  libraries umfpack not found in /home/gf/local/lib/
  libraries umfpack not found in /home/gf/local/lib64/
  libraries umfpack not found in /home/gf/local/lib/atlas
  libraries umfpack not found in /usr/local/lib
  NOT AVAILABLE

From david at ar.media.kyoto-u.ac.jp Sun Sep 7 12:04:18 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 08 Sep 2008 01:04:18 +0900 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> Message-ID: <48C3FB82.7040603@ar.media.kyoto-u.ac.jp>

gfunch wrote:
> Running unit tests for scipy
> NumPy version 1.2.0.dev5741
> NumPy is installed in /home/gf/local/lib/python2.5/site-packages/numpy
> SciPy version 0.7.0.dev4692
> SciPy is installed in /home/gf/local/lib/python2.5/site-packages/scipy
> Python version 2.5.2 (r252:60911, Sep 5 2008, 07:14:57) [GCC 4.1.2
> 20070626 (Red Hat 4.1.2-14)]
> nose version 0.10.3
>
> ... [snip: a bunch of test results] ...
>
>

Sorry, I forgot to tell you to run the test in verbose mode (scipy.test(verbose = 10)), because otherwise, the output is useless for debugging purposes.

David

From gyromagnetic at gmail.com Sun Sep 7 12:30:42 2008 From: gyromagnetic at gmail.com (gfunch) Date: Sun, 7 Sep 2008 10:30:42 -0600 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: <48C3FB82.7040603@ar.media.kyoto-u.ac.jp> References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> <48C3FB82.7040603@ar.media.kyoto-u.ac.jp> Message-ID:

On Sun, Sep 7, 2008 at 10:04 AM, David Cournapeau wrote:
>
> Sorry, I forgot to tell you to run the test in verbose mode
> (scipy.test(verbose = 10)), because otherwise, the output is useless for
> debugging purposes.
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

Hi David,
Below are the 'non-ok' results (and context) from the output of scipy.test(verbose=10).
-gyro

test_recasts (test_recaster.TestRecaster) ... ok
test_smallest_int_sctype (test_recaster.TestRecaster) ... ok
Failure: ImportError (/home/breisfel/local/lib/python2.5/site-packages/scipy/lib/blas/fblas.so: undefined symbol: _gfortran_st_write_done) ... ERROR
test_lapack.test_all_lapack ... ok
test_lapack.test_all_lapack ... ok
test_lapack.test_all_lapack ... ok
test_lapack.test_all_lapack ... ok
test_lapack.test_all_lapack ... FAIL
test_lapack.test_all_lapack ... FAIL
test_lapack.test_all_lapack ... ok
test_lapack.test_all_lapack ... ok
test_fblas (test_blas.TestBLAS) ... ok
test_axpy (test_blas.TestCBLAS1Simple) ... ok
test_amax (test_blas.TestFBLAS1Simple) ... ok
test_asum (test_blas.TestFBLAS1Simple) ... FAIL
test_axpy (test_blas.TestFBLAS1Simple) ... ok
test_complex_dotc (test_blas.TestFBLAS1Simple) ...
Segmentation fault

From david at ar.media.kyoto-u.ac.jp Sun Sep 7 12:17:54 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 08 Sep 2008 01:17:54 +0900 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> <48C3FB82.7040603@ar.media.kyoto-u.ac.jp> Message-ID: <48C3FEB2.3060601@ar.media.kyoto-u.ac.jp>

gfunch wrote:
>
> Hi David,
> Below are the 'non-ok' results (and context) from the output of
> scipy.test(verbose=10).
>

As I suspected, a fortran/atlas issue. I guess you have both g77 and gfortran, then you built atlas with gfortran for the F77 interface, and used g77 to build scipy. This cannot work. You should rebuild both numpy and scipy from scratch (delete both installation directories AND build directories), and use gfortran:

python setup.py build --fcompiler=gnu95 ...

cheers,

David

From gyromagnetic at gmail.com Sun Sep 7 12:53:37 2008 From: gyromagnetic at gmail.com (gfunch) Date: Sun, 7 Sep 2008 10:53:37 -0600 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: <48C3FEB2.3060601@ar.media.kyoto-u.ac.jp> References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> <48C3FB82.7040603@ar.media.kyoto-u.ac.jp> <48C3FEB2.3060601@ar.media.kyoto-u.ac.jp> Message-ID:

On Sun, Sep 7, 2008 at 10:17 AM, David Cournapeau wrote:
> gfunch wrote:
>>
>> Hi David,
>> Below are the 'non-ok' results (and context) from the output of
>> scipy.test(verbose=10).
>>
>
> As I suspected, a fortran/atlas issue. I guess you have both g77 and
> gfortran, then you built atlas with gfortran for the F77 interface, and
> used g77 to build scipy. This cannot work. You should rebuild both numpy
> and scipy from scratch (delete both installation directories AND build
> directories), and use gfortran:
>
> python setup.py build --fcompiler=gnu95 ...
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

Hi David,
Yes, that's it! Thanks.

I do get a few errors and failures:

ERROR: Failure: AttributeError ('module' object has no attribute 'knownfailureif')
FAIL: test_lapack.test_all_lapack
AssertionError: Arrays are not almost equal
FAILED (SKIP=14, errors=3, failures=2)

Is this 'normal'? Thanks again!
-gyro

From david at ar.media.kyoto-u.ac.jp Sun Sep 7 12:48:45 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 08 Sep 2008 01:48:45 +0900 Subject: [SciPy-user] segmentation fault in scipy.test() In-Reply-To: References: <48C3F2E9.4050906@ar.media.kyoto-u.ac.jp> <48C3FB82.7040603@ar.media.kyoto-u.ac.jp> <48C3FEB2.3060601@ar.media.kyoto-u.ac.jp> Message-ID: <48C405ED.8000006@ar.media.kyoto-u.ac.jp>

gfunch wrote:
> Hi David,
> Yes, that's it! Thanks.
>

Cool

> I do get a few errors and failures:
> ERROR: Failure: AttributeError ('module' object has no attribute
> 'knownfailureif')
>

You should update your numpy to a more recent version: those are really recent (a couple of hours ago) changes for the soon-to-be-released numpy 1.2, which the next scipy release will depend on.
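For reference, the whole update-and-rebuild cycle would look something like this (untested; the svn checkout layout and the $HOME/local prefix are assumptions based on your build log, so adjust them to your setup):

# wipe the old installs, as above
rm -rf $HOME/local/lib/python2.5/site-packages/numpy
rm -rf $HOME/local/lib/python2.5/site-packages/scipy

# refresh and rebuild numpy from a clean build directory
cd numpy
svn update
rm -rf build
python setup.py build --fcompiler=gnu95 install --prefix=$HOME/local

# then rebuild scipy against the new numpy, again forcing gfortran
cd ../scipy
rm -rf build
python setup.py build --fcompiler=gnu95 install --prefix=$HOME/local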
Those are harmless, though, if you don't want to waste time on updating numpy.

cheers,

David

From david.strozzi at gmail.com Sun Sep 7 22:18:56 2008 From: david.strozzi at gmail.com (David Strozzi) Date: Sun, 7 Sep 2008 19:18:56 -0700 Subject: [SciPy-user] modified bessel function of the 2nd/3rd kind Message-ID:

Folks,

scipy.special contains these Bessel functions:

Bessel Functions
----------------
* jn -- Bessel function of integer order and real argument.
* jv -- Bessel function of real-valued order and complex argument.
* jve -- Exponentially scaled Bessel function.
* yn -- Bessel function of second kind (integer order).
* yv -- Bessel function of the second kind (real-valued order).
* yve -- Exponentially scaled Bessel function of the second kind.
* kn -- Modified Bessel function of the third kind (integer order).
* kv -- Modified Bessel function of the third kind (real order).
* kve -- Exponentially scaled modified Bessel function of the third kind.

As has been noted on this list before, kn and kv are also, in fact much more commonly, referred to as modified Bessel functions of the *2nd* kind. Can this info please be updated? Is there any good reason not to have the help message say "of the second kind (sometimes called the third kind)"?

As someone who works with these I have never heard them called the 3rd kind, only the 2nd.

Thanks,
Dave Strozzi

From bernardo.rocha at meduni-graz.at Mon Sep 8 11:03:49 2008 From: bernardo.rocha at meduni-graz.at (bernardo martins rocha) Date: Mon, 08 Sep 2008 17:03:49 +0200 Subject: [SciPy-user] Illegal Instruction Message-ID: <48C53ED5.3060905@meduni-graz.at>

Hi Guys,

I'm trying to use scipy.optimize.leastsq but whenever I call it I get the following error:

"In [13]: p0 = [0.5,1,0.5,1] # initial guesses
In [14]: guessfit = dbexpl(t,p0)
In [15]: pbest = leastsq(residuals,p0,args=(data,t),full_output=1)
Illegal instruction"

What does this error mean? I don't think it's an error in the way I'm calling the method because I haven't changed my script, which was working fine some weeks ago.

Thanks in advance!
Bernardo M. Rocha

From robert.kern at gmail.com Mon Sep 8 11:47:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 8 Sep 2008 10:47:14 -0500 Subject: [SciPy-user] Illegal Instruction In-Reply-To: <48C53ED5.3060905@meduni-graz.at> References: <48C53ED5.3060905@meduni-graz.at> Message-ID: <3d375d730809080847j1b0ef3f7ydf26f811784143a6@mail.gmail.com>

On Mon, Sep 8, 2008 at 10:03, bernardo martins rocha wrote:
> Hi Guys,
>
> I'm trying to use scipy.optimize.leastsq but whenever I call it I get
> the following error:
>
> "In [13]: p0 = [0.5,1,0.5,1] # initial guesses
>
> In [14]: guessfit = dbexpl(t,p0)
>
> In [15]: pbest = leastsq(residuals,p0,args=(data,t),full_output=1)
> Illegal instruction"
>
> What does this error mean? I don't think it's an error in the way I'm
> calling the method because I haven't changed my script, which was working
> fine some weeks ago.

Are you on Windows? Which scipy binaries are you using? Does your CPU support SSE2?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From contact at pythonxy.com Mon Sep 8 14:01:29 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Mon, 08 Sep 2008 20:01:29 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.5 Message-ID: <48C56879.7070607@pythonxy.com>

Hi all,

As you may already know, Python(x,y) is a free scientific-oriented Python distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.0.5 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, Light Edition and Update)

Changes history
Version 2.0.5 (09-06-2008)

* Updated:
  o Enthought Tool Suite 3.0.0(.1) (docs and examples updated)
  o PyDAP 2.2.6.5
  o xy 1.0.5 (New shortcuts and help links)
* Corrected:
  o VTK: VTKData folder was not found by the example scripts (see VTK documentation folder)
  o Eclipse/Windows Vista: Java RE updated (version 6 Update 7) in Eclipse main installer - the previous version was freezing on some machines under Windows Vista
  o Console 2 (New settings)
  o Notepad++ (New Console 2 settings)

Regards,
Pierre Raybaut

From pav at iki.fi Mon Sep 8 14:59:05 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 8 Sep 2008 18:59:05 +0000 (UTC) Subject: [SciPy-user] modified bessel function of the 2nd/3rd kind References: Message-ID:

Sun, 07 Sep 2008 19:18:56 -0700, David Strozzi wrote:
> Folks,
>
> scipy.special contains these Bessel functions:
>
> Bessel Functions
> ----------------
[clip]
>
> As has been noted on this list before, kn and kv are also, in fact much
> more commonly, referred to as modified Bessel functions of the *2nd*
> kind. Can this info please be updated? Is there any good reason not to
> have the help message say "of the second kind (sometimes called the
> third kind)"?
>
> As someone who works with these I have never heard them called the 3rd
> kind, only the 2nd.

Changed in r4704; "of the third kind" seems to be some kind of historical remnant that's rarely used today.

-- Pauli Virtanen

From bernardo.rocha at meduni-graz.at Mon Sep 8 15:59:50 2008 From: bernardo.rocha at meduni-graz.at (Bernardo Martins Rocha) Date: Mon, 08 Sep 2008 21:59:50 +0200 Subject: [SciPy-user] Illegal instruction Message-ID: <48C5A0560200002A0001C194@si062.meduni-graz.at>

Hi.. I'm running openSUSE 11 on an AMD Opteron x64, so I think the CPU supports SSE2. I'm using the scipy binaries provided by the opensuse-science repository. The strange thing is that I also tried some examples (written by Travis Oliphant) and I got the same error when I call leastsq. I did the scipy.test(level=1) and I got this error too. Should I try to install the scipy from the svn repository?

Thanks!
Best regards,
Bernardo M. Rocha

Message: 3 Date: Mon, 08 Sep 2008 17:03:49 +0200 From: bernardo martins rocha Subject: [SciPy-user] Illegal Instruction To: scipy-user at scipy.org Message-ID: <48C53ED5.3060905 at meduni-graz.at> Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Guys,

I'm trying to use scipy.optimize.leastsq but whenever I call it I get the following error:

"In [13]: p0 = [0.5,1,0.5,1] # initial guesses
In [14]: guessfit = dbexpl(t,p0)
In [15]: pbest = leastsq(residuals,p0,args=(data,t),full_output=1)
Illegal instruction"

What does this error mean? I don't think it's an error in the way I'm calling the method because I haven't changed my script, which was working fine some weeks ago.

Thanks in advance!
Bernardo M.
Rocha

------------------------------

Message: 4 Date: Mon, 8 Sep 2008 10:47:14 -0500 From: "Robert Kern" Subject: Re: [SciPy-user] Illegal Instruction To: "SciPy Users List" Message-ID: <3d375d730809080847j1b0ef3f7ydf26f811784143a6 at mail.gmail.com> Content-Type: text/plain; charset=UTF-8

On Mon, Sep 8, 2008 at 10:03, bernardo martins rocha wrote:
> Hi Guys,
>
> I'm trying to use scipy.optimize.leastsq but whenever I call it I get
> the following error:
>
> "In [13]: p0 = [0.5,1,0.5,1] # initial guesses
>
> In [14]: guessfit = dbexpl(t,p0)
>
> In [15]: pbest = leastsq(residuals,p0,args=(data,t),full_output=1)
> Illegal instruction"
>
> What does this error mean? I don't think it's an error in the way I'm
> calling the method because I haven't changed my script, which was working
> fine some weeks ago.

Are you on Windows? Which scipy binaries are you using? Does your CPU support SSE2?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

------------------------------

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

End of SciPy-user Digest, Vol 61, Issue 10
******************************************

From FDU.xiaojf at gmail.com Tue Sep 9 04:47:47 2008 From: FDU.xiaojf at gmail.com (xiaojf) Date: Tue, 9 Sep 2008 01:47:47 -0700 (PDT) Subject: [SciPy-user] maybe a bug in scipy.io.loadmat Message-ID:

Hi all,

I found that scipy.io.loadmat(filename) doesn't work when filename contains '/' instead of '\\' as the file path separator.

"""
Help on function loadmat in module scipy.io.mio:

loadmat(file_name, mdict=None, appendmat=True, basename='raw', **kwargs)
    Load Matlab(tm) file

    file_name - Name of the mat file (do not need .mat extension if appendmat==True)
    If name not a full path name, search for the file on the sys.path list and use the first one found (the current directory is searched first). Can also pass open file-like object
"""

The problem is in the function find_mat_file, which tries to find the file in the directories listed in sys.path when there is no os.sep in file_name. I couldn't understand why find_mat_file() tries to find the .mat file in the directories listed in sys.path, since sys.path is the module search path, not a data file search path.

def find_mat_file(file_name, appendmat=True):
    ''' Try to find .mat file on system path

    file_name - file name string
    append_mat - If True, and file_name does not end in '.mat', appends it
    '''
    if appendmat and file_name[-4:] == ".mat":
        file_name = file_name[:-4]
    if os.sep in file_name:
        full_name = file_name
        if appendmat:
            full_name = file_name + ".mat"
    else:
        full_name = None
        junk, file_name = os.path.split(file_name)
        for path in sys.path:
            test_name = os.path.join(path, file_name)
            if appendmat:
                test_name += ".mat"
            try:
                fid = open(test_name,'rb')
                fid.close()
                full_name = test_name
                break
            except IOError:
                pass
    return full_name

From cyril.giraudon at free.fr Tue Sep 9 09:37:26 2008 From: cyril.giraudon at free.fr (cyril giraudon) Date: Tue, 09 Sep 2008 15:37:26 +0200 Subject: [SciPy-user] butterworth filter Message-ID: <48C67C16.9030701@free.fr>

Hi,

I use scipy 0.6.0 and I try to reproduce the plot from the matlab butter function web documentation (first response for a google request "matlab butter example").
The matlab code is:

[z,p,k] = butter(9,300/500,'high');
[sos,g] = zp2sos(z,p,k);      % Convert to SOS form
Hd = dfilt.df2tsos(sos,g);    % Create a dfilt object
h = fvtool(Hd);               % Plot magnitude response
set(h,'Analysis','freq')      % Display frequency response

In scipy, I write:

from scipy.signal import butter, freqz
from pylab import show, grid, log, plot

b, a = butter(9, 300./500., 'high')
fi = freqz(b, a)
plot(fi[0], 20*log(abs(fi[1])))
grid()
show()

Why are the two filters not the same?

Thanks a lot,

Cyril.

From kdere at gmu.edu Tue Sep 9 12:12:00 2008 From: kdere at gmu.edu (Ken Dere) Date: Tue, 9 Sep 2008 16:12:00 +0000 (UTC) Subject: [SciPy-user] need an IDL-like rebin function Message-ID:

I have tried to use zoom as a rebin-like function but its behavior at the edges of the array is not acceptable and I don't seem to be able to influence this behavior. Actually, I don't even want interpolation at this point.

advice appreciated

Ken Dere

From gael.varoquaux at normalesup.org Tue Sep 9 12:17:44 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 9 Sep 2008 18:17:44 +0200 Subject: [SciPy-user] need an IDL-like rebin function In-Reply-To: References: Message-ID: <20080909161744.GC28438@phare.normalesup.org>

On Tue, Sep 09, 2008 at 04:12:00PM +0000, Ken Dere wrote:
> I have tried to use zoom as a rebin-like function but its behavior at the
> edges of the array is not acceptable and I don't seem to be able to influence
> this behavior. Actually, I don't even want interpolation at this point.

You might find useful examples on the following page:

http://www.scipy.org/Cookbook/Rebinning

HTH,

Gaël

From bryan at ideotrope.org Tue Sep 9 12:48:14 2008 From: bryan at ideotrope.org (Bryan Keith) Date: Tue, 9 Sep 2008 10:48:14 -0600 (MDT) Subject: [SciPy-user] scipy.test fails: clapack module is empty Message-ID: <4967.64.78.232.178.1220978894.squirrel@ideotrope.org>

Hello,

I'm trying to install scipy (0.6.0) on Ubuntu 8.04 64 bit with Python 2.5.2 and numpy 1.0.4. Installation (via apt-get) seems to go fine, but the test suite fails. I'm not sure how concerned I should be about this failure. I've searched for some of the errors that I'm getting but couldn't find anything that would either resolve the failures or convince me it is safe to ignore them. I'm pasting the test results below. Any help is appreciated. Thank you.

Bryan

>>> import scipy
>>> scipy.test(level=1)
  Found 9/9 tests for scipy.cluster.vq
  Found 18/18 tests for scipy.fftpack.basic
  Found 4/4 tests for scipy.fftpack.helper
  Found 20/20 tests for scipy.fftpack.pseudo_diffs
  Found 1/1 tests for scipy.integrate
  Found 10/10 tests for scipy.integrate.quadpack
  Found 3/3 tests for scipy.integrate.quadrature
  Found 6/6 tests for scipy.interpolate
  Found 6/6 tests for scipy.interpolate.fitpack
  Found 4/4 tests for scipy.io.array_import
  Found 28/28 tests for scipy.io.mio
  Found 13/13 tests for scipy.io.mmio
  Found 5/5 tests for scipy.io.npfile
  Found 4/4 tests for scipy.io.recaster
  Found 16/16 tests for scipy.lib.blas
  Found 128/128 tests for scipy.lib.blas.fblas

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py,
  then scipy uses flapack instead of clapack.
**************************************************************** Found 42/42 tests for scipy.lib.lapack Found 41/41 tests for scipy.linalg.basic Found 16/16 tests for scipy.linalg.blas Found 72/72 tests for scipy.linalg.decomp Found 128/128 tests for scipy.linalg.fblas Found 6/6 tests for scipy.linalg.iterative Found 4/4 tests for scipy.linalg.lapack Found 7/7 tests for scipy.linalg.matfuncs Found 9/9 tests for scipy.linsolve.umfpack Found 2/2 tests for scipy.maxentropy Found 3/3 tests for scipy.misc.pilutil Found 399/399 tests for scipy.ndimage Found 5/5 tests for scipy.odr Found 8/8 tests for scipy.optimize Found 1/1 tests for scipy.optimize.cobyla Found 10/10 tests for scipy.optimize.nonlin Found 4/4 tests for scipy.optimize.zeros Found 5/5 tests for scipy.signal.signaltools Found 4/4 tests for scipy.signal.wavelets Found 152/152 tests for scipy.sparse Found 342/342 tests for scipy.special.basic Found 3/3 tests for scipy.special.spfun_stats Found 107/107 tests for scipy.stats Found 73/73 tests for scipy.stats.distributions Found 10/10 tests for scipy.stats.morestats Found 0/0 tests for __main__ .../usr/lib/python2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006987327e-07 ..................../usr/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ............................FF....................................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...........................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .......... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. 
Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.11929781998e-08 ...Result may be inaccurate, approximate err = 7.73070496507e-12 ......Use minimum degree ordering on A'+A. ..Use minimum degree ordering on A'+A. F..Use minimum degree ordering on A'+A. F............................................................................................................/usr/lib/python2.5/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ........................................................................................................................................................................................................................................................................................................F..F.........................................................Use minimum degree ordering on A'+A. .....................................Use minimum degree ordering on A'+A. .....................................Use minimum degree ordering on A'+A. ................................Use minimum degree ordering on A'+A. ....................................................................................................................................................................................................................................................................................................................................................0.2 0.2 0.2 ......0.2 ..0.2 0.2 0.2 0.2 0.2 .........................................................................................................................................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... 
====================================================================== FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769474, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769474, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: Solve: single precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 32, in check_solve_complex_without_umfpack assert_array_almost_equal(a*x, b) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 20.0%) x: array([ 1.00000000+0.j, 1.99999809+0.j, 3.00000000+0.j, 4.00000048+0.j, 5.00000000+0.j], dtype=complex64) y: array([1, 2, 3, 4, 5]) ====================================================================== FAIL: Solve: single precision ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 43, in check_solve_without_umfpack assert_array_almost_equal(a*x, b) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 20.0%) x: array([ 1. , 1.99999809, 3. , 4.00000048, 5. 
], dtype=float32) y: array([1, 2, 3, 4, 5]) ====================================================================== FAIL: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 50, in test_explicit -8.7849712165253724e-02]), File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.26462971e+03, -5.42545890e+01, -8.64250389e-02]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) ====================================================================== FAIL: test_multi (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 191, in test_multi 0.5101147161764654, 0.5173902330489161]), File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.31272063, 2.44289312, 7.76215871, 0.55995622, 0.46423343]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ---------------------------------------------------------------------- Ran 1728 tests in 7.579s FAILED (failures=6) >>> From bernardo.rocha at meduni-graz.at Tue Sep 9 13:02:46 2008 From: bernardo.rocha at meduni-graz.at (bernardo martins rocha) Date: Tue, 09 Sep 2008 19:02:46 +0200 Subject: [SciPy-user] scipy.test fails: clapack module is empty (Bryan Keith) In-Reply-To: References: Message-ID: <48C6AC36.6030808@meduni-graz.at> Hi Bryan Keith, > 8. scipy.test fails: clapack module is empty (Bryan Keith) > I've got the same error with my installation (opensuse11 via yast --- science repository), please have a look at the message below. I also have a "Illegal Instruction" error in the end of the test, which is worse than your problem. I would like to get rid off this. Any suggestions? Thanks! 
-------------------------- In [1]: import scipy In [2]: scipy.test() Failed importing scipy.linsolve.umfpack: 'module' object has no attribute 'umfpack' Found 9/9 tests for scipy.cluster.tests.test_vq Found 20/20 tests for scipy.fftpack.tests.test_pseudo_diffs Found 4/4 tests for scipy.fftpack.tests.test_helper Found 18/18 tests for scipy.fftpack.tests.test_basic Found 1/1 tests for scipy.integrate.tests.test_integrate Found 3/3 tests for scipy.integrate.tests.test_quadrature Found 10/10 tests for scipy.integrate.tests.test_quadpack Found 6/6 tests for scipy.tests.test_fitpack Found 6/6 tests for scipy.tests.test_interpolate Found 28/28 tests for scipy.io.tests.test_mio Found 4/4 tests for scipy.io.tests.test_recaster Found 5/5 tests for scipy.io.tests.test_npfile Found 13/13 tests for scipy.io.tests.test_mmio Found 4/4 tests for scipy.io.tests.test_array_import Found 16/16 tests for scipy.lib.blas.tests.test_blas Found 128/128 tests for scipy.lib.blas.tests.test_fblas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42/42 tests for scipy.lib.lapack.tests.test_lapack Found 4/4 tests for scipy.linalg.tests.test_lapack Found 16/16 tests for scipy.linalg.tests.test_blas Found 6/6 tests for scipy.linalg.tests.test_iterative Found 41/41 tests for scipy.linalg.tests.test_basic Found 128/128 tests for scipy.linalg.tests.test_fblas Found 7/7 tests for scipy.linalg.tests.test_matfuncs Found 72/72 tests for scipy.linalg.tests.test_decomp Failed importing /usr/lib64/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py: 'module' object has no attribute 'umfpack' Found 2/2 tests for scipy.maxentropy.tests.test_maxentropy Found 3/3 tests for scipy.misc.tests.test_pilutil Found 399/399 tests for scipy.ndimage.tests.test_ndimage Found 5/5 tests for scipy.odr.tests.test_odr Found 4/4 tests for scipy.optimize.tests.test_zeros Found 1/1 tests for scipy.optimize.tests.test_cobyla Found 8/8 tests for scipy.optimize.tests.test_optimize Found 10/10 tests for scipy.optimize.tests.test_nonlin Found 5/5 tests for scipy.signal.tests.test_signaltools Found 4/4 tests for scipy.signal.tests.test_wavelets Found 152/152 tests for scipy.sparse.tests.test_sparse Found 3/3 tests for scipy.special.tests.test_spfun_stats Found 342/342 tests for scipy.special.tests.test_basic Found 10/10 tests for scipy.stats.tests.test_morestats Found 107/107 tests for scipy.stats.tests.test_stats Found 73/73 tests for scipy.stats.tests.test_distributions Found 0/0 tests for scipy.weave.tests.test_c_spec Found 1/1 tests for scipy.weave.tests.test_ast_tools Found 9/9 tests for scipy.weave.tests.test_build_tools Found 2/2 tests for scipy.weave.tests.test_blitz_tools building extensions here: /home/rocha/.python25_compiled/m11 Found 1/1 tests for scipy.weave.tests.test_ext_tools Found 0/0 tests for scipy.weave.tests.test_inline_tools Found 26/26 tests for scipy.weave.tests.test_catalog Found 0/0 tests for scipy.weave.tests.test_scxx_sequence Found 74/74 tests for scipy.weave.tests.test_size_check Failed importing /usr/lib64/python2.5/site-packages/scipy/weave/tests/old_test_wx_spec.py: Could not locate wxPython base directory. 
Found 0/0 tests for scipy.weave.tests.test_scxx_dict Found 0/0 tests for scipy.weave.tests.test_scxx_object Found 3/3 tests for scipy.weave.tests.test_standard_array_spec Found 16/16 tests for scipy.weave.tests.test_slice_handler Failed importing /usr/lib64/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py: Could not locate wxPython base directory. ../usr/lib64/python2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ..........Illegal instruction Bernardo M. Rocha From kdere at gmu.edu Tue Sep 9 14:05:50 2008 From: kdere at gmu.edu (Ken Dere) Date: Tue, 9 Sep 2008 18:05:50 +0000 (UTC) Subject: [SciPy-user] need an IDL-like rebin function References: <20080909161744.GC28438@phare.normalesup.org> Message-ID: Gael Varoquaux normalesup.org> writes: > > On Tue, Sep 09, 2008 at 04:12:00PM +0000, Ken Dere wrote: > > I have tried to use zoom as a rebin-like function but it's behavior at the > > edges of the array is not acceptable and I don't seem to be able to influence > > this behavior. Actually, I don't even want interpolation at this point. > > You might find useful examples on the following page: > > http://www.scipy.org/Cookbook/Rebinning > > HTH, > > Ga?l > Thanks. I looked at them. I could not get them to work and their indexing logic was so complicated I would never be able to fix them. Ken From amcmorl at gmail.com Tue Sep 9 14:57:05 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Tue, 9 Sep 2008 14:57:05 -0400 Subject: [SciPy-user] need an IDL-like rebin function In-Reply-To: References: <20080909161744.GC28438@phare.normalesup.org> Message-ID: 2008/9/9 Ken Dere : > Gael Varoquaux normalesup.org> writes: > >> >> On Tue, Sep 09, 2008 at 04:12:00PM +0000, Ken Dere wrote: >> > I have tried to use zoom as a rebin-like function but it's behavior at the >> > edges of the array is not acceptable and I don't seem to be able to > influence >> > this behavior. Actually, I don't even want interpolation at this point. >> >> You might find useful examples on the following page: >> >> http://www.scipy.org/Cookbook/Rebinning >> >> HTH, >> >> Ga?l >> > > Thanks. I looked at them. I could not get them to work and their indexing > logic was so complicated I would never be able to fix them. What wasn't working? It's good for the cookbook examples to be a) functional and b) well documented, so if there is a problem we should try to fix it. All the examples have worked correctly for me in the past, but I haven't used them for a while. Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From teddy.kord at googlemail.com Tue Sep 9 15:54:02 2008 From: teddy.kord at googlemail.com (Ish Aden) Date: Tue, 9 Sep 2008 20:54:02 +0100 Subject: [SciPy-user] Pytrilinos on Windows Message-ID: <34d365e70809091254x7ec84543y13b032aae141cd6b@mail.gmail.com> Hello Could anyone who's successfully installed Pytrilinos on a Windows machine explain how they did it. Also, are there any mature/close to mature Python PDE solvers out there in addition to 'sfepy'. Thanks in advance. Ted -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgoli at sun.ac.za Tue Sep 9 16:12:29 2008 From: bgoli at sun.ac.za (Brett G. 
Olivier) Date: Tue, 09 Sep 2008 22:12:29 +0200 Subject: [SciPy-user] Pytrilinos on Windows In-Reply-To: <34d365e70809091254x7ec84543y13b032aae141cd6b@mail.gmail.com> References: <34d365e70809091254x7ec84543y13b032aae141cd6b@mail.gmail.com> Message-ID: <48C6D8AD.4080608@sun.ac.za>

Ish Aden wrote:
> Could anyone who's successfully installed Pytrilinos on a Windows
> machine explain how they did it.
>
> Also, are there any mature/close to mature Python PDE solvers out there
> in addition to 'sfepy'.

Have you had a look at FiPy (http://www.ctcms.nist.gov/fipy/)?

Cheers

Brett

From kpdere at verizon.net Tue Sep 9 17:25:22 2008 From: kpdere at verizon.net (Ken Dere) Date: Tue, 9 Sep 2008 21:25:22 +0000 (UTC) Subject: [SciPy-user] need an IDL-like rebin function - operator error References: Message-ID:

Ken Dere <kdere at gmu.edu> writes:
>
> I have tried to use zoom as a rebin-like function but its behavior at the
> edges of the array is not acceptable and I don't seem to be able to influence
> this behavior. Actually, I don't even want interpolation at this point.
>
> advice appreciated
>
> Ken Dere
>

My mistake. It really does work. Sorry for the bother.

Ken

From teddy.kord at googlemail.com Tue Sep 9 19:28:21 2008 From: teddy.kord at googlemail.com (Ted Kord) Date: Tue, 9 Sep 2008 16:28:21 -0700 (PDT) Subject: [SciPy-user] Pytrilinos on Windows In-Reply-To: <48C6D8AD.4080608@sun.ac.za> References: <34d365e70809091254x7ec84543y13b032aae141cd6b@mail.gmail.com> <48C6D8AD.4080608@sun.ac.za> Message-ID: <1bbf2130-7eae-45f1-a504-c54c5462db7d@m45g2000hsb.googlegroups.com>

I'll have a look. Thanks.

Ted

From oliphant at enthought.com Wed Sep 10 00:38:24 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 09 Sep 2008 23:38:24 -0500 Subject: [SciPy-user] NumPy arrays that use memory allocated from other libraries or tools Message-ID: <48C74F40.3090103@enthought.com>

I wanted to point anybody interested to a blog post that describes a useful pattern for having a NumPy array that points to the memory created by a different memory manager than the standard one used by NumPy. The pattern shows how to create a NumPy array that points to previously allocated memory and then shows how to construct an object that allows the correct deallocator to be called when the NumPy array is freed.

This may be useful if you are wrapping code that has its own memory management scheme. Comments and feedback are welcome. The post is

http://blog.enthought.com/?p=62

Best regards,

-Travis Oliphant

From simon.palmer at gmail.com Wed Sep 10 20:41:59 2008 From: simon.palmer at gmail.com (SimonPalmer) Date: Wed, 10 Sep 2008 17:41:59 -0700 (PDT) Subject: [SciPy-user] examples of using norm_gen Message-ID: <510dfcba-748d-4541-91e5-a6fb340bc046@p31g2000prf.googlegroups.com>

can anyone point me in the direction of examples of how to use norm/ norm_gen from the scipy.stats.distributions module?

thanks

Simon

From robert.kern at gmail.com Thu Sep 11 00:05:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Sep 2008 23:05:30 -0500 Subject: [SciPy-user] examples of using norm_gen In-Reply-To: <510dfcba-748d-4541-91e5-a6fb340bc046@p31g2000prf.googlegroups.com> References: <510dfcba-748d-4541-91e5-a6fb340bc046@p31g2000prf.googlegroups.com> Message-ID: <3d375d730809102105v167195ceuf8c0cf9e725c5059@mail.gmail.com>

On Wed, Sep 10, 2008 at 19:41, SimonPalmer wrote:
> can anyone point me in the direction of examples of how to use norm/
> norm_gen from the scipy.stats.distributions module?
The docstring gives a good overview of its capabilities. Is there something specific you found confusing or inadequate about it? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon.palmer at gmail.com Thu Sep 11 00:17:27 2008 From: simon.palmer at gmail.com (SimonPalmer) Date: Wed, 10 Sep 2008 21:17:27 -0700 (PDT) Subject: [SciPy-user] examples of using norm_gen In-Reply-To: <3d375d730809102105v167195ceuf8c0cf9e725c5059@mail.gmail.com> References: <510dfcba-748d-4541-91e5-a6fb340bc046@p31g2000prf.googlegroups.com> <3d375d730809102105v167195ceuf8c0cf9e725c5059@mail.gmail.com> Message-ID: Inadequate, certainly not. Confusing yes. However, I think my problem is really a lack of experience with python. I have read the docstrings, and looking at what the module does it seems to suit my purposes perfectly, I'm slightly embarrassed to say that my lack of python skills means I am baffled about how I would actually use it, hence the request to see some sample code. I'm missing something obvious, I can't blame the module. On Sep 11, 5:05?am, "Robert Kern" wrote: > On Wed, Sep 10, 2008 at 19:41, SimonPalmer wrote: > > can anyone point me in the direction of examples of how to use norm/ > > norm_gen from the scipy.stats.distributions module? > > The docstring gives a good overview of its capabilities. Is there > something specific you found confusing or inadequate about it? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://projects.scipy.org/mailman/listinfo/scipy-user From tonyyu at MIT.EDU Thu Sep 11 11:03:06 2008 From: tonyyu at MIT.EDU (Tony S Yu) Date: Thu, 11 Sep 2008 11:03:06 -0400 Subject: [SciPy-user] Unexpected typecasting when adding sparse.lil_matrix Message-ID: This may be expected behavior, but I found it surprising. Addition (or any other simple operation) of two lil sparse matrices returns a csc sparse matrix. The scipy website suggests that csc and csr matrices are more efficient than lil matrices for multiplication and inversion (and I guess for addition too), but this typecasting is still a little surprising. I was just curious if this is intentional. Thanks, -Tony #~~~~~~~~~ In [1]: import scipy.sparse as sparse In [2]: A = sparse.lil_eye([3, 3]) In [3]: A + A Out[3]: <3x3 sparse matrix of type '' with 3 stored elements (space for 3) in Compressed Sparse Column format> From wnbell at gmail.com Thu Sep 11 11:49:02 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 11 Sep 2008 11:49:02 -0400 Subject: [SciPy-user] Unexpected typecasting when adding sparse.lil_matrix In-Reply-To: References: Message-ID: On Thu, Sep 11, 2008 at 11:03 AM, Tony S Yu wrote: > This may be expected behavior, but I found it surprising. Addition (or > any other simple operation) of two lil sparse matrices returns a csc > sparse matrix. The scipy website suggests that csc and csr matrices > are more efficient than lil matrices for multiplication and inversion > (and I guess for addition too), but this typecasting is still a little > surprising. I was just curious if this is intentional. 
> > Thanks, > -Tony > > #~~~~~~~~~ > > In [1]: import scipy.sparse as sparse > > In [2]: A = sparse.lil_eye([3, 3]) > > In [3]: A + A > > Out[3]: > <3x3 sparse matrix of type '' > with 3 stored elements (space for 3) > in Compressed Sparse Column format> > That is the intended result (currently). The issue here is that only some sparse formats define arithmetic operations, so those that don't are converted to a type that does. Even if lil_matrix did define addition itself, it would almost certainly be slower than conversion to a "native" format (i.e. one with a lower-level C++ implementation). Furthermore, guaranteeing that other operations, such as A.transpose(), return the same type as A would make csr_matrix.transpose() an O(N) method rather than an O(1) method. The current implementation is nice because inexperienced users will often get better performance in subsequent operations. OTOH, as your case illustrates, it can lead to surprises. You can always do (A + B).tolil() if you want the result in a particular format though. I'm not sure if your version of SciPy supports lil_matrix(A + B), but that works now also. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From bryan at ideotrope.org Thu Sep 11 17:33:58 2008 From: bryan at ideotrope.org (Bryan Keith) Date: Thu, 11 Sep 2008 15:33:58 -0600 (MDT) Subject: [SciPy-user] scipy.test fails: clapack module is empty (Bryan Keith) In-Reply-To: <48C6AC36.6030808@meduni-graz.at> References: <48C6AC36.6030808@meduni-graz.at> Message-ID: <4528.64.78.232.178.1221168838.squirrel@ideotrope.org> Bernardo, I have no idea what to do about this error. I was hoping someone on this list might be able to help... Bryan > Hi Bryan Keith, >> 8. scipy.test fails: clapack module is empty (Bryan Keith) >> > I've got the same error with my installation (opensuse11 via yast --- > science repository), please have a look at the message below. I also > have a "Illegal Instruction" error in the end of the test, which is > worse than your problem. I would like to get rid off this. Any > suggestions? > > Thanks! > > -------------------------- > In [1]: import scipy > > In [2]: scipy.test() > Failed importing scipy.linsolve.umfpack: 'module' object has no > attribute 'umfpack' > Found 9/9 tests for scipy.cluster.tests.test_vq > Found 20/20 tests for scipy.fftpack.tests.test_pseudo_diffs > Found 4/4 tests for scipy.fftpack.tests.test_helper > Found 18/18 tests for scipy.fftpack.tests.test_basic > Found 1/1 tests for scipy.integrate.tests.test_integrate > Found 3/3 tests for scipy.integrate.tests.test_quadrature > Found 10/10 tests for scipy.integrate.tests.test_quadpack > Found 6/6 tests for scipy.tests.test_fitpack > Found 6/6 tests for scipy.tests.test_interpolate > Found 28/28 tests for scipy.io.tests.test_mio > Found 4/4 tests for scipy.io.tests.test_recaster > Found 5/5 tests for scipy.io.tests.test_npfile > Found 13/13 tests for scipy.io.tests.test_mmio > Found 4/4 tests for scipy.io.tests.test_array_import > Found 16/16 tests for scipy.lib.blas.tests.test_blas > Found 128/128 tests for scipy.lib.blas.tests.test_fblas > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. 
> **************************************************************** > > Found 42/42 tests for scipy.lib.lapack.tests.test_lapack > Found 4/4 tests for scipy.linalg.tests.test_lapack > '/usr/lib64/python2.5/site-packages/scipy/linalg/fblas.so'> > Found 16/16 tests for scipy.linalg.tests.test_blas > Found 6/6 tests for scipy.linalg.tests.test_iterative > Found 41/41 tests for scipy.linalg.tests.test_basic > Found 128/128 tests for scipy.linalg.tests.test_fblas > Found 7/7 tests for scipy.linalg.tests.test_matfuncs > Found 72/72 tests for scipy.linalg.tests.test_decomp > Failed importing > /usr/lib64/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py: > 'module' object has no attribute 'umfpack' > Found 2/2 tests for scipy.maxentropy.tests.test_maxentropy > Found 3/3 tests for scipy.misc.tests.test_pilutil > Found 399/399 tests for scipy.ndimage.tests.test_ndimage > Found 5/5 tests for scipy.odr.tests.test_odr > Found 4/4 tests for scipy.optimize.tests.test_zeros > Found 1/1 tests for scipy.optimize.tests.test_cobyla > Found 8/8 tests for scipy.optimize.tests.test_optimize > Found 10/10 tests for scipy.optimize.tests.test_nonlin > Found 5/5 tests for scipy.signal.tests.test_signaltools > Found 4/4 tests for scipy.signal.tests.test_wavelets > Found 152/152 tests for scipy.sparse.tests.test_sparse > Found 3/3 tests for scipy.special.tests.test_spfun_stats > Found 342/342 tests for scipy.special.tests.test_basic > Found 10/10 tests for scipy.stats.tests.test_morestats > Found 107/107 tests for scipy.stats.tests.test_stats > Found 73/73 tests for scipy.stats.tests.test_distributions > Found 0/0 tests for scipy.weave.tests.test_c_spec > Found 1/1 tests for scipy.weave.tests.test_ast_tools > Found 9/9 tests for scipy.weave.tests.test_build_tools > Found 2/2 tests for scipy.weave.tests.test_blitz_tools > building extensions here: /home/rocha/.python25_compiled/m11 > Found 1/1 tests for scipy.weave.tests.test_ext_tools > Found 0/0 tests for scipy.weave.tests.test_inline_tools > Found 26/26 tests for scipy.weave.tests.test_catalog > Found 0/0 tests for scipy.weave.tests.test_scxx_sequence > Found 74/74 tests for scipy.weave.tests.test_size_check > Failed importing > /usr/lib64/python2.5/site-packages/scipy/weave/tests/old_test_wx_spec.py: > Could not locate wxPython base directory. > Found 0/0 tests for scipy.weave.tests.test_scxx_dict > Found 0/0 tests for scipy.weave.tests.test_scxx_object > Found 3/3 tests for scipy.weave.tests.test_standard_array_spec > Found 16/16 tests for scipy.weave.tests.test_slice_handler > Failed importing > /usr/lib64/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py: > Could not locate wxPython base directory. > ../usr/lib64/python2.5/site-packages/scipy/cluster/vq.py:477: > UserWarning: One of the clusters is empty. Re-run kmean with a different > initialization. > warnings.warn("One of the clusters is empty. " > exception raised as expected: One of the clusters is empty. Re-run kmean > with a different initialization. > ..........Illegal instruction > > Bernardo M. Rocha > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From mnandris at blueyonder.co.uk Fri Sep 12 07:02:02 2008 From: mnandris at blueyonder.co.uk (Michael) Date: Fri, 12 Sep 2008 12:02:02 +0100 Subject: [SciPy-user] Why doesn't norm_gen have a 'dist' attribute? 
In-Reply-To: References: Message-ID: <1221217322.6305.6.camel@mik>

> Message: 2
> Date: Wed, 10 Sep 2008 23:05:30 -0500
> From: "Robert Kern"
> Subject: Re: [SciPy-user] examples of using norm_gen
> To: "SciPy Users List"
> Message-ID:
> <3d375d730809102105v167195ceuf8c0cf9e725c5059 at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, Sep 10, 2008 at 19:41, SimonPalmer wrote:
> > can anyone point me in the direction of examples of how to use norm/
> > norm_gen from the scipy.stats.distributions module?
>
> The docstring gives a good overview of its capabilities. Is there
> something specific you found confusing or inadequate about it?

norm_gen doesn't appear to _generate_ anything, not directly anyway

a=d.norm_gen(name='norm',longname='a normal')
a is
b=d.norm(x,size=n)
b is

both a and b produce normal distributions that don't 'look' normal - see attached, though this is a bad test since the beta distribution looks totally mangled but is correct

pdf1=a.pdf(x,size=n)
pdf2=b.dist.pdf(x,size=n)

Leaving norm_gen aside for a moment, cf. the lack of the 'dist' attribute in 'a', one might also ask: what does the dist attribute do? Where did it spring from? There's too much padding in the scipy sandwich, so to speak;

e.g. the sheer number of ways of creating scipy distributions is slightly baffling; afaik norm_gen is a base class that is not to be used directly... but that's just a guess.

That said, there is an 85-100 fold speed-up over using a np.random distribution + np.histogram

Is there a guide somewhere explaining how scientific python APIs are packaged/structured? import idioms?

> -- 
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco

-------------- next part -------------- A non-text attachment was scrubbed... Name: norm.py Type: text/x-python Size: 489 bytes Desc: not available URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL:

From arserlom at gmail.com Fri Sep 12 08:09:32 2008 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Fri, 12 Sep 2008 14:09:32 +0200 Subject: [SciPy-user] Why doesn't norm_gen have a 'dist' attribute? In-Reply-To: <1221217322.6305.6.camel@mik> References: <1221217322.6305.6.camel@mik> Message-ID:

Hello Michael. norm_gen is not meant to be used directly; it is just used inside scipy's code to generate the norm class, which is what you should use. For example:
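(assuming numpy for the inputs; n, x and mean below are purely illustrative placeholders, not anything defined in this thread:)

import numpy as np

n = 1000                       # how many random variates to draw
x = np.linspace(-3, 3, 101)    # points at which to evaluate the pdf
mean = 0.5                     # an example location parameter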
>
> norm_gen doesn't appear to _generate_ anything, not directly anyway
>
> a=d.norm_gen(name='norm',longname='a normal')
> a is
> b=d.norm(x,size=n)
> b is
>
> both a and b produce normal distributions that don't 'look' normal - see
> attached, though this is a bad test since the beta distribution looks
> totally mangled but is correct
>
> pdf1=a.pdf(x,size=n)
> pdf2=b.dist.pdf(x,size=n)
>
> Leaving norm_gen aside for a moment, cf. the lack of the 'dist'
> attribute in 'a', one might also ask: what does the dist attribute do?
> Where did it spring from? There's too much padding in the scipy
> sandwich, so to speak;
>
> e.g. the sheer number of ways of creating scipy distributions is
> slightly baffling; afaik norm_gen is a base class that is not to be used
> directly... but that's just a guess.
>
> That said, there is an 85-100 fold speed-up over using a np.random
> distribution + np.histogram
>
> Is there a guide somewhere explaining how scientific python APIs are
> packaged/structured? import idioms?
>
>
> > --
> > Robert Kern
> >
> > "I have come to believe that the whole world is an enigma, a harmless
> > enigma that is made terrible by our own mad attempt to interpret it as
> > though it had an underlying truth."
> > -- Umberto Eco
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Fri Sep 12 11:17:07 2008
From: josef.pktd at gmail.com (joep)
Date: Fri, 12 Sep 2008 08:17:07 -0700 (PDT)
Subject: [SciPy-user] **kwds in frozen distribution class
Message-ID: 

I was trying to work on an example for the frozen distribution class, in
response to the previous message by Michael. The change in method
signature between the frozen and the not-frozen class got me confused: I
looked at the information in help(stats.norm.stats) and did not realize
that the signature in help(stats.norm(loc = 10, scale = 10).stats) or
help(stats.norm(loc = 10, scale = 10)) is different.

Given the description it does what it says, but I think it would be easy
to make the keyword arguments consistent between the frozen and the
not-frozen distribution.

The frozen distribution class does not take any additional keywords
-------------------------------------------------------------------

e.g.

>>> from scipy import stats
>>> stats.norm.stats(loc = 10, scale = 10, moments='v')
array(100.0)
>>> stats.norm(loc = 10, scale = 10).stats(moments='v')
Traceback (most recent call last):
  File "", line 1, in ?
stats.norm(loc = 10, scale = 10).stats(moments='v') TypeError: stats() got an unexpected keyword argument 'moments' >>> stats.norm(loc = 10, scale = 10, moments='v').stats() array(100.0) line numbers from current trunk (http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/stats/ distributions.py) 101 # Frozen RV class 102 class rv_frozen(object): 121 def stats(self): 122 return self.dist.stats(*self.args,**self.kwds) I think this change should work to accept additional keywords: 121 def stats(self,**kwds): kwds.update(self.kwds) 122 return self.dist.stats(*self.args,**kwds) Josef From josef.pktd at gmail.com Fri Sep 12 15:42:39 2008 From: josef.pktd at gmail.com (joep) Date: Fri, 12 Sep 2008 12:42:39 -0700 (PDT) Subject: [SciPy-user] **kwds in frozen distribution class In-Reply-To: References: Message-ID: <10729024-7252-47f7-83e8-7dc4a28ff4e1@w7g2000hsa.googlegroups.com> problems with **kwds in moment method in distributions ====================================================== Summary: -------- * ``moment`` in frozen continuous distribution with loc and scale keyword arguments raises exception * ``moment`` in frozen discrete distribution accepts the presence of loc and scale, but ignores them * ``stats`` method works correctly, but only for first and second moment >>> scipy.version.version '0.6.0' >>> numpy.version.version '1.1.0' example continuous distribution ------------------------------- >>> stats.gamma(4).stats() (array(4.0), array(4.0)) >>> stats.gamma(4).moment(1) 4 >>> stats.gamma(4).moment(2) 4 moment in frozen distribution with loc and scale keyword arguments raises exception >>> stats.gamma(4,loc = 10, scale = 10).stats() (array(50.0), array(400.0)) >>> stats.gamma(4,loc = 10, scale = 10).moment(1) Traceback (most recent call last): File "", line 1, in ? stats.gamma(4,loc = 10, scale = 10).moment(1) File "C:\Programs\Python24\lib\site-packages\scipy\stats \distributions.py", line 124, in moment return self.dist.moment(n,*self.args,**self.kwds) TypeError: moment() got an unexpected keyword argument 'loc' actually moment does not allow for loc and scale in the not-frozen distribution either: >>> stats.gamma.moment(2,4) 4 >>> stats.gamma.stats(4,loc = 10, scale = 10) (array(50.0), array(400.0)) >>> stats.gamma.moment(2,4,loc = 10, scale = 10) Traceback (most recent call last): File "", line 1, in ? 
stats.gamma.moment(2,4,loc = 10, scale = 10) TypeError: moment() got an unexpected keyword argument 'loc' check:stats agrees with random sample: >>> rvs=stats.gamma.rvs(4,loc = 10, scale = 10,size=10000) >>> rvs.mean() 50.01717741991262 >>> rvs.var() 400.12936828905185 example: discrete distribution ------------------------------ stats works correctly with or without loc,scale parameters: >>> stats.poisson.stats(4,loc = 10, scale = 10) (array(14.0), array(4.0)) >>> stats.poisson(4,loc = 10, scale = 10).stats() (array(14.0), array(4.0)) >>> rvs=stats.poisson.rvs(4,loc = 10, scale = 10, size=10000) >>> rvs.mean() 13.9985 >>> rvs.var() 3.9602977499998575 >>> stats.poisson.stats(4) (array(4.0), array(4.0)) >>> stats.poisson(4).stats() (array(4.0), array(4.0)) moment in frozen distribution accepts the presence of loc and scale, but ignores them the following are not for the log,scale transformed random variable >>> stats.poisson(4,loc = 10, scale = 10).moment(1) #wrong result 4 >>> stats.poisson(4,loc = 10, scale = 10).moment(2) #wrong result 4 >>> stats.poisson(4,loc = 10, scale = 10).moment(3) #wrong result 4.0 >>> stats.poisson(4,loc = 10, scale = 10).moment(4) #wrong result 52.0 >>> stats.poisson.moment(1,4,loc = 10, scale = 10) #wrong result 4 instead the results are for the untransformed variable >>> stats.poisson(4).moment(1) 4 >>> stats.poisson(4).moment(2) 4 >>> stats.poisson(4).moment(3) 4.0 >>> stats.poisson(4).moment(4) 52.0 just to check result >>> rvs0=stats.poisson.rvs(4,size=10000000) >>> ((rvs0-4)**4).mean() 51.991537899999997 adding loc, scale to moment of already frozen distribution raises exception, which at least is not misleading >>> stats.poisson(4).moment(1,loc = 10, scale = 10) Traceback (most recent call last): File "", line 1, in ? stats.poisson(4).moment(1,loc = 10, scale = 10) TypeError: moment() got an unexpected keyword argument 'loc' Josef From ryanlists at gmail.com Fri Sep 12 17:18:21 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 12 Sep 2008 16:18:21 -0500 Subject: [SciPy-user] butterworth filter In-Reply-To: <48C67C16.9030701@free.fr> References: <48C67C16.9030701@free.fr> Message-ID: So, attached is the plot I get from scipy. It is a high pass filter. It seems reasonable. What does the curve look like from Matlab. Ryan On Tue, Sep 9, 2008 at 8:37 AM, cyril giraudon wrote: > Hi, > > I use scipy 0.6.0 and i try to reproduce the plot of the matlab butter > function web documentation (first response for a google request "matlab > butter example"). > > The matlab code is : > > [z,p,k] = butter(9,300/500,'high'); > [sos,g] = zp2sos(z,p,k); % Convert to SOS form > Hd = dfilt.df2tsos(sos,g); % Create a dfilt object > h = fvtool(Hd); % Plot magnitude response > set(h,'Analysis','freq') % Display frequency response > > > In scipy, I write : > > from scipy.signal import butter, freqz > > from pylab import show, grid, log, plot > > b, a = butter(9, 300./500., 'high') > > fi = freqz(b, a) > > plot(fi[0], 20*log(abs(fi[1]))) > > grid() > > show() > > > > Why are the two filters not the same ? > > > Thanks a lot, > > Cyril. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: butter_scipy.png Type: image/png Size: 33022 bytes Desc: not available URL: From cournape at gmail.com Sat Sep 13 01:35:14 2008 From: cournape at gmail.com (David Cournapeau) Date: Sat, 13 Sep 2008 14:35:14 +0900 Subject: [SciPy-user] scipy.test fails: clapack module is empty (Bryan Keith) In-Reply-To: <4528.64.78.232.178.1221168838.squirrel@ideotrope.org> References: <48C6AC36.6030808@meduni-graz.at> <4528.64.78.232.178.1221168838.squirrel@ideotrope.org> Message-ID: <5b8d13220809122235w1a5ecc7do57552adbdb9f5f91@mail.gmail.com> On Fri, Sep 12, 2008 at 6:33 AM, Bryan Keith wrote: > Bernardo, > > I have no idea what to do about this error. I was hoping someone on this > list might be able to help... It is likely to be an error with blas/lapack/atlas as built in the repository. Some other people use suse I believe, maybe they will be able to help. cheers, David From f.braennstroem at gmx.de Sat Sep 13 09:39:23 2008 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Sat, 13 Sep 2008 15:39:23 +0200 Subject: [SciPy-user] scale arrays with interpolation Message-ID: Hi, in fluid dynamics one often plots velocity distribution scaled by some reference to take a look at self-similarity. E.g. for free jets one can extract the velocity across the plot, which gives two different arrays; an x_array and an y_array, where the x_array corresponds to the position and the y_array to the velocity at this position. So one would get two arrays of the same size: x_array=[ 1,2,3,4,5,6,7,8,9,10] y_array=[ 0,1,1,2,5,2,1,1,1,1] For the visualization of self-similar behavior one scales the y_array with the local maximum (in this case 5) and the x_array with the position, where the velocity is the half of the maximum. This means one has to calculate the half velocity (5/2=2.5) and has to interpolate the position of this value to get the position (in this case it is somewhere between 4 and 5). This half-width position would be used to scale the x_array. At the end one gets: x_array_scaled= [1/half_width_position, 2/half_with_position,...,10/half_width_position] y_array_scaled= [0/local_max,1/local_max,...,1/local_max] Does anyone have a suggestion how to do this? The major problem is how to get the 'half_width_position'. Would be nice, if anyone has an idea! Greetings! 
Fabian From alex.liberzon at gmail.com Sat Sep 13 16:35:21 2008 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Sat, 13 Sep 2008 22:35:21 +0200 Subject: [SciPy-user] scale arrays with interpolation Message-ID: <48CC2409.1010501@gmail.com> Hi Fabian Fortunately, I'm from the same field so I think I understand what you're talking about :-) If I translate your wish is:: a) find the location and the value of the maximum of the y_array => local_max b) half_width is the x_array value at the position of the maximum of y_array If this is true, then you'd better use numpy arrays and not lists: # starting from your lists: x_array = asarray(x_array).astype('f') # array of floats y_array = asarray(y_array).astype('f') half_width = x_array[argmax(y_array)] scaled_x_array = x_array/half_width scaled_y_array = y_array/max(y_array) plot(scaled_x_array,scaled_y_array,'o') Hope it helps, Alex From f.braennstroem at gmx.de Sun Sep 14 05:29:04 2008 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Sun, 14 Sep 2008 11:29:04 +0200 Subject: [SciPy-user] scale arrays with interpolation References: <48CC2409.1010501@gmail.com> Message-ID: Hi Alex, * Alex Liberzon wrote: > Hi Fabian > > Fortunately, I'm from the same field so I think I understand what you're > talking about :-) I hope it was not too confusing... > If I translate your wish is:: > a) find the location and the value of the maximum of the y_array => > local_max > b) half_width is the x_array value at the position of the maximum of y_array > > If this is true, then you'd better use numpy arrays and not lists: > # starting from your lists: > x_array = asarray(x_array).astype('f') # array of floats > y_array = asarray(y_array).astype('f') > > half_width = x_array[argmax(y_array)] > > scaled_x_array = x_array/half_width > scaled_y_array = y_array/max(y_array) > > plot(scaled_x_array,scaled_y_array,'o') Thanks! This is a good way with a lot of elements, but I think, if one does this with just a few elements in the numpy array, one has to do some kind of interpolation, to find the 'exact' half_width!? Regards! Fabian From robince at gmail.com Sun Sep 14 05:38:32 2008 From: robince at gmail.com (Robin) Date: Sun, 14 Sep 2008 10:38:32 +0100 Subject: [SciPy-user] scale arrays with interpolation In-Reply-To: References: <48CC2409.1010501@gmail.com> Message-ID: On Sun, Sep 14, 2008 at 10:29 AM, Fabian Braennstroem wrote: > Thanks! This is a good way with a lot of elements, but I > think, if one does this with just a few elements in the > numpy array, one has to do some kind of interpolation, to > find the 'exact' > half_width!? You can find interpolation functions at numpy.interp and in the scipy.interpolate module. Probably scipy.interpolate.interp1d is what you need, but I'm not sure of the difference between that and numpy.interp. Robin From simon.palmer at gmail.com Sun Sep 14 06:50:45 2008 From: simon.palmer at gmail.com (SimonPalmer) Date: Sun, 14 Sep 2008 03:50:45 -0700 (PDT) Subject: [SciPy-user] step size using optimize.fmin Message-ID: Hi, I have a weird and lumpy N-D function that I am trying to minimize using optimize.fmin. The problem I am having is that the step size that fmin uses is fixed and not quite big enough to move the function value sufficiently far to examine a different "state", so the function terminates (cleanly) presuming it is on a flat surface. 
I have been looking through the code and I can't see a way of adjusting
the step size, which seems to be set by magic numbers in the code:

    nonzdelt = 0.05
    zdelt = 0.00025

I guess what I am looking for is one of:

a) a way to set these to be a slightly larger value
b) a callback to be able to adjust them if the convergence criteria are
   met but nothing has changed (i.e. a flat surface)
c) a different minimisation algorithm which has momentum

I am also contemplating

d) change my function so that it is somewhat more continuous in nature
e) hack a local copy of fmin

but I would rather not do either unless I have to.

Anyone have any recommendations? Am I reading fmin right?

tia
Simon

From f.braennstroem at gmx.de  Sun Sep 14 06:55:21 2008
From: f.braennstroem at gmx.de (Fabian Braennstroem)
Date: Sun, 14 Sep 2008 12:55:21 +0200
Subject: [SciPy-user] scale arrays with interpolation
References: <48CC2409.1010501@gmail.com>
Message-ID: 

Hi Robin,

* Robin wrote:
> On Sun, Sep 14, 2008 at 10:29 AM, Fabian Braennstroem
> wrote:
>
>> Thanks! This is a good way with a lot of elements, but I
>> think, if one does this with just a few elements in the
>> numpy array, one has to do some kind of interpolation, to
>> find the 'exact'
>> half_width!?
>
> You can find interpolation functions at numpy.interp and in the
> scipy.interpolate module.
> Probably scipy.interpolate.interp1d is what you need, but I'm not
> sure of the difference between that and numpy.interp.

Thanks, this should work!

Greetings!
Fabian

From pav at iki.fi  Sun Sep 14 07:34:18 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 14 Sep 2008 11:34:18 +0000 (UTC)
Subject: [SciPy-user] step size using optimize.fmin
References: 
Message-ID: 

Sun, 14 Sep 2008 03:50:45 -0700, SimonPalmer wrote:
> Hi, I have a weird and lumpy N-D function that I am trying to minimize
> using optimize.fmin. The problem I am having is that the step size that
> fmin uses is fixed and not quite big enough to move the function value
> sufficiently far to examine a different "state", so the function
> terminates (cleanly) presuming it is on a flat surface.

Silly thought: maybe you could scale your function so that its
characteristic length scale becomes 1?
    x_scale = 1000
    f_scale = 1

    def scaled_func(x):
        # work in units where the characteristic length is 1
        return func(x * x_scale) * f_scale

    xopt = sp.optimize.fmin(scaled_func, x0 / x_scale)
    xopt *= x_scale

-- 
Pauli Virtanen

From josef.pktd at gmail.com  Sun Sep 14 09:52:27 2008
From: josef.pktd at gmail.com (joep)
Date: Sun, 14 Sep 2008 06:52:27 -0700 (PDT)
Subject: [SciPy-user] ipython install: egg not python 2.4 compatible
Message-ID: 

Sorry for wrong group: I'm not subscribed to ipython-user:

ipython-0.9-py2.4.egg uses features from python 2.5:

error messages with easy_install::

C:\Programs\Python24\Scripts\easy_install-script.py -U C:\Josef\work-oth\sort\pypi\ipython-0.9-py2.4.egg
Processing ipython-0.9-py2.4.egg
creating c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg
Extracting ipython-0.9-py2.4.egg to c:\programs\python24\lib\site-packages
  File "c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg\IPython\config\config.py", line 49
    with raw(self):
        ^
SyntaxError: invalid syntax
  File "c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg\IPython\frontend\linefrontendbase.py", line 192
    finally:
        ^
SyntaxError: invalid syntax
  File "c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg\IPython\frontend\prefilterfrontend.py", line 207
    finally:
        ^
SyntaxError: invalid syntax
SyntaxError: ('future feature with_statement is not defined',)
  File "c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg\IPython\kernel\tests\test_contexts.py", line 28
    with parallel as pr:
        ^
SyntaxError: invalid syntax
  File "c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg\share\doc\ipython\examples\kernel\nwmerge.py", line 48
    toadd = (key(item), i, item, itr) if key else (item, i, itr)
        ^
SyntaxError: invalid syntax
Adding ipython 0.9 to easy-install.pth file
Installing iptest-script.py script to C:\Programs\Python24\Scripts
Installing iptest.exe script to C:\Programs\Python24\Scripts
Installing ipythonx-script.py script to C:\Programs\Python24\Scripts
Installing ipythonx.exe script to C:\Programs\Python24\Scripts
Installing ipcluster-script.py script to C:\Programs\Python24\Scripts
Installing ipcluster.exe script to C:\Programs\Python24\Scripts
Installing ipython-script.py script to C:\Programs\Python24\Scripts
Installing ipython.exe script to C:\Programs\Python24\Scripts
Installing pycolor-script.py script to C:\Programs\Python24\Scripts
Installing pycolor.exe script to C:\Programs\Python24\Scripts
Installing ipcontroller-script.py script to C:\Programs\Python24\Scripts
Installing ipcontroller.exe script to C:\Programs\Python24\Scripts
Installing ipengine-script.py script to C:\Programs\Python24\Scripts
Installing ipengine.exe script to C:\Programs\Python24\Scripts
Installed c:\programs\python24\lib\site-packages\ipython-0.9-py2.4.egg
Processing dependencies for ipython==0.9
Finished processing dependencies for ipython==0.9

From emmanuelle.gouillart at normalesup.org  Sun Sep 14 10:02:17 2008
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sun, 14 Sep 2008 16:02:17 +0200
Subject: [SciPy-user] 2-D interpolation of irregularly spaced data
Message-ID: <20080914140217.GA917@phare.normalesup.org>

Hello,

I have an irregular 2-D mesh, and 1-D data measured at the
vertices of the mesh (the mesh is finer where the data vary more
rapidly), and I need to interpolate the data at other (also irregularly
spaced) points. To do so, I use the delaunay scikit and its
NNInterpolator which can take an irregular mesh.
The problem is that I cannot call the interpolator with irregularly
spaced points, so that my code is running very slowly now. Here is a
minimal example of what I do now (with regular grids and few points for
clarity):

***
import scikits.delaunay as d

def evolve(positions, mesh, values):
    tri = d.Triangulation(mesh[0], mesh[1])
    interpolator = d.NNInterpolator(tri, values)
    return array([interpolator(x,y) for (x,y) in positions.T]).ravel()

#Mesh
X, Y = mgrid[-1:1:20j, -1:1:20j] #regular grid for clarity
X = X.flatten()
Y = Y.flatten()
mesh = array([X,Y])

#Values
values = Y**2

#Positions
positions = mgrid[-0.5:0.5:20j, -0.5:0.5:20j]
positions = array([positions[0].flatten(), positions[1].flatten()])

new_values = evolve(positions, mesh, values)
***

Any hints about how I could accelerate the interpolation? (Usually, I
work with meshes of size ~1.e4 and positions of size ~1.e6).

Any help will be very welcome!

Cheers,

Emmanuelle

From gael.varoquaux at normalesup.org  Sun Sep 14 13:15:55 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 14 Sep 2008 19:15:55 +0200
Subject: [SciPy-user] ipython install: egg not python 2.4 compatible
In-Reply-To: 
References: 
Message-ID: <20080914171555.GC12842@phare.normalesup.org>

On Sun, Sep 14, 2008 at 06:52:27AM -0700, joep wrote:
> Sorry for wrong group: I'm not subscribed to ipython-user:

This has been reported on the ipython mailing list. There will be a
bug-fix release to ipython to sort this out.

Gaël

From chiefmurph at comcast.net  Sun Sep 14 21:44:25 2008
From: chiefmurph at comcast.net (Dan Murphy)
Date: Sun, 14 Sep 2008 18:44:25 -0700
Subject: [SciPy-user] Gaussian quadrature error
Message-ID: <0F8253EA348F49F6A3C3A0AEFE044866@GatewayLaptop>

I am trying out the integrate.quadrature function on the function
f(x)=e**x to the left of the y-axis. If the lower bound is not too
negative, I get a reasonable answer, but if the lower bound is too
negative, I get 0.0 as the value of the integral. Here is the code:

from scipy import *

def f(x):
    return e**x

integrate.quadrature(f,-10.0,0.0)    # answer is (0.999954600065, 3.14148596026e-010)

but

integrate.quadrature(f,-1000.0,0.0)  # yields (8.35116510531e-090, 8.35116510531e-090)

Note that 'val' and 'err' are equal. Is this a bug in quadrature?

Thanks.

Dan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Mon Sep 15 02:31:41 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 15 Sep 2008 01:31:41 -0500
Subject: [SciPy-user] 2-D interpolation of irregularly spaced data
In-Reply-To: <20080914140217.GA917@phare.normalesup.org>
References: <20080914140217.GA917@phare.normalesup.org>
Message-ID: <3d375d730809142331o65039239w86b23f5b881af61e@mail.gmail.com>

On Sun, Sep 14, 2008 at 09:02, Emmanuelle Gouillart wrote:
> Hello,
>
> I have an irregular 2-D mesh, and 1-D data measured at the
> vertices of the mesh (the mesh is finer where the data vary more
> rapidly), and I need to interpolate the data at other (also irregularly
> spaced) points. To do so, I use the delaunay scikit and its
> NNInterpolator which can take an irregular mesh. The problem is that I
> cannot call the interpolator with irregularly spaced points, so that my
> code is running very slowly now.
> Here is a minimal example of what I do
> now (with regular grids and few points for clarity):
>
> ***
> import scikits.delaunay as d
>
> def evolve(positions, mesh, values):
>     tri = d.Triangulation(mesh[0], mesh[1])
>     interpolator = d.NNInterpolator(tri, values)
>     return array([interpolator(x,y) for (x,y) in positions.T]).ravel()

NNInterpolator.__call__() can take arrays, not just scalars. For the
greatest efficiency, try to make sure adjacent points are close to
each other.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From emmanuelle.gouillart at normalesup.org  Mon Sep 15 07:17:31 2008
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Mon, 15 Sep 2008 13:17:31 +0200 (CEST)
Subject: [SciPy-user] 2-D interpolation of irregularly spaced data
In-Reply-To: <3d375d730809142331o65039239w86b23f5b881af61e@mail.gmail.com>
References: <20080914140217.GA917@phare.normalesup.org>
	<3d375d730809142331o65039239w86b23f5b881af61e@mail.gmail.com>
Message-ID: <46468.195.68.31.231.1221477451.squirrel@www.normalesup.org>

Thank you, it works really fast with arrays! I don't know why I was
convinced NNInterpolator.__call__() could only take arrays with regularly
spaced values...

Thanks a lot,

Emmanuelle

> NNInterpolator.__call__() can take arrays, not just scalars. For the
> greatest efficiency, try to make sure adjacent points are close to
> each other.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From cyril.giraudon at free.fr  Mon Sep 15 08:49:14 2008
From: cyril.giraudon at free.fr (cyril giraudon)
Date: Mon, 15 Sep 2008 14:49:14 +0200
Subject: [SciPy-user] butterworth filter
In-Reply-To: <48C67C16.9030701@free.fr>
References: <48C67C16.9030701@free.fr>
Message-ID: <48CE59CA.5070609@free.fr>

I didn't understand the abscissa differences:
in fact, x (normalized frequency) is divided by pi.

However, at low frequency, the scipy transfer function is very low.
At 0.2, scipy: -250 dB, matlab: -120 dB.

Is there any explanation ?

Thanks a lot,

Cyril.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: butter_2_28.gif
Type: image/gif
Size: 19261 bytes
Desc: not available
URL: 

From dmitrey.kroshko at scipy.org  Mon Sep 15 09:18:17 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 15 Sep 2008 16:18:17 +0300
Subject: [SciPy-user] [numerical optimization] OpenOpt release v 0.19
Message-ID: <48CE6099.1060409@scipy.org>

Hi all,

I'm glad to inform you about the new OpenOpt release: v 0.19.
http://openopt.blogspot.com/2008/09/openopt-release-019.html

OpenOpt is a free (license: BSD) numerical optimization package with
lots of 3rd party solvers connected (some written in C or Fortran),
some of its own, graphic output, and lots of other numerical
optimization "MUST HAVE" features.

For more details see OpenOpt's
- homepage: http://scipy.org/scipy/scikits/wiki/OpenOpt
- foreword: http://scipy.org/scipy/scikits/wiki/OOForeword

Regards, Dmitrey.
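To make the point about array arguments in the interpolation thread above
concrete, here is a minimal, untested sketch of the evolve() function with
the per-point Python loop replaced by one vectorized call. It assumes the
delaunay scikit as imported in the original example, and relies only on
NNInterpolator accepting coordinate arrays of matching shape, as stated
above:

import scikits.delaunay as d

def evolve(positions, mesh, values):
    # Triangulate the irregular mesh once.
    tri = d.Triangulation(mesh[0], mesh[1])
    interpolator = d.NNInterpolator(tri, values)
    # Evaluate at all target points in a single call; NNInterpolator
    # accepts coordinate arrays, not just scalar (x, y) pairs.
    return interpolator(positions[0], positions[1])

With positions of size ~1e6, this removes a million Python-level calls,
which is presumably where the speed-up reported above comes from.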
From robert.kern at gmail.com Mon Sep 15 11:10:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Sep 2008 10:10:16 -0500 Subject: [SciPy-user] 2-D interpolation of irregularly spaced data In-Reply-To: <46468.195.68.31.231.1221477451.squirrel@www.normalesup.org> References: <20080914140217.GA917@phare.normalesup.org> <3d375d730809142331o65039239w86b23f5b881af61e@mail.gmail.com> <46468.195.68.31.231.1221477451.squirrel@www.normalesup.org> Message-ID: <3d375d730809150810y6b5fdd46lca695a9fbcac4b52@mail.gmail.com> On Mon, Sep 15, 2008 at 06:17, Emmanuelle Gouillart wrote: > Thank you, it works really fast with arrays! I don't know why I was > convinced NNInterpolator.__call__() could only take arrays with regularly > spaced values... The docstring is out of date and suggests that only regular grids are supported. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anand.prabhakar.patil at gmail.com Mon Sep 15 11:55:36 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 15 Sep 2008 16:55:36 +0100 Subject: [SciPy-user] Breaking up an array of indices into contiguous/ strided chunks Message-ID: <2bc7a5a50809150855l1c86c659l9bd2e4528988c239@mail.gmail.com> Hi all, I have an array of indices that I want to use as a slice, but cvxopt sparse matrices don't seem to take fancy slices so I need to take the slice manually. If I just do it element-by-element in a for-loop it's extremely slow. I know that the array of indices is in ascending order, but that's all. Does anyone know an easy way to break it up into strided chunks that are as big as possible? Thanks, Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathieu.dubois at limsi.fr Mon Sep 15 12:13:36 2008 From: mathieu.dubois at limsi.fr (Mathieu Dubois) Date: Mon, 15 Sep 2008 18:13:36 +0200 Subject: [SciPy-user] How to give a name to a figure? Message-ID: <48CE89B0.5010901@limsi.fr> Hi, I'm a beginner in scipy and I have a small problem with figures. Let me explain. I have to plot complicated data so I have a lot of figures. I have set title and axes names. My problem is that the windows are titled with things like 'Figure 1', 'Figure 2' etc. Is it possible to set this name to something more understable? Apparently title() changes the name of the plots inside the figure. My goal is to save (with savefig()) them with a nice name say 'title.png' (where title would be replaced by the title of my figure). Any help would be appreciated. Thanks in advance, Mathieu From robert.kern at gmail.com Mon Sep 15 12:31:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Sep 2008 11:31:16 -0500 Subject: [SciPy-user] How to give a name to a figure? In-Reply-To: <48CE89B0.5010901@limsi.fr> References: <48CE89B0.5010901@limsi.fr> Message-ID: <3d375d730809150931p27192044kb5d85985a30ec955@mail.gmail.com> On Mon, Sep 15, 2008 at 11:13, Mathieu Dubois wrote: > Hi, > > I'm a beginner in scipy and I have a small problem with figures. Let me > explain. However, this is a question about matplotlib, not scipy. The mailing list you want is over here: https://lists.sourceforge.net/lists/listinfo/matplotlib-devel -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From anand.prabhakar.patil at gmail.com Mon Sep 15 13:03:02 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 15 Sep 2008 18:03:02 +0100 Subject: [SciPy-user] Breaking up an array of indices into contiguous/ strided chunks In-Reply-To: <2bc7a5a50809150855l1c86c659l9bd2e4528988c239@mail.gmail.com> References: <2bc7a5a50809150855l1c86c659l9bd2e4528988c239@mail.gmail.com> Message-ID: <2bc7a5a50809151003s2d08d735h896cd6a6db6ce856@mail.gmail.com> On Mon, Sep 15, 2008 at 4:55 PM, Anand Patil < anand.prabhakar.patil at gmail.com> wrote: > Hi all, > I have an array of indices that I want to use as a slice, but cvxopt sparse > matrices don't seem to take fancy slices so I need to take the slice > manually. If I just do it element-by-element in a for-loop it's extremely > slow. I know that the array of indices is in ascending order, but that's > all. Does anyone know an easy way to break it up into strided chunks that > are as big as possible? > I didn't ask the question very well. I'm trying to do x[i1] = y[i2], where i1 and i2 are different but they're both in ascending order and I have reason to believe that they're both regular for long intervals, though the strides will usually be different. I need to break them up into matching, strided chunks. The problem gets weird when, for example, i1 = [1,2,4,5,7,8,10...] i2 = [1,2,3,4,5,6,7...] that is when the stride of i1 'alternates'. In this case the copy could be done efficiently using s1 = [slice(1,?,3), slice(2,?,3)] s2 = [slice(1,?,2), slice(2,?,2)] for k in (1,2): x[s1[k]] = y[s2[k]] and it seems like some code to do this kind of pattern matching would already be available. Maybe numpy even does it? Thanks, Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Mon Sep 15 12:28:35 2008 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Mon, 15 Sep 2008 12:28:35 -0400 Subject: [SciPy-user] How to give a name to a figure? In-Reply-To: <48CE89B0.5010901@limsi.fr> References: <48CE89B0.5010901@limsi.fr> Message-ID: Mathieu, Its pretty easy. The statements below do what you I want: (of course, you have to import pylab) >>> path = 'c:\\documents and settings\\bnuttall\\desktop\\adair\\' >>> picfile = '%sR%s.png' % (path,str(wellid).rjust(7,'0')) ...snip... (a bunch of statements constructing the figure) >>> pylab.savefig(picfile) In my case, the wellid is a serial integer that uniquely identifies the data set. I have coded similar statements to alter the path name systematically so that related output gets grouped in folders. Brandon Nuttall -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Mathieu Dubois Sent: Monday, September 15, 2008 12:14 PM To: scipy-user at scipy.org Subject: [SciPy-user] How to give a name to a figure? Hi, I'm a beginner in scipy and I have a small problem with figures. Let me explain. I have to plot complicated data so I have a lot of figures. I have set title and axes names. My problem is that the windows are titled with things like 'Figure 1', 'Figure 2' etc. Is it possible to set this name to something more understable? Apparently title() changes the name of the plots inside the figure. My goal is to save (with savefig()) them with a nice name say 'title.png' (where title would be replaced by the title of my figure). Any help would be appreciated. 
Thanks in advance, Mathieu _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From ndbecker2 at gmail.com Mon Sep 15 13:50:31 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 15 Sep 2008 13:50:31 -0400 Subject: [SciPy-user] [numerical optimization] OpenOpt release v 0.19 References: <48CE6099.1060409@scipy.org> Message-ID: Thanks. One buglet: python setup.py build sudo python setup.py install leaves files in build dir owned by root: scikits.openopt.egg-info/ From f.drossaert at googlemail.com Mon Sep 15 13:45:26 2008 From: f.drossaert at googlemail.com (Francis Drossaert) Date: Mon, 15 Sep 2008 18:45:26 +0100 Subject: [SciPy-user] Help: building scipy+numpy locally on x86_64 running Centos Message-ID: <2c758b440809151045m4eb35129v6c1eecc0e0c1ed5d@mail.gmail.com> Hi everybody, I am trying to install python2.5/scipy/numpy/sympy/matplotlib locally, because of various reasons. I am not root. At home I am using Ubuntu and I am root and everything works as it should. So far I managed to make python2.5, sympy and ipython to run fine on my computer at work, but I am struggling to make numpy+scipy since I am trying to install it locally. The problem is likely the lapack+atlas build. I have followed the build by hand instructions on http://www.scipy.org/Installing_SciPy/Linux. I am not getting any errors during the build and install. However after installing I tried to import numpy in python and received to following error: ImportError: /users/francisd/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: zgesdd_ Googling this error, it seems that the error is caused by having a wrong lapack version. I am running Centos 5 (basically Red Hat Enterprise Linux 5) on a x86_64 machine. Does anybody know how I can get the right version, by changing some flags, or whatever? FYI I have added the -m64 flag for the lapack build, but no change. Cheers, Francis From robert.kern at gmail.com Mon Sep 15 13:55:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Sep 2008 12:55:38 -0500 Subject: [SciPy-user] [numerical optimization] OpenOpt release v 0.19 In-Reply-To: References: <48CE6099.1060409@scipy.org> Message-ID: <3d375d730809151055u2a6ee1b4k87090a3381392d9f@mail.gmail.com> On Mon, Sep 15, 2008 at 12:50, Neal Becker wrote: > Thanks. One buglet: > > python setup.py build > sudo python setup.py install > > leaves files in build dir owned by root: > scikits.openopt.egg-info/ That's not OpenOpt's fault. If you wish to avoid it, do $ python setup.py build egg_info $ sudo python setup.py install -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From george.dahl at gmail.com Mon Sep 15 14:01:26 2008 From: george.dahl at gmail.com (George Dahl) Date: Mon, 15 Sep 2008 14:01:26 -0400 Subject: [SciPy-user] trouble building scipy on mac os 10.5 intel In-Reply-To: <6b2e0e10809151059o359268acm1760fa62f42cb2a3@mail.gmail.com> References: <6b2e0e10809070018m1dbfcc6esbe13ac5e92e22e82@mail.gmail.com> <6b2e0e10809151059o359268acm1760fa62f42cb2a3@mail.gmail.com> Message-ID: <6b2e0e10809151101k433af21bv44f3609a3e92b8e3@mail.gmail.com> Hi everyone. I get the following result when I run python setup.py build for scipy 0.6.0. I have numpy and it works in python2.5, which is the version of python I want to use scipy from. 
I have gcc 4.0.1 and gfortran 4.2.1. I have pasted some of the output of python setup.py build below. I don't really know what I am doing, but I would really like to get scipy working and I would appreciate any help anyone can give me! I looked around in the archives with google a bit, but nothing seemed to deal with my situation exactly, hopefully I didn't miss anything. - George scipy-0.6.0$ python setup.py build . . . bnrm2,resid,info = dstoptest2(r,b,bnrm2,tol,info) Constructing wrapper function "cstoptest2"... bnrm2,resid,info = cstoptest2(r,b,bnrm2,tol,info) Constructing wrapper function "zstoptest2"... bnrm2,resid,info = zstoptest2(r,b,bnrm2,tol,info) Wrote C/API module "_iterative" to file "build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linalg/iterative/_iterativemodule.c" adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources creating build/src.macosx-10.3-fat-2.5/scipy/linsolve creating build/src.macosx-10.3-fat-2.5/scipy/linsolve/umfpack adding 'scipy/linsolve/umfpack/umfpack.i' to sources. swig: scipy/linsolve/umfpack/umfpack.i swig -python -o build/src.macosx-10.3-fat-2.5/scipy/linsolve/umfpack/_umfpack_wrap.c -outdir build/src.macosx-10.3-fat-2.5/scipy/linsolve/umfpack scipy/linsolve/umfpack/umfpack.i scipy/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' scipy/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_solve.h' scipy/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_defaults.h' scipy/linsolve/umfpack/umfpack.i:195: Error: Unable to find 'umfpack_triplet_to_col.h' scipy/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_col_to_triplet.h' scipy/linsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_transpose.h' scipy/linsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_scale.h' scipy/linsolve/umfpack/umfpack.i:200: Error: Unable to find 'umfpack_report_symbolic.h' scipy/linsolve/umfpack/umfpack.i:201: Error: Unable to find 'umfpack_report_numeric.h' scipy/linsolve/umfpack/umfpack.i:202: Error: Unable to find 'umfpack_report_info.h' scipy/linsolve/umfpack/umfpack.i:203: Error: Unable to find 'umfpack_report_control.h' scipy/linsolve/umfpack/umfpack.i:215: Error: Unable to find 'umfpack_symbolic.h' scipy/linsolve/umfpack/umfpack.i:216: Error: Unable to find 'umfpack_numeric.h' scipy/linsolve/umfpack/umfpack.i:225: Error: Unable to find 'umfpack_free_symbolic.h' scipy/linsolve/umfpack/umfpack.i:226: Error: Unable to find 'umfpack_free_numeric.h' scipy/linsolve/umfpack/umfpack.i:248: Error: Unable to find 'umfpack_get_lunz.h' scipy/linsolve/umfpack/umfpack.i:272: Error: Unable to find 'umfpack_get_numeric.h' error: command 'swig' failed with exit status 1 From mathieu.dubois at limsi.fr Mon Sep 15 14:38:22 2008 From: mathieu.dubois at limsi.fr (Mathieu Dubois) Date: Mon, 15 Sep 2008 20:38:22 +0200 Subject: [SciPy-user] How to give a name to a figure? In-Reply-To: References: <48CE89B0.5010901@limsi.fr> Message-ID: <48CEAB9E.5060403@limsi.fr> Hi, Thanks for your help but my problem is not to give a name to the file but to give a name to the figure itself. 
The reason why I want this is that I have all my figure handles in a list and I want to do a loop like: for fig in fig_list savefig(fig, fig.title, format='png') The question was asked to the matplotlib users list some months ago: http://sourceforge.net/mailarchive/message.php?msg_id=a7f1ef730709101012o20abd37aj116e100d9b105d52%40mail.gmail.com As Robert Kern pointed out this question is matplotlib related and I don't want to pollute scipy list so I will continue discussion on Matplotlib-users list. Thanks again for your help, Mathieu Nuttall, Brandon C wrote: > Mathieu, > > Its pretty easy. The statements below do what you I want: > > (of course, you have to import pylab) > > >>>> path = 'c:\\documents and settings\\bnuttall\\desktop\\adair\\' >>>> picfile = '%sR%s.png' % (path,str(wellid).rjust(7,'0')) >>>> > > ...snip... (a bunch of statements constructing the figure) > > >>>> pylab.savefig(picfile) >>>> > > In my case, the wellid is a serial integer that uniquely identifies the data set. I have coded similar statements to alter the path name systematically so that related output gets grouped in folders. > > Brandon Nuttall > > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Mathieu Dubois > Sent: Monday, September 15, 2008 12:14 PM > To: scipy-user at scipy.org > Subject: [SciPy-user] How to give a name to a figure? > > Hi, > > I'm a beginner in scipy and I have a small problem with figures. Let me > explain. > > I have to plot complicated data so I have a lot of figures. I have set > title and axes names. > > My problem is that the windows are titled with things like 'Figure 1', > 'Figure 2' etc. Is it possible to set this name to something more > understable? Apparently title() changes the name of the plots inside the > figure. > > My goal is to save (with savefig()) them with a nice name say > 'title.png' (where title would be replaced by the title of my figure). > > Any help would be appreciated. > > Thanks in advance, > Mathieu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ndbecker2 at gmail.com Mon Sep 15 14:54:52 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 15 Sep 2008 14:54:52 -0400 Subject: [SciPy-user] [numerical optimization] OpenOpt release v 0.19 References: <48CE6099.1060409@scipy.org> <3d375d730809151055u2a6ee1b4k87090a3381392d9f@mail.gmail.com> Message-ID: Robert Kern wrote: > On Mon, Sep 15, 2008 at 12:50, Neal Becker wrote: >> Thanks. One buglet: >> >> python setup.py build >> sudo python setup.py install >> >> leaves files in build dir owned by root: >> scikits.openopt.egg-info/ > > That's not OpenOpt's fault. If you wish to avoid it, do > > $ python setup.py build egg_info > $ sudo python setup.py install > OK, thanks. Please add to install instructions. From spmcinerney at hotmail.com Mon Sep 15 18:45:11 2008 From: spmcinerney at hotmail.com (Stephen McInerney) Date: Mon, 15 Sep 2008 15:45:11 -0700 Subject: [SciPy-user] SciPy 08 conference videos? 
In-Reply-To: References: Message-ID: Enthought folks, the slides are up at http://conference.scipy.org/ but can you also put up the videos (unedited is fine, for starters) Thanks, Stephen _________________________________________________________________ Get more out of the Web. Learn 10 hidden secrets of Windows Live. http://windowslive.com/connect/post/jamiethomson.spaces.live.com-Blog-cns!550F681DAD532637!5295.entry?ocid=TXT_TAGLM_WL_domore_092008 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lanceboyle at qwest.net Tue Sep 16 01:00:40 2008 From: lanceboyle at qwest.net (Jerry) Date: Mon, 15 Sep 2008 22:00:40 -0700 Subject: [SciPy-user] butterworth filter In-Reply-To: References: <48C67C16.9030701@free.fr> Message-ID: Just an innocent bystander here, but that plot looks wrong. Could you please re-submit it with decibels on the vertical axis against log frequency? That is the normal way of viewing filter frequency responses. Jerry On Sep 12, 2008, at 2:18 PM, Ryan Krauss wrote: > So, attached is the plot I get from scipy. It is a high pass > filter. It seems reasonable. What does the curve look like from > Matlab. > > Ryan > > On Tue, Sep 9, 2008 at 8:37 AM, cyril giraudon > wrote: > Hi, > > I use scipy 0.6.0 and i try to reproduce the plot of the matlab butter > function web documentation (first response for a google request > "matlab > butter example"). > > The matlab code is : > > [z,p,k] = butter(9,300/500,'high'); > [sos,g] = zp2sos(z,p,k); % Convert to SOS form > Hd = dfilt.df2tsos(sos,g); % Create a dfilt object > h = fvtool(Hd); % Plot magnitude response > set(h,'Analysis','freq') % Display frequency response > > > In scipy, I write : > > from scipy.signal import butter, freqz > > from pylab import show, grid, log, plot > > b, a = butter(9, 300./500., 'high') > > fi = freqz(b, a) > > plot(fi[0], 20*log(abs(fi[1]))) > > grid() > > show() > > > > Why are the two filters not the same ? > > > Thanks a lot, > > Cyril. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From loniedavid at gmail.com Tue Sep 16 13:06:32 2008 From: loniedavid at gmail.com (David Lonie) Date: Tue, 16 Sep 2008 12:06:32 -0500 Subject: [SciPy-user] Error in nonlinear least squares fit analysis Message-ID: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com> I got some help here earlier about finding a function to fit a function to some exponentially increasing data. I have a few questions: a) fmin vs. leastsq: The method I wrote ended up using the fmin() function to minimize the error vector. What is the difference between fmin and leastsq? Is there an advantage to using either? b) Error in the parameters: I'd like to know the precision that the fitted parameters are good to. Basically, I'd like to know that b = 3.456 +/- 0.003 instead of just b = 3.456. leastsq can return a Jacobian matrix -- will pulling out the diagonal elements of this matrix give me the results I want? Or is there a better way? 
TIA,
Dave

From s.mientki at ru.nl  Tue Sep 16 13:25:58 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Tue, 16 Sep 2008 19:25:58 +0200
Subject: [SciPy-user] butterworth filter
In-Reply-To: <48CE59CA.5070609@free.fr>
References: <48C67C16.9030701@free.fr> <48CE59CA.5070609@free.fr>
Message-ID: <48CFEC26.40505@ru.nl>

cyril giraudon wrote:
> I didn't understand the abscissa differences:
> in fact, x (normalized frequency) is divided by pi.
>
> However, at low frequency, the scipy transfer function is very low.
> At 0.2, scipy: -250 dB, matlab: -120 dB.

As an engineer I would say -120 dB equals -250 dB, so the graphs are equal.
I really would like to see your lab, if you're interested in -250 dB ;-)
In Matlab you probably never get to know how the frequency response is
calculated, although presumably it's just a copy of "Numerical Recipes".
At -120 dB, the rounding results might become relevant.
It would be interesting to compare the calculated filter coefficients,
or the zero/pole values, and maybe even put the Matlab coefficients into
Scipy and vice versa.

cheers,
Stef

> Is there any explanation ?
>
> Thanks a lot,
>
> Cyril.
>
>
> ------------------------------------------------------------------------
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From nwagner at iam.uni-stuttgart.de  Tue Sep 16 15:04:44 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 16 Sep 2008 21:04:44 +0200
Subject: [SciPy-user] Rombouts algorithm for calculating the
	characteristic polynomial
Message-ID: 

Hi all,

Has anyone implemented Rombouts algorithm in Python?
Details are available at
http://arxiv.org/pdf/math/9804133v1

Cheers,
Nils

From david at ar.media.kyoto-u.ac.jp  Tue Sep 16 22:53:57 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 17 Sep 2008 11:53:57 +0900
Subject: [SciPy-user] butterworth filter
In-Reply-To: <48CFEC26.40505@ru.nl>
References: <48C67C16.9030701@free.fr> <48CE59CA.5070609@free.fr>
	<48CFEC26.40505@ru.nl>
Message-ID: <48D07145.2010605@ar.media.kyoto-u.ac.jp>

Stef Mientki wrote:
> As an engineer I would say -120 dB equals -250 dB, so the graphs are equal.
> I really would like to see your lab, if you're interested in -250 dB ;-)
> In Matlab you probably never get to know how the frequency response is
> calculated,

That's not entirely accurate. A lot of matlab functions are implemented
in matlab itself. That's the case for butterworth AFAICS (butterworth is
a big .m file, suggesting the implementation itself is in matlab). I
avoid reading the actual code itself, though, for licensing issues,
because I work a lot on scipy/numpy. But if you don't intend to write
this code for numpy/scipy, I guess you can read it.

> although presumably it's just a copy of "Numerical Recipes".

IIRC, there is no signal processing code in Numerical Recipes. For
Butterworth, it should not be too difficult to check, though, because it
is a direct implementation of the analog domain filters. That's
certainly one of the easiest IIR filters to implement for anyone
familiar with digital signal processing (bilinear transform and co to go
into the digital domain).

Looking at the code, for butterworth, I think the scipy implementation
may be a bit too naive in some corner cases. IIRC, you would be better
off implementing an N-order filter by cascading 2nd-order filters.
In a former life, I was really into synthesizers and digital sound
processing, and I know that it mattered in fixed point and even floating
point (32 bits) implementations, because for synthesizers, you like the
corner cases (to make boom boom and make your ears bleed in clubs).

cheers,

David

From john.grosspietsch at gmail.com  Wed Sep 17 00:14:44 2008
From: john.grosspietsch at gmail.com (John Grosspietsch)
Date: Wed, 17 Sep 2008 04:14:44 +0000 (UTC)
Subject: [SciPy-user] butterworth filter
References: <48C67C16.9030701@free.fr> <48CE59CA.5070609@free.fr>
Message-ID: 

cyril giraudon free.fr> writes:
>
> I didn't understand the abscissa differences:
> in fact, x (normalized frequency) is divided by pi.
>
> However, at low frequency, the scipy transfer function is very low.
> At 0.2, scipy: -250 dB, matlab: -120 dB.
>
> Is there any explanation ?
>
> Thanks a lot,
>
> Cyril.
>

You need log10() not log() to calculate the filter response in decibels.
The response is a little distorted near the corner frequency because, I
assume, of the bilinear transformation used to map analog filter poles
to digital poles.

John

From s.mientki at ru.nl  Wed Sep 17 03:59:26 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 17 Sep 2008 09:59:26 +0200
Subject: [SciPy-user] butterworth filter
In-Reply-To: <48D07145.2010605@ar.media.kyoto-u.ac.jp>
References: <48C67C16.9030701@free.fr> <48CE59CA.5070609@free.fr>
	<48CFEC26.40505@ru.nl> <48D07145.2010605@ar.media.kyoto-u.ac.jp>
Message-ID: <48D0B8DE.5010006@ru.nl>

thanks David, for correcting me.
Sorry, I think I generalized a problem I once had with Matlab too much.
But anyway such a statement reveals some of the essential details ;-)

cheers,
Stef

David Cournapeau wrote:
> Stef Mientki wrote:
>
>> As an engineer I would say -120 dB equals -250 dB, so the graphs are equal.
>> I really would like to see your lab, if you're interested in -250 dB ;-)
>> In Matlab you probably never get to know how the frequency response is
>> calculated,
>>
>
> That's not entirely accurate. A lot of matlab functions are implemented
> in matlab itself. That's the case for butterworth AFAICS (butterworth is
> a big .m file, suggesting the implementation itself is in matlab). I
> avoid reading the actual code itself, though, for licensing issues,
> because I work a lot on scipy/numpy. But if you don't intend to write
> this code for numpy/scipy, I guess you can read it.
>
>
>> although presumably it's just a copy of "Numerical Recipes".
>>
>
> IIRC, there is no signal processing code in Numerical Recipes. For
> Butterworth, it should not be too difficult to check, though, because it
> is a direct implementation of the analog domain filters. That's
> certainly one of the easiest IIR filters to implement for anyone
> familiar with digital signal processing (bilinear transform and co to go
> into the digital domain).
>
> Looking at the code, for butterworth, I think the scipy implementation
> may be a bit too naive in some corner cases. IIRC, you would be better
> off implementing an N-order filter by cascading 2nd-order filters. In a
> former life, I was really into synthesizers and digital sound
> processing, and I know that it mattered in fixed point and even floating
> point (32 bits) implementations, because for synthesizers, you like the
> corner cases (to make boom boom and make your ears bleed in clubs).
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

The Radboud University Nijmegen Medical Centre is listed in the
Commercial Register of the Chamber of Commerce under file number
41055629.

From c.j.lee at tnw.utwente.nl  Wed Sep 17 04:20:03 2008
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Wed, 17 Sep 2008 10:20:03 +0200
Subject: [SciPy-user] singular value decomposition
Message-ID: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>

Hi All,

I have code that needs to repeatedly perform an svd on a matrix (2304,
2304) shape. It seems to take forever to do (I haven't timed it, but
approximately two hours passed on the first trial without completion)
and I am now running a second to make sure nothing went wrong.

Is this because I am not linked to an optimized BLAS/LAPACK library or
something else?

I have access to mkl. Is it possible to link an already installed scipy
to mkl, or would I have to reinstall?

Cheers
Chris

***************************************************
Chris Lee
Laser Physics and Nonlinear Optics Group
MESA+ Research Institute for Nanotechnology
University of Twente
Phone: ++31 (0)53 489 3968
fax: ++31 (0)53 489 1102
***************************************************

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Wed Sep 17 04:25:31 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 17 Sep 2008 03:25:31 -0500
Subject: [SciPy-user] singular value decomposition
In-Reply-To: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
References: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
Message-ID: <3d375d730809170125y3d936573sa02de53a9b258cea@mail.gmail.com>

On Wed, Sep 17, 2008 at 03:20, Chris Lee wrote:
> Hi All,
> I have code that needs to repeatedly perform an svd on a matrix (2304, 2304)
> shape. It seems to take forever to do (I haven't timed it, but approximately
> two hours passed on the first trial without completion) and I am now
> running a second to make sure nothing went wrong.

Possibly. The same sized problem takes 40s for me using OS X's builtin
ATLAS.

> I have access to mkl. Is it possible to link an already installed scipy to
> mkl, or would I have to reinstall?

You would have to reinstall.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp  Wed Sep 17 04:12:13 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 17 Sep 2008 17:12:13 +0900
Subject: [SciPy-user] singular value decomposition
In-Reply-To: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
References: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
Message-ID: <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>

Chris Lee wrote:
> Hi All,
>
> I have code that needs to repeatedly perform an svd on a matrix (2304,
> 2304) shape. It seems to take forever to do (I haven't timed it, but
> approximately two hours passed on the first trial without completion)
> and I am now running a second to make sure nothing went wrong.
From david at ar.media.kyoto-u.ac.jp  Wed Sep 17 04:12:13 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 17 Sep 2008 17:12:13 +0900
Subject: [SciPy-user] singular value decomposition
In-Reply-To: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
References: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl>
Message-ID: <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>

Chris Lee wrote:
> Hi All,
>
> I have code that needs to repeatedly perform an svd on a matrix (2304,
> 2304) shape. It seems to take forever to do (I haven't timed it, but
> approximately two hours passed on the first trial (without completion)
> and I am now running a second to make sure nothing went wrong.
>
> Is this because I am not linked to an optimized BLAS/LAPACK library or
> something else?

Have you tested your scipy ATLAS? 2 hours sounds awfully long, and this
may be due to some bugs caused during a wrong build (for example, errors
in the marshalling of arguments at the C/Fortran interface).

cheers,

David

From c.j.lee at tnw.utwente.nl  Wed Sep 17 04:32:52 2008
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Wed, 17 Sep 2008 10:32:52 +0200
Subject: [SciPy-user] singular value decomposition
In-Reply-To: <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>
References: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl> <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>
Message-ID: <4CDB6AD0-C886-4264-878E-4E55C9F3CFEB@tnw.utwente.nl>

Thanks David and Robert, at least I am now sure something is wrong.

I will see what libraries I am linked to and see what can be done to fix
the problem.

Cheers
Chris

On Sep 17, 2008, at 10:12 AM, David Cournapeau wrote:
> Chris Lee wrote:
>> Hi All,
>>
>> I have code that needs to repeatedly perform an svd on a matrix (2304,
>> 2304) shape. It seems to take forever to do (I haven't timed it, but
>> approximately two hours passed on the first trial (without completion)
>> and I am now running a second to make sure nothing went wrong.
>>
>> Is this because I am not linked to an optimized BLAS/LAPACK library or
>> something else?
>
> Have you tested your scipy ATLAS? 2 hours sounds awfully long, and this
> may be due to some bugs caused during a wrong build (for example, errors
> in the marshalling of arguments at the C/Fortran interface).
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

***************************************************
Chris Lee
Laser Physics and Nonlinear Optics Group
MESA+ Research Institute for Nanotechnology
University of Twente
Phone: ++31 (0)53 489 3968
fax: ++31 (0)53 489 1102
***************************************************

From c.j.lee at tnw.utwente.nl  Wed Sep 17 05:33:39 2008
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Wed, 17 Sep 2008 11:33:39 +0200
Subject: [SciPy-user] singular value decomposition
In-Reply-To: <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>
References: <9468F765-C9A0-41C6-9632-B8F8B3A88ADE@tnw.utwente.nl> <48D0BBDD.7040605@ar.media.kyoto-u.ac.jp>
Message-ID: 

OK, it turns out that if I use a randomly generated matrix I get about a
one-minute svd, which corresponds nicely to the value Robert found.

After a bit of sleuthing, I discovered that the zero-padding used to make
the matrix dimensions a power of two was the culprit. With the padding
removed, everything suddenly started working again.

Now I just have to remove all the important bugs (like why I get
nonphysical results and nonconvergence) :)

Thanks for all your help

Cheers
Chris

On Sep 17, 2008, at 10:12 AM, David Cournapeau wrote:
> Chris Lee wrote:
>> Hi All,
>>
>> I have code that needs to repeatedly perform an svd on a matrix (2304,
>> 2304) shape. It seems to take forever to do (I haven't timed it, but
>> approximately two hours passed on the first trial (without completion)
>> and I am now running a second to make sure nothing went wrong.
>>
>> Is this because I am not linked to an optimized BLAS/LAPACK library or
>> something else?
>
> Have you tested your scipy ATLAS? 2 hours sounds awfully long, and this
> may be due to some bugs caused during a wrong build (for example, errors
> in the marshalling of arguments at the C/Fortran interface).
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

***************************************************
Chris Lee
Laser Physics and Nonlinear Optics Group
MESA+ Research Institute for Nanotechnology
University of Twente
Phone: ++31 (0)53 489 3968
fax: ++31 (0)53 489 1102
***************************************************
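For anyone who wants the same sanity check, here is a minimal timing
sketch; the 2304x2304 random matrix simply mirrors the shape discussed in
this thread:

import time
import numpy as np
from scipy import linalg

a = np.random.rand(2304, 2304)
t0 = time.time()
u, s, vt = linalg.svd(a)
print "svd of %s took %.1f s" % (str(a.shape), time.time() - t0)

With a properly linked optimized library this should finish in something
like the 40 seconds to one minute reported above, not hours.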
From gael.varoquaux at normalesup.org  Wed Sep 17 09:57:14 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 17 Sep 2008 15:57:14 +0200
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com>
References: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com>
Message-ID: <20080917135714.GF752@phare.normalesup.org>

On Tue, Sep 16, 2008 at 12:06:32PM -0500, David Lonie wrote:
> a) fmin vs. leastsq:
> The method I wrote ended up using the fmin() function to minimize the
> error vector. What is the difference between fmin and leastsq? Is
> there an advantage to using either?

AFAIK, fmin is a scalar optimizer, whereas leastsq is a vector optimizer,
using an optimized algorithm to minimize the norm of a vector (
http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm ). Leastsq
will thus be more efficient on this problem set.

I am not terribly knowledgeable in this area, so I would appreciate being
corrected if I am talking nonsense.

Gaël

From bsouthey at gmail.com  Wed Sep 17 11:04:49 2008
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 17 Sep 2008 10:04:49 -0500
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <20080917135714.GF752@phare.normalesup.org>
References: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com> <20080917135714.GF752@phare.normalesup.org>
Message-ID: <48D11C91.8010506@gmail.com>

Gael Varoquaux wrote:
> On Tue, Sep 16, 2008 at 12:06:32PM -0500, David Lonie wrote:
>> a) fmin vs. leastsq:
>> The method I wrote ended up using the fmin() function to minimize the
>> error vector. What is the difference between fmin and leastsq? Is
>> there an advantage to using either?
>
> AFAIK, fmin is a scalar optimizer, whereas leastsq is a vector optimizer,
> using an optimized algorithm to minimize the norm of a vector (
> http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm ). Leastsq
> will thus be more efficient on this problem set.
>
> I am not terribly knowledgeable in this area, so I would appreciate being
> corrected if I am talking nonsense.
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

To complete this, fmin uses the 'downhill simplex algorithm' (the
Nelder-Mead simplex algorithm,
http://en.wikipedia.org/wiki/Nelder-Mead_method). The big difference is
that simplex doesn't use derivatives, but Levenberg-Marquardt requires
first-order derivatives. So obviously you cannot use Levenberg-Marquardt
if you don't have the derivatives or these are very hard or slow to
compute. Levenberg-Marquardt is likely to converge in fewer iterations
than simplex or similar methods (though each iteration is more expensive,
because it has to compute derivatives). However, simplex methods are more
likely than other algorithms to find local optima rather than global
optima, so you do need to check for that. Apart from that, you probably
are not missing much.

It has been ages since I dealt with non-linear problems, so I cannot
really answer the second part of your question. Basically you need the
variance of the estimate, but that very much depends on the type of
problem you have.

Bruce
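To make the fmin/leastsq distinction concrete, here is a minimal sketch
fitting a two-parameter exponential both ways; the model, starting guess
and synthetic data are made up for illustration:

import numpy as np
from scipy import optimize

def residuals(p, x, y):
    # leastsq wants the vector of residuals
    return y - p[0]*np.exp(p[1]*x)

def cost(p, x, y):
    # fmin wants a single scalar to minimize
    return ((y - p[0]*np.exp(p[1]*x))**2).sum()

x = np.linspace(0, 4, 50)
y = 2.5*np.exp(-1.3*x) + 0.05*np.random.randn(50)

p_ls, ier = optimize.leastsq(residuals, [1.0, -1.0], args=(x, y))
p_fm = optimize.fmin(cost, [1.0, -1.0], args=(x, y))
print p_ls, p_fm

Both should land near (2.5, -1.3); leastsq typically needs far fewer
function evaluations because it exploits the least-squares structure of
the problem.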
From peridot.faceted at gmail.com  Wed Sep 17 12:00:09 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 17 Sep 2008 12:00:09 -0400
Subject: [SciPy-user] Gaussian quadrature error
In-Reply-To: <0F8253EA348F49F6A3C3A0AEFE044866@GatewayLaptop>
References: <0F8253EA348F49F6A3C3A0AEFE044866@GatewayLaptop>
Message-ID: 

2008/9/14 Dan Murphy :
> I am trying out the integrate.quadrature function on the function f(x)=e**x
> to the left of the y-axis. If the lower bound is not too negative, I get a
> reasonable answer, but if the lower bound is too negative, I get 0.0 as the
> value of the integral. Here is the code:
>
> from scipy import *
>
> def f(x):
>     return e**x
>
> integrate.quadrature(f,-10.0,0.0)  # answer is (0.999954600065,
> 3.14148596026e-010)
>
> but
>
> integrate.quadrature(f,-1000.0,0.0)  # yields (8.35116510531e-090,
> 8.35116510531e-090)
>
> Note that 'val' and 'err' are equal. Is this a bug in quadrature?

No, unfortunately. It is a limitation of numerical quadrature in general.
Specifically, no matter how adaptive the algorithm is, it can only base
its result on a finite number of sampled points of the function. If these
points are all zero to numerical accuracy, then the answer must be zero.
So if you imagine those samples are taken at the midpoints of 10 intervals
evenly spaced between -1000 and 0, then the rightmost one returns a value
of e**(-50), which is as close to zero as makes no nevermind. You might be
all right if this were an adaptive scheme and if it used the endpoints,
since the right endpoint is guaranteed to give you a value of one. But not
using the endpoints is a design feature of some numerical integration
schemes.

The take-home lesson is that you can't just use numerical quadrature
systems blindly; you have to know the features and limitations of the
particular one you're using. Gaussian quadrature can be very accurate for
smooth functions, but it has a very specific domain of applicability.
scipy.integrate.quad is a little more general-purpose by intent (and
necessarily a little less efficient when Gaussian quadrature will do) but
it can be tricked too.

A more specific take-home lesson is to try to normalize your problem as
much as possible, so that all quantities you feed your integrator are of
order unity. Yes, it's a pain to have to handle scale factors yourself,
particularly in the normal case when you're solving a family of related
problems. But you'll get much more reliable performance.

Anne
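Anne's normalization advice in practice, as a small sketch: rather than
asking the integrator to resolve e**x over [-1000, 0], cut the domain
down to where the integrand is of order unity. The -30 cutoff below is an
arbitrary illustrative choice; exp(-30) is about 1e-13, so the discarded
tail is far below the requested tolerance:

import numpy as np
from scipy import integrate

def f(x):
    return np.exp(x)

print integrate.quadrature(f, -1000.0, 0.0)  # the samples miss the peak: ~0
print integrate.quadrature(f, -30.0, 0.0)    # ~1.0, as expected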
From rob.clewley at gmail.com  Wed Sep 17 13:42:50 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 17 Sep 2008 13:42:50 -0400
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <48D11C91.8010506@gmail.com>
References: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com> <20080917135714.GF752@phare.normalesup.org> <48D11C91.8010506@gmail.com>
Message-ID: 

> It has been ages since I dealt with non-linear problems, so I cannot
> really answer the second part of your question. Basically you need the
> variance of the estimate, but that very much depends on the type of
> problem you have.
>
> Bruce

I'm not a great expert on this theory, so some of my explanation might
get a bit shaky... But, the variance comes from the Hessian, which is the
matrix of second derivatives (i.e., the derivative of the Jacobian). This
is nasty to compute using forward differencing (loss of significance in
the numerics) if you don't have an explicit Jacobian, but a close
approximation is usually multiply(transpose(J),J) (this basically comes
down to Taylor's theorem, IIRC). The more locally quadratic your residual
function is, the larger these values will be, and the smaller the
variance. Small values mean the function is flatter locally, so you
aren't getting as tight a fit. Different entries in that matrix are
basically measuring the curvature in different cross-sections of the
space, I think.

-Rob

From robert.kern at gmail.com  Wed Sep 17 16:08:08 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 17 Sep 2008 15:08:08 -0500
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com>
References: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com>
Message-ID: <3d375d730809171308i12f45decv5e6d161c221d88d3@mail.gmail.com>

On Tue, Sep 16, 2008 at 12:06, David Lonie  wrote:
> b) Error in the parameters:
> I'd like to know the precision that the fitted parameters are good to.
> Basically, I'd like to know that b = 3.456 +/- 0.003 instead of just b
> = 3.456.
> leastsq can return a Jacobian matrix -- will pulling out the diagonal
> elements of this matrix give me the results I want? Or is there a
> better way?

leastsq(full_output=True) will also return an estimate of the
covariance matrix in recent (probably > 0.6) SVN versions of scipy.

You might also want to look at scipy.odr.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From loniedavid at gmail.com  Wed Sep 17 16:55:49 2008
From: loniedavid at gmail.com (David Lonie)
Date: Wed, 17 Sep 2008 15:55:49 -0500
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <3d375d730809171308i12f45decv5e6d161c221d88d3@mail.gmail.com>
References: <199bcede0809161006xd29ef45lf732790bd1eb7bf3@mail.gmail.com> <3d375d730809171308i12f45decv5e6d161c221d88d3@mail.gmail.com>
Message-ID: <199bcede0809171355r37892954n7487612bac17b3f2@mail.gmail.com>

> leastsq(full_output=True) will also return an estimate of the
> covariance matrix in recent (probably > 0.6) SVN versions of scipy.

This sounds like what I'm looking for -- just to make sure I'm looking
at this right, the diagonal terms of the covariance matrix represent
the variance in the parameters? I.e., for the error in an exponential
fit y = coef[0]*e**(coef[1]*x), the error for coef[1] would be the sqrt
of leastsq's cov_x[1][1], and I can return the best value of the
parameter as:

coef[1] +/- sqrt(cov_x[1][1]) ?

This may be a simple question, but I'm having great difficulty finding
a source that explains the meaning of all of these various matrices :)

Thanks for the help,

Dave
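For reference, a sketch of the usual recipe; treat the scaling by the
residual variance as the standard convention rather than something
leastsq applies for you, and note the model and data below are made up
for illustration:

import numpy as np
from scipy import optimize

def residuals(p, x, y):
    return y - p[0]*np.exp(p[1]*x)

x = np.linspace(0, 4, 50)
y = 2.5*np.exp(-1.3*x) + 0.05*np.random.randn(50)

p, cov_x, info, mesg, ier = optimize.leastsq(residuals, [1.0, -1.0],
                                             args=(x, y), full_output=True)
# cov_x is a raw curvature-based estimate; scale it by the residual
# variance before reading off parameter uncertainties
s_sq = (residuals(p, x, y)**2).sum() / (len(x) - len(p))
p_err = np.sqrt(np.diag(cov_x) * s_sq)
print p, p_err    # e.g. coef[1] +/- p_err[1]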
From wagh.utkarsh at gmail.com  Thu Sep 18 03:31:44 2008
From: wagh.utkarsh at gmail.com (utkarsh wagh)
Date: Thu, 18 Sep 2008 13:01:44 +0530
Subject: [SciPy-user] Regarding Genetic algorithm
Message-ID: <7869a7600809180031j282e0221xc21a67b85d540cd5@mail.gmail.com>

Hi,

Can anyone help me out in using the Genetic algorithm (ga) subpackage in
Scipy? If possible, can anyone send me some sample code?

Thank you,

-- 
Utkarsh Wagh
IIT Delhi
Contact no: +91 9990646707

From Roger.Fearick at uct.ac.za  Thu Sep 18 04:44:54 2008
From: Roger.Fearick at uct.ac.za (Roger Fearick)
Date: Thu, 18 Sep 2008 10:44:54 +0200
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
Message-ID: <48D23126.8130.009D.0@uct.ac.za>

Hi all,

I'm an occasional user of scipy but this is one of the things I have
looked at, so I thought I'd better delurk and comment.

> This sounds like what I'm looking for -- just to make sure I'm looking
> at this right, the diagonal terms of the covariance matrix represent
> the variance in the parameters? I.e., for the error in an exponential
> fit y = coef[0]*e**(coef[1]*x), the error for coef[1] would be the sqrt
> of leastsq's cov_x[1][1], and I can return the best value of the
> parameter as:
> coef[1] +/- sqrt(cov_x[1][1]) ?
> This may be a simple question, but I'm having great difficulty finding
> a source that explains the meaning of all of these various matrices :)

It's not a simple question, actually. The gory details are in something
like Numerical Recipes -- in the latest (3rd) edition, Sect 15.6 (15.6.5).

There is a scipy example at
www.phy.uct.ac.za/courses/python/examples/moreexamples.html
(Non-linear least squares fit) which shows how to process the output of
leastsq.

Roger.

-- 
Roger Fearick
Department of Physics
University of Cape Town

From lfriedri at imtek.de  Thu Sep 18 05:05:53 2008
From: lfriedri at imtek.de (Lars Friedrich)
Date: Thu, 18 Sep 2008 11:05:53 +0200
Subject: [SciPy-user] optimize.fmin_bfgs, vectorial epsilon
Message-ID: <48D219F1.6030408@imtek.de>

Hello,

the docstring of scipy.optimize.fmin_bfgs says that the epsilon
parameter can be an ndarray:

epsilon : int or ndarray

I assume that it should be possible to pass different epsilons for all
the different elements of x. But if I do so, this causes an error, since
in optimize.approx_fprime there are the lines:

ei[k] = epsilon
grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon

that assume that epsilon is scalar. Is this intended? Why does the
docstring say 'int'; shouldn't this be 'float'?

Besides that: I get 'Warning: Desired error not necessarily achieved due
to precision loss' on my optimization problem. Could this be due to the
fact that my parameters differ by several orders of magnitude? Do I have
to rescale my error function to cope with that? Or should a
'per-parameter-epsilon' be enough?

Thanks!

Lars

-- 
Dipl.-Ing. Lars Friedrich

Bio- and Nano-Photonics
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax:   +49-761-203-7537
room:  01 088
email: lfriedri at imtek.de

From ajvogel at tuks.co.za  Thu Sep 18 08:38:55 2008
From: ajvogel at tuks.co.za (Adolph J. Vogel)
Date: Thu, 18 Sep 2008 14:38:55 +0200
Subject: [SciPy-user] Stability of scipy svn
In-Reply-To: <48D219F1.6030408@imtek.de>
References: <48D219F1.6030408@imtek.de>
Message-ID: <200809181438.56732.ajvogel@tuks.co.za>

Hi,

I want to use some of the new sparse matrix features in the development
version of scipy. My question is simple: how stable is the current scipy
svn? Are there any show-stopper bugs?

regards, Adolph

-- 
______________________________________________
Adolph J.
Vogel
ajvogel at tuks.co.za
BEng(Mech)(Pta)

072 592 5836
012 420 4762

Department Mechanical and Aeronautical Engineering
University of Pretoria
South Africa
_______________________________________________

From wnbell at gmail.com  Thu Sep 18 09:57:12 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 18 Sep 2008 09:57:12 -0400
Subject: [SciPy-user] Stability of scipy svn
In-Reply-To: <200809181438.56732.ajvogel@tuks.co.za>
References: <48D219F1.6030408@imtek.de> <200809181438.56732.ajvogel@tuks.co.za>
Message-ID: 

On Thu, Sep 18, 2008 at 8:38 AM, Adolph J. Vogel  wrote:
>
> I want to use some of the new sparse matrix features in the development
> version of scipy. My question is simple: how stable is the current scipy
> svn? Are there any show-stopper bugs?

I'm not aware of any major bugs in scipy.sparse. If you find one, let us
know and we'll fix it immediately.

-- 
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From loniedavid at gmail.com  Thu Sep 18 10:42:25 2008
From: loniedavid at gmail.com (David Lonie)
Date: Thu, 18 Sep 2008 09:42:25 -0500
Subject: [SciPy-user] Error in nonlinear least squares fit analysis
In-Reply-To: <48D23126.8130.009D.0@uct.ac.za>
References: <48D23126.8130.009D.0@uct.ac.za>
Message-ID: <199bcede0809180742i4c48a179o6c7dae3af445a853@mail.gmail.com>

> There is a scipy example at
> www.phy.uct.ac.za/courses/python/examples/moreexamples.html
> (Non-linear least squares fit) which shows how to process the output of
> leastsq.

This is exactly what I was looking for! Thanks for the help :)

Dave

From nwagner at iam.uni-stuttgart.de  Thu Sep 18 12:19:25 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 18 Sep 2008 18:19:25 +0200
Subject: [SciPy-user] Regarding Genetic algorithm
In-Reply-To: <7869a7600809180031j282e0221xc21a67b85d540cd5@mail.gmail.com>
References: <7869a7600809180031j282e0221xc21a67b85d540cd5@mail.gmail.com>
Message-ID: 

On Thu, 18 Sep 2008 13:01:44 +0530
 "utkarsh wagh"  wrote:
> Hi,
>
> Can anyone help me out in using the Genetic algorithm (ga) subpackage
> in Scipy? If possible, can anyone send me some sample code?
>
> Thank you,
>
> --
> Utkarsh Wagh
> IIT Delhi
> Contact no: +91 9990646707

See

http://projects.scipy.org/scipy/scipy/ticket/484

Nils

From kianatoufighi at hotmail.com  Thu Sep 18 12:53:00 2008
From: kianatoufighi at hotmail.com (Kiana Toufighi)
Date: Thu, 18 Sep 2008 12:53:00 -0400
Subject: [SciPy-user] having problems with fortran compiler on Mac OS X 10.5.2
Message-ID: 

Hi all.

I am trying to install SciPy on my PowerMac with OS X 10.5.2 (Leopard) by
following the instructions here
http://www.scipy.org/Installing_SciPy/Mac_OS_X and I'm getting errors
with respect to the fortran compiler.

I have personally installed the newest version of the fortran compiler
v. 4.2.3 and I obviously have a C compiler.

gcc --version:
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)
Copyright (C) 2005 Free Software Foundation, Inc.

gfortran --version:
GNU Fortran (GCC) 4.2.3
Copyright (C) 2007 Free Software Foundation, Inc.

Now I've tried building scipy by the following commands and here are the
errors I get.

python setup.py build
Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright
(C) 2007 Free Software Foundation, Inc.

python setup.py config_fc --fcompiler=gfortran build
error: don't know how to compile Fortran code on platform 'posix' with
'gfortran' compiler.
Supported compilers are:
compaq,none,absoft,intel,f,gnu,sun,nag,vast,ibm,gnu95,intelv,g95,intele,pg,lahey,compaqv,mips,hpux,intelev,intelem)

So then I try one of these listed supported compilers like gnu95 but it's
not installed!!

I wish Apple would just include a fortran compiler with their dev tools!
Any help/advice is appreciated.

Thanks,
Kiana

From torii at gmx.com  Thu Sep 18 14:00:33 2008
From: torii at gmx.com (torii at gmx.com)
Date: Thu, 18 Sep 2008 14:00:33 -0400
Subject: [SciPy-user] Width of the gaussian in stats.kde.gaussian_kde ?
Message-ID: <20080918181527.297260@gmx.com>

Dear scipy users,

I used the kernel-density estimate to make some 2D density plots
(stats.kde.gaussian_kde) and I was very happy with the result.

But when I do the same exercise over a much larger area, I completely
lose the details I had with my previous analysis... If I understand
correctly, this is related to the adaptation of the elementary gaussian
to the scale of my dataset, which now includes large areas with almost
no data.

Question: is there a way to control the width of the gaussian in
stats.kde.gaussian_kde? Or should I switch to another technique?

Note: I am both a newbie in python and stats...

Anthony

From robert.kern at gmail.com  Thu Sep 18 15:18:44 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 18 Sep 2008 14:18:44 -0500
Subject: [SciPy-user] having problems with fortran compiler on Mac OS X 10.5.2
In-Reply-To: 
References: 
Message-ID: <3d375d730809181218sf81bd04i5d34803b571f8aa8@mail.gmail.com>

On Thu, Sep 18, 2008 at 11:53, Kiana Toufighi  wrote:
> Hi all.
>
> I am trying to install SciPy on my PowerMac with OS X 10.5.2 (Leopard) by
> following the instructions here
> http://www.scipy.org/Installing_SciPy/Mac_OS_X and I'm getting errors with
> respect to the fortran compiler.
>
> I have personally installed the newest version of the fortran compiler v.
> 4.2.3 and I obviously have a C compiler.
> gcc --version:
> i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)
> Copyright (C) 2005 Free Software Foundation, Inc.
>
> gfortran --version:
> GNU Fortran (GCC) 4.2.3
> Copyright (C) 2007 Free Software Foundation, Inc.
>
> Now I've tried building scipy by the following commands and here are the
> errors I get.
>
> python setup.py build
> Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C)
> 2007 Free Software Foundation, Inc.
>
> python setup.py config_fc --fcompiler=gfortran build
> error: don't know how to compile Fortran code on platform 'posix' with
> 'gfortran' compiler. Supported compilers are:
> compaq,none,absoft,intel,f,gnu,sun,nag,vast,ibm,gnu95,intelv,g95,intele,pg,lahey,compaqv,mips,hpux,intelev,intelem)
>
> So then I try one of these listed supported compilers like gnu95 but it's
> not installed!!
> I wish Apple would just include a fortran compiler with their dev tools!

--fcompiler=gnu95 actually refers to gfortran. Can you show us the output
you get when you try to use that?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From kianatoufighi at hotmail.com  Thu Sep 18 15:23:44 2008
From: kianatoufighi at hotmail.com (Kiana Toufighi)
Date: Thu, 18 Sep 2008 15:23:44 -0400
Subject: [SciPy-user] having problems with fortran compiler on Mac OS X 10.5.2
In-Reply-To: <3d375d730809181218sf81bd04i5d34803b571f8aa8@mail.gmail.com>
References: <3d375d730809181218sf81bd04i5d34803b571f8aa8@mail.gmail.com>
Message-ID: 

Hi Robert,

Thanks for the reply. When I try building with
python setup.py config_fc --fcompiler=gnu95 build
the last few lines of output are:

running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
Could not locate executable g77
Could not locate executable f77
Could not locate executable f95
customize Gnu95FCompiler
Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C)
2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY,
to the extent permitted by law.\nYou may redistribute copies of GNU
Fortran\nunder the terms of the GNU General Public License.\nFor more
information about these matters, see the file named COPYING\n'
customize Gnu95FCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C)
2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY,
to the extent permitted by law.\nYou may redistribute copies of GNU
Fortran\nunder the terms of the GNU General Public License.\nFor more
information about these matters, see the file named COPYING\n'
warning: build_ext: fcompiler=gnu95 is not available.

Thanks,
K.

From peridot.faceted at gmail.com  Thu Sep 18 15:41:40 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 18 Sep 2008 15:41:40 -0400
Subject: [SciPy-user] Width of the gaussian in stats.kde.gaussian_kde ?
In-Reply-To: <20080918181527.297260@gmx.com>
References: <20080918181527.297260@gmx.com>
Message-ID: 

2008/9/18  :
> I used the kernel-density estimate to make some 2D density plots
> (stats.kde.gaussian_kde) and I was very happy with the result.
>
> But when I do the same exercise over a much larger area, I completely
> lose the details I had with my previous analysis... If I understand
> correctly, this is related to the adaptation of the elementary gaussian
> to the scale of my dataset, which now includes large areas with almost
> no data.
>
> Question: is there a way to control the width of the gaussian in
> stats.kde.gaussian_kde? Or should I switch to another technique?
>
> Note: I am both a newbie in python and stats...

Kernel-density estimates approximate the distribution of your data by
placing a copy of the kernel at each data point. Scipy's gaussian_kde
uses multidimensional gaussians as the kernel. It provides various
features, including reasonably efficient evaluation, integration over
boxes and against gaussians and other gaussian KDEs, and most relevantly,
automatic selection of the covariance matrix of the kernel.

There are a number of different ways to do this automatic selection in
the statistical literature, and the one implemented in gaussian_kde is
appropriate for a unimodal distribution: it uses the covariance matrix of
your data, scaled by a factor depending on the dimensionality and number
of data points.
If, however, you have a data set consisting of several narrow
widely-separated peaks, this will give a needlessly blurry estimate.

A more powerful standard tool for choosing the variance in one dimension
is to use a "plug-in estimator". The idea is to try to choose the
variance that minimizes the estimated mean-squared error. The mean
squared error comes from two things: the randomness of the original
sampling, and the smoothing by the gaussians. If you choose a very broad
kernel, you smooth out almost all the noise, but you have lots of
mean-squared error because you've smoothed away any features the
distribution really had. On the other hand, if you use a very narrow
kernel, you haven't smoothed the distribution much at all, but now the
randomness of your points wrecks things. So you have to select some
optimal kernel. Ideally, you could run a numerical minimization by
evaluating the mean squared error for each trial variance. Unfortunately
this requires you to know the true distribution. That's where the
"plug-in estimator" comes in: you choose some crude way to estimate the
distribution, and use this crude estimate in your mean-squared error
calculations to get the best variance. You then use this best variance as
your kernel width.

What does this have to do with you? Well, unfortunately, plug-in
estimators are not implemented in scipy, probably in part because it's
difficult enough in one dimension, and a real horror in several. You
could try to implement it (after doing some reading in the statistical
literature) but I suspect that's not how you want to address the problem.

I suggest you choose a "representative" part of your data that consists
of a single peak, feed it into gaussian_kde, and extract the variance.
Then create a gaussian_kde for the whole data set using that same
variance. You can optionally fudge the covariance as necessary in
between.

The first part, extracting the covariance from a gaussian_kde instance,
should be easy: as soon as you've constructed the gaussian_kde it is
stored in the covariance attribute. The second part, building a
gaussian_kde instance with a set covariance, is going to require a little
hacking. Here's how I'd do it (untested):

import scipy.stats
from numpy import sqrt, pi
from scipy import linalg

class gaussian_kde_set_covariance(scipy.stats.gaussian_kde):
    def __init__(self, dataset, covariance):
        # fix the kernel covariance before the base class sets itself up
        self.covariance = covariance
        scipy.stats.gaussian_kde.__init__(self, dataset)
    def _compute_covariance(self):
        # skip the automatic bandwidth selection: keep self.covariance as
        # given, and compute only the derived matrices the estimator needs
        self.inv_cov = linalg.inv(self.covariance)
        self._norm_factor = sqrt(linalg.det(2*pi*self.covariance)) * self.n

This creates a derived class from gaussian_kde which does everything the
same way except for how it is constructed and how it computes its
covariance matrix (i.e. it doesn't, but it does compute the various other
matrices it needs).

Good luck,
Anne
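A short usage sketch for the class above, following Anne's suggestion of
borrowing the covariance from a single-peak subset; the arrays here are
hypothetical stand-ins for real data:

import numpy as np
import scipy.stats

peak = np.random.randn(2, 200)          # one "representative" narrow peak
full = np.hstack([peak, peak + 20.0])   # the full, multi-peak data set

cov = scipy.stats.gaussian_kde(peak).covariance
kde = gaussian_kde_set_covariance(full, cov)
print kde.evaluate(np.zeros((2, 1)))    # density at the origin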
From robert.kern at gmail.com  Thu Sep 18 16:11:22 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 18 Sep 2008 15:11:22 -0500
Subject: [SciPy-user] having problems with fortran compiler on Mac OS X 10.5.2
In-Reply-To: 
References: <3d375d730809181218sf81bd04i5d34803b571f8aa8@mail.gmail.com>
Message-ID: <3d375d730809181311n543469ecs4b4b8ef41d295f7a@mail.gmail.com>

On Thu, Sep 18, 2008 at 14:23, Kiana Toufighi  wrote:
> Hi Robert,
>
> Thanks for the reply. When I try building with
> python setup.py config_fc --fcompiler=gnu95 build
> the last few lines of output are:
>
> running build_clib
> customize UnixCCompiler
> customize UnixCCompiler using build_clib
> Could not locate executable g77
> Could not locate executable f77
> Could not locate executable f95
> customize Gnu95FCompiler
> Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C)
> 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY,
> to the extent permitted by law.\nYou may redistribute copies of GNU
> Fortran\nunder the terms of the GNU General Public License.\nFor more
> information about these matters, see the file named COPYING\n'

What version of numpy do you have? I use this version of gfortran on
10.5.2, too, with SVN numpy, but it looks like numpy 1.1.1 has the same
version-processing code. numpy 1.2.0 will be out very shortly, too.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From dwf at cs.toronto.edu  Thu Sep 18 17:23:50 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 18 Sep 2008 17:23:50 -0400
Subject: [SciPy-user] scipy build fails on Linux-x86-64 with custom ATLAS
Message-ID: 

Hi folks,

I am having an issue building scipy from SVN. I rolled my own ATLAS and
installed it under /usr/local/atlas, and passed the ATLAS environment
variable. I also told it to use gfortran as opposed to g77 as that's
what was used for ATLAS. The compilation error is as follows:

---------------
/usr/bin/gfortran -Wall -Wall -shared
build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o
-L/usr/local/atlas/lib -Lbuild/temp.linux-x86_64-2.5 -lodepack
-llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lgfortran
-o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so
/usr/bin/ld: /usr/local/atlas/lib/libptf77blas.a(dscal.o): relocation
R_X86_64_32 against `a local symbol' can not be used when making a
shared object; recompile with -fPIC
/usr/local/atlas/lib/libptf77blas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /usr/local/atlas/lib/libptf77blas.a(dscal.o): relocation
R_X86_64_32 against `a local symbol' can not be used when making a
shared object; recompile with -fPIC
/usr/local/atlas/lib/libptf77blas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/gfortran -Wall -Wall -shared
build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o
-L/usr/local/atlas/lib -Lbuild/temp.linux-x86_64-2.5 -lodepack
-llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lgfortran
-o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so" failed
with exit status 1
---------------

I notice it's complaining about -fPIC, except that when I configured
ATLAS, I specifically told it that every compiler should use -fPIC:

../configure -m 2400 -b 64 -D c -DPentiumCPS=2400 --prefix=/usr/local/atlas
--with-netlib-lapack=../../lapack-3.1.1/lapack_LINUX.a -Fa acg '-fPIC'

That last option, if I am reading the docs correctly, should take care
of it, yet SciPy's build still fails. Does anyone have any idea why?
Thanks,

David

From robince at gmail.com  Thu Sep 18 17:31:19 2008
From: robince at gmail.com (Robin)
Date: Thu, 18 Sep 2008 22:31:19 +0100
Subject: [SciPy-user] scipy build fails on Linux-x86-64 with custom ATLAS
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 18, 2008 at 10:23 PM, David Warde-Farley  wrote:
> ../configure -m 2400 -b 64 -D c -DPentiumCPS=2400 --prefix=/usr/local/atlas
> --with-netlib-lapack=../../lapack-3.1.1/lapack_LINUX.a -Fa acg '-fPIC'

I've always used -Fa alg '-fPIC' - this is also suggested on the wiki page:
http://scipy.org/Installing_SciPy/Linux

Perhaps this could make a difference? I think 'acg' covers only the C
compilers, while 'alg' includes the Fortran compiler as well - since the
error you're getting is coming from a Fortran library, I think it's
likely this could fix it.

Cheers

Robin

From bblais at bryant.edu  Thu Sep 18 17:33:35 2008
From: bblais at bryant.edu (Brian Blais)
Date: Thu, 18 Sep 2008 17:33:35 -0400
Subject: [SciPy-user] delays and odeint
Message-ID: 

Hello,

I am trying to do some dynamical systems modeling, specifically modeling
population dynamics. One piece of the model involves a delay, where you
have something like:

dy=x-delay(x,10)

where a population, y, has an amount x added to it and an amount of x,
from 10 time steps before, exiting. In this way the population, y, has
members adding to it that last in population y for 10 time steps and then
exit. Is there a good way of representing this in such a way that I can
still use odeint to solve the resulting diff-eqs? I could write an
Euler-method myself, and represent the delay as a list which I push from
one end and pop from the other, but that seems a little hacky. odeint
uses an adaptive step-size, so that solution wouldn't work. I wasn't sure
if there already existed a tool for helping with this sort of thing.

thanks,

Brian Blais

-- 
Brian Blais
bblais at bryant.edu
http://web.bryant.edu/~bblais

From dwf at cs.toronto.edu  Thu Sep 18 17:56:05 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 18 Sep 2008 17:56:05 -0400
Subject: [SciPy-user] scipy build fails on Linux-x86-64 with custom ATLAS
In-Reply-To: 
References: 
Message-ID: 

On 18-Sep-08, at 5:31 PM, Robin wrote:
> On Thu, Sep 18, 2008 at 10:23 PM, David Warde-Farley  wrote:
>> ../configure -m 2400 -b 64 -D c -DPentiumCPS=2400
>> --prefix=/usr/local/atlas
>> --with-netlib-lapack=../../lapack-3.1.1/lapack_LINUX.a -Fa acg '-fPIC'
>
> I've always used -Fa alg '-fPIC' - this is also suggested on the wiki page:
> http://scipy.org/Installing_SciPy/Linux
>
> Perhaps this could make a difference? I think 'acg' covers only the C
> compilers, while 'alg' includes the Fortran compiler as well - since the
> error you're getting is coming from a Fortran library, I think it's
> likely this could fix it.

I just figured that out myself. As usual, one character makes all the
difference. :(

That, and I didn't know such a wiki page existed. Oh well. Problem
solved.

Thanks for the quick reply,

David

From gdahl at cs.toronto.edu  Thu Sep 18 20:26:01 2008
From: gdahl at cs.toronto.edu (gdahl at cs.toronto.edu)
Date: Thu, 18 Sep 2008 20:26:01 -0400 (EDT)
Subject: [SciPy-user] can't build scipy on mac os 10.5.5 intel
Message-ID: <27496.67.212.30.212.1221783961.squirrel@webmail.cs.toronto.edu>

Hi everyone. I am not sure if my other email account is working, so I
apologize if this already hit the list.

I get the error below when I run python setup.py build with the latest
SVN version of scipy.
I have numpy (latest svn) and it works in python2.5, which is the version
of python I want to use scipy from. I have gcc 4.0.1 and gfortran 4.2.1.
This is the command I ran:

sudo python setup.py build_src build_clib --fcompiler=gnu95 build_ext
--fcompiler=gnu95 build

I have pasted some of the output of python setup.py build below. I don't
really know what I am doing, but I would really like to get scipy working
and I would appreciate any help anyone can give me! I looked around in
the archives with google a bit, but nothing seemed to deal with my
situation exactly; hopefully I didn't miss anything.

- George

$ sudo python setup.py build_src build_clib --fcompiler=gnu95 build_ext
--fcompiler=gnu95 build
.
.
.
building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources
adding 'scipy/sparse/linalg/dsolve/umfpack/umfpack.i' to sources.
swig: scipy/sparse/linalg/dsolve/umfpack/umfpack.i
swig -python -o build/src.macosx-10.3-i386-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c
-outdir build/src.macosx-10.3-i386-2.5/scipy/sparse/linalg/dsolve/umfpack
scipy/sparse/linalg/dsolve/umfpack/umfpack.i
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_solve.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_defaults.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:195: Error: Unable to find 'umfpack_triplet_to_col.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_col_to_triplet.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_transpose.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_scale.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:200: Error: Unable to find 'umfpack_report_symbolic.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:201: Error: Unable to find 'umfpack_report_numeric.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:202: Error: Unable to find 'umfpack_report_info.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:203: Error: Unable to find 'umfpack_report_control.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:215: Error: Unable to find 'umfpack_symbolic.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:216: Error: Unable to find 'umfpack_numeric.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:225: Error: Unable to find 'umfpack_free_symbolic.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:226: Error: Unable to find 'umfpack_free_numeric.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:248: Error: Unable to find 'umfpack_get_lunz.h'
scipy/sparse/linalg/dsolve/umfpack/umfpack.i:272: Error: Unable to find 'umfpack_get_numeric.h'
error: command 'swig' failed with exit status 1

From torii at gmx.com  Thu Sep 18 22:44:09 2008
From: torii at gmx.com (Anthony)
Date: Fri, 19 Sep 2008 02:44:09 +0000 (UTC)
Subject: [SciPy-user] Width of the gaussian in stats.kde.gaussian_kde ?
References: <20080918181527.297260@gmx.com>
Message-ID: 

Thank you so much for taking the time to explain all these things. Your
suggestion makes sense, but I am not sure I can implement it... I'll try
it though.

Cheers,

Anthony

From rob.clewley at gmail.com  Fri Sep 19 00:32:52 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 19 Sep 2008 00:32:52 -0400
Subject: [SciPy-user] delays and odeint
In-Reply-To: 
References: 
Message-ID: 

> I am trying to do some dynamical systems modeling, specifically modeling
> population dynamics.
> One piece of the model involves a delay, where you have something like:
>
> dy=x-delay(x,10)

If you just want to solve the problem, I actually recommend using the
XPPAUT program. It's great for that kind of thing, is easy to install,
and even has a graphical interface. Unfortunately I can't say my own
python dynamics package has support for delayed systems yet, although I
think PyDDE looks good if you must have a python solution.

-Rob
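For completeness, the "hacky" fixed-step approach Brian mentioned is only
a few lines. Here is a minimal Euler sketch with a history buffer for the
delayed term; the input signal, step size and delay are made up, and the
history before t=0 is simply taken to be zero:

import numpy as np

dt = 0.01
tau = 10.0                   # the delay
nlag = int(tau/dt)           # delay expressed in steps
t = np.arange(0, 100, dt)
x = np.sin(0.1*t)            # hypothetical inflow signal
y = np.zeros(len(t))
for i in range(1, len(t)):
    # dy/dt = x(t) - x(t - tau), with zero history before t = 0
    x_delayed = x[i-1-nlag] if i-1 >= nlag else 0.0
    y[i] = y[i-1] + dt*(x[i-1] - x_delayed)

A dedicated delay solver (PyDDE, or XPPAUT as suggested above) handles
the step-size control that this sketch gives up.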
From mnandris at blueyonder.co.uk  Fri Sep 19 05:58:50 2008
From: mnandris at blueyonder.co.uk (Michael)
Date: Fri, 19 Sep 2008 10:58:50 +0100
Subject: [SciPy-user] How to get a ppf for scipy.stats.beta
Message-ID: <1221818330.6174.2.camel@mik>

Hi list,

I am trying to use the percent point function to sample from the cdf for
a beta distribution.

beta.ppf() returns an array instead of x; also you cannot create a ppf
using the same idiom as beta.cdf(x,a,b,size=n).

How do I go about getting the ppf for a beta dist?

thanks in advance - much hairpulling on this

Michael

from scipy.stats import norm
from scipy.stats import beta
from scipy import linspace

print norm.cdf(1.2)             # 0.884930329778
print norm.ppf(0.884930329778)  # 1.2

a=2
b=7
n=10e3
x=linspace(0,1,n)

cdf=beta.cdf(x,a,b,size=n)
print cdf[2371]                 # 0.6

B=beta(x,a,b,size=n)

print B.cdf(2371)
print B.ppf(0.6)                # <-- should be 2371?

From massimo.sandal at unibo.it  Fri Sep 19 08:00:02 2008
From: massimo.sandal at unibo.it (massimo sandal)
Date: Fri, 19 Sep 2008 14:00:02 +0200
Subject: [SciPy-user] Regarding Genetic algorithm
In-Reply-To: 
References: <7869a7600809180031j282e0221xc21a67b85d540cd5@mail.gmail.com>
Message-ID: <48D39442.4080905@unibo.it>

Nils Wagner wrote:
> On Thu, 18 Sep 2008 13:01:44 +0530
>  "utkarsh wagh"  wrote:
>> Hi,
>>
>> Can anyone help me out in using the Genetic algorithm (ga) subpackage
>> in Scipy? If possible, can anyone send me some sample code?
>>
>> Thank you,
>>
>> --
>> Utkarsh Wagh
>> IIT Delhi
>> Contact no: +91 9990646707
>
> See
>
> http://projects.scipy.org/scipy/scipy/ticket/484

Sorry but I fail to understand how this page can help.

m.

-- 
Massimo Sandal, Ph.D.
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
web: http://www.biocfarm.unibo.it/samori/people/sandal.html
tel: +39-051-2094388
fax: +39-051-2094387

From david.huard at gmail.com  Fri Sep 19 08:52:31 2008
From: david.huard at gmail.com (David Huard)
Date: Fri, 19 Sep 2008 08:52:31 -0400
Subject: [SciPy-user] How to get a ppf for scipy.stats.beta
In-Reply-To: <1221818330.6174.2.camel@mik>
References: <1221818330.6174.2.camel@mik>
Message-ID: <91cf711d0809190552t2f6f530es3273fe5d2e805f7a@mail.gmail.com>

Hi Michael,

calling beta(a, b) will generate a "frozen distribution", that is one
whose parameters are fixed by you:

B = beta(a, b)   # note that you shouldn't pass x at this stage

B.pdf(x)
B.cdf(x)

Now if you want to sample from the Beta, simply use the rvs method:

B.rvs(1000)

You can also simply do

beta.rvs(a,b,size=1000)

HTH,

David

On Fri, Sep 19, 2008 at 5:58 AM, Michael  wrote:
> Hi list,
>
> I am trying to use the percent point function to sample from the cdf for
> a beta distribution.
>
> beta.ppf() returns an array instead of x; also you cannot create a ppf
> using the same idiom as beta.cdf(x,a,b,size=n).
>
> How do I go about getting the ppf for a beta dist?
>
> thanks in advance - much hairpulling on this
>
> Michael
>
> from scipy.stats import norm
> from scipy.stats import beta
> from scipy import linspace
>
> print norm.cdf(1.2)             # 0.884930329778
> print norm.ppf(0.884930329778)  # 1.2
>
> a=2
> b=7
> n=10e3
> x=linspace(0,1,n)
>
> cdf=beta.cdf(x,a,b,size=n)
> print cdf[2371]                 # 0.6
>
> B=beta(x,a,b,size=n)
>
> print B.cdf(2371)
> print B.ppf(0.6)                # <-- should be 2371?
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com  Fri Sep 19 09:40:31 2008
From: josef.pktd at gmail.com (joep)
Date: Fri, 19 Sep 2008 06:40:31 -0700 (PDT)
Subject: [SciPy-user] How to get a ppf for scipy.stats.beta
In-Reply-To: <91cf711d0809190552t2f6f530es3273fe5d2e805f7a@mail.gmail.com>
References: <1221818330.6174.2.camel@mik> <91cf711d0809190552t2f6f530es3273fe5d2e805f7a@mail.gmail.com>
Message-ID: <7925319f-150c-42c2-81e0-83576892e404@w7g2000hsa.googlegroups.com>

Scaling is done through the ``scale`` keyword, not through ``size``;
``size`` refers to the sample size for rvs.

I think what you want is this:

Use the distribution function directly, not frozen:

>>> beta.cdf(2371,a,b,scale=n)
array(0.59995766129911532)
>>> beta.ppf(0.6,a,b,scale=n)
array(2371.1617428411146)

Freeze the distribution at the distribution parameters, but not at the
value (x) at which you want to evaluate cdf, ppf, etc. In your case, x
was fixed at the initially defined values.

>>> beta(a,b,scale=n).cdf(2371)
array(0.59995766129911532)
>>> beta(a,b,scale=n).ppf(0.6)
array(2371.1617428411146)

>>> B = beta(a,b,scale=n)
>>> B.cdf(2371)
array(0.59995766129911532)
>>> B.ppf(0.6)
array(2371.1617428411146)

Josef

On Sep 19, 8:52 am, "David Huard"  wrote:
> Hi Michael,
>
> calling beta(a, b) will generate a "frozen distribution", that is one whose
> parameters are fixed by you:
>
> B = beta(a, b)   # note that you shouldn't pass x at this stage
>
> B.pdf(x)
> B.cdf(x)
>
> Now if you want to sample from the Beta, simply use the rvs method:
>
> B.rvs(1000)
>
> You can also simply do
>
> beta.rvs(a,b,size=1000)
>
> HTH,
>
> David
>
> On Fri, Sep 19, 2008 at 5:58 AM, Michael  wrote:
> > Hi list,
>
> > I am trying to use the percent point function to sample from the cdf for
> > a beta distribution.
>
> > beta.ppf() returns an array instead of x; also you cannot create a ppf
> > using the same idiom as beta.cdf(x,a,b,size=n).
>
> > How do I go about getting the ppf for a beta dist?
>
> > thanks in advance - much hairpulling on this
>
> > Michael
>
> > from scipy.stats import norm
> > from scipy.stats import beta
> > from scipy import linspace
>
> > print norm.cdf(1.2)             # 0.884930329778
> > print norm.ppf(0.884930329778)  # 1.2
>
> > a=2
> > b=7
> > n=10e3
> > x=linspace(0,1,n)
>
> > cdf=beta.cdf(x,a,b,size=n)
> > print cdf[2371]                 # 0.6
>
> > B=beta(x,a,b,size=n)
>
> > print B.cdf(2371)
> > print B.ppf(0.6)                # <-- should be 2371?
>
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lroubeyrie at limair.asso.fr  Fri Sep 19 04:14:49 2008
From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie)
Date: Fri, 19 Sep 2008 10:14:49 +0200
Subject: [SciPy-user] Timeseries Cannot specify output type twice
Message-ID: <48D35F79.6050106@limair.asso.fr>

Hi all,

trying to use the latest SVN version of the TimeSeries module, I ran
directly into a problem on a simple test like this one:

####################################
>>> import numpy as np
>>> import scikits.timeseries as ts
>>> data = np.random.uniform(-100,100,600)
>>> today = ts.now('B')
>>> series = ts.time_series(data, dtype=np.float_, freq='B',
start_date=today-600)
>>> series
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.5/site-packages/scikits/timeseries/tseries.py",
line 715, in __repr__
    return desc_short % {'data': str(self._series),
  File "/usr/lib/python2.5/site-packages/scikits/timeseries/tseries.py",
line 485, in _get_series
    return self.view(MaskedArray)
  File "/usr/lib/python2.5/site-packages/scikits/timeseries/tseries.py",
line 471, in view
    output = MaskedArray.view(self, dtype=dtype, type=type)
ValueError: Cannot specify output type twice.
>>> np.__version__
'1.1.1'
>>> ts.__version__
'0.67.0.dev-r1404'
####################################

Is it a known problem, or does it come from my installation?

Thanks for your help

-- 
Lionel Roubeyrie - lroubeyrie at limair.asso.fr
Chargé d'études et de maintenance
LIMAIR - la Surveillance de l'Air en Limousin
http://www.limair.asso.fr

From pgmdevlist at gmail.com  Fri Sep 19 12:18:20 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 19 Sep 2008 12:18:20 -0400
Subject: [SciPy-user] Timeseries Cannot specify output type twice
In-Reply-To: <48D35F79.6050106@limair.asso.fr>
References: <48D35F79.6050106@limair.asso.fr>
Message-ID: <200809191218.21054.pgmdevlist@gmail.com>

On Friday 19 September 2008 04:14:49 Lionel Roubeyrie wrote:
> Hi all,
> trying to use the latest SVN version of the TimeSeries module, I ran
> directly into a problem on a simple test like this one:

That's a known problem, sorry about that. You'll need numpy 1.2.1 (I
know, 1.2.0 is not even released yet...), or the latest SVN.
Alternatively, just replace line 471 of tseries.py with

>>> output = MaskedArray.view(self, dtype)

I'll commit a workaround today or tomorrow.

Thanks for your patience.

P.

From nwagner at iam.uni-stuttgart.de  Fri Sep 19 14:01:41 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 19 Sep 2008 20:01:41 +0200
Subject: [SciPy-user] scikits.ann
Message-ID: 

Hi all,

I cannot find a component "ann" in the trac
http://scipy.org/scipy/scikits/

Where can I report a bug?
python setup.py install
running install
Checking .pth file support in /usr/local/lib64/python2.5/site-packages
/usr/bin/python -E -c pass
TEST PASSED: /usr/local/lib64/python2.5/site-packages appears to support .pth files
running bdist_egg
running egg_info
running build_src
building extension "scikits.ann._ANN" sources
creating build
creating build/src.linux-x86_64-2.5
creating build/src.linux-x86_64-2.5/scikits
creating build/src.linux-x86_64-2.5/scikits/ann
swig++: scikits/ann/ANN.i
swig -python -c++ -Iscikits/ann -I/usr/local/include -o build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.cpp -outdir build/src.linux-x86_64-2.5/scikits/ann scikits/ann/ANN.i
building data_files sources
writing requirements to scikits.ann.egg-info/requires.txt
writing scikits.ann.egg-info/PKG-INFO
writing namespace_packages to scikits.ann.egg-info/namespace_packages.txt
writing top-level names to scikits.ann.egg-info/top_level.txt
writing dependency_links to scikits.ann.egg-info/dependency_links.txt
writing manifest file 'scikits.ann.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/lib.linux-x86_64-2.5
creating build/lib.linux-x86_64-2.5/scikits
copying ./scikits/__init__.py -> build/lib.linux-x86_64-2.5/scikits
creating build/lib.linux-x86_64-2.5/scikits/ann
copying scikits/ann/info.py -> build/lib.linux-x86_64-2.5/scikits/ann
copying scikits/ann/__init__.py -> build/lib.linux-x86_64-2.5/scikits/ann
copying scikits/ann/setup.py -> build/lib.linux-x86_64-2.5/scikits/ann
copying scikits/ann/version.py -> build/lib.linux-x86_64-2.5/scikits/ann
copying build/src.linux-x86_64-2.5/scikits/ann/ANN.py -> build/lib.linux-x86_64-2.5/scikits
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'scikits.ann._ANN' extension
compiling C++ sources
C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC
creating build/temp.linux-x86_64-2.5
creating build/temp.linux-x86_64-2.5/build
creating build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5
creating build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scikits
creating build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scikits/ann
compile options: '-Iscikits/ann -I/usr/local/include -I/usr/local/lib64/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c'
g++: build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.cpp
build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.cpp:2563:25: error: ANN/ANN.h: No such file or directory
In file included from build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.cpp:2564:
scikits/ann/kdtree.h:22:21: error: ANN/ANN.h: No such file or directory
scikits/ann/kdtree.h:101:3: warning: no newline at end of file
scikits/ann/kdtree.h:27: error: 'ANNpointArray' does not name a type
scikits/ann/kdtree.h:28: error: ISO C++ forbids declaration of 'ANNkd_tree' with no type
scikits/ann/kdtree.h:28: error: expected ';' before '*' token
scikits/ann/kdtree.h:43: error: 'ANN_KD_SUGGEST' was not declared in this scope
scikits/ann/kdtree.h: In constructor '_kdtree::_kdtree(double*, int, int, int, int)':
scikits/ann/kdtree.h:44: error: 'pts' was not declared in this scope
scikits/ann/kdtree.h:44: error: 'annAllocPts' was not declared in this scope
scikits/ann/kdtree.h:53: error: 'tree' was not declared in this scope
scikits/ann/kdtree.h:53: error: expected type-specifier before 'ANNkd_tree'
scikits/ann/kdtree.h:53: error: expected ';' before 'ANNkd_tree'
scikits/ann/kdtree.h: In destructor '_kdtree::~_kdtree()':
scikits/ann/kdtree.h:57: error: 'pts' was not declared in this scope
scikits/ann/kdtree.h:57: error: 'annDeallocPts' was not declared in this scope
scikits/ann/kdtree.h:58: error: 'tree' was not declared in this scope
scikits/ann/kdtree.h: In member function 'void _kdtree::_knn2(double*, int, int, int, int, int*, int, int, double*, double) const':
scikits/ann/kdtree.h:71: error: 'tree' was not declared in this scope
scikits/ann/kdtree.h: In member function 'const char* _kdtree::stringRep(bool) const':
scikits/ann/kdtree.h:98: error: 'tree' was not declared in this scope
scikits/ann/kdtree.h:98: error: 'ANNtrue' was not declared in this scope
scikits/ann/kdtree.h:98: error: 'ANNfalse' was not declared in this scope
error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -Iscikits/ann -I/usr/local/include -I/usr/local/lib64/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.cpp -o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scikits/ann/ANN_wrap.o" failed with exit status 1

Nils

From matthieu.brucher at gmail.com  Fri Sep 19 14:23:11 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 19 Sep 2008 20:23:11 +0200
Subject: [SciPy-user] scikits.ann
In-Reply-To: 
References: 
Message-ID: 

2008/9/19 Nils Wagner :
> Hi all,
>
> I cannot find a component "ann" in the trac
> http://scipy.org/scipy/scikits/
>
> Where can I report a bug?

Did you install ANN first?
Matthieu -- French PhD student Information System Engineer Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From barrywark at gmail.com Fri Sep 19 16:25:06 2008 From: barrywark at gmail.com (Barry Wark) Date: Fri, 19 Sep 2008 13:25:06 -0700 Subject: [SciPy-user] scikits.ann In-Reply-To: References: Message-ID: On Fri, Sep 19, 2008 at 11:01 AM, Nils Wagner wrote: > Hi all, > > I cannot find a component "ann" in the trac > http://scipy.org/scipy/scikits/ > > Where can I report a bug ? Nils, Some documentation is available at http://scipy.org/scipy/scikits/wiki/AnnWrapper. Let me know if that doesn't do the trick for you. Barry From nwagner at iam.uni-stuttgart.de Sat Sep 20 03:28:38 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 20 Sep 2008 09:28:38 +0200 Subject: [SciPy-user] scikits.ann In-Reply-To: References: Message-ID: On Fri, 19 Sep 2008 20:23:11 +0200 "Matthieu Brucher" wrote: > 2008/9/19 Nils Wagner : >> Hi all, >> >> I cannot find a component "ann" in the trac >> http://scipy.org/scipy/scikits/ >> >> Where can I report a bug ? > > Did you install ANN first ? > Sorry, I missed that. Now it works for me. Thank you very much ! Nils From contact at pythonxy.com Sat Sep 20 11:31:01 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 20 Sep 2008 17:31:01 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.0 Message-ID: <48D51735.4050102@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.1.0 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, Light Edition, Custom Edition and Update) *Major update* o Python(x,y) installation is now fully customizable: install only what you need! (dependencies are handled by the installer) (http://www.pythonxy.com/plugins.php) o Python(x,y) is now designed for (instead of compatible with) Windows Vista as well as for Windows XP (http://www.pythonxy.com/_tools/img.php?lang=&img=//_images/Interactive%20computing.png) Version 2.1.0 (09-19-2008) * Added: o Windows installer: fully customizable installation thanks to the new plugin-based installer - now you can install only what you need among all available Python(x,y) plugins (Python, Eclipse and Others) * Updated: o xy 1.0.6 (Installed plugins detection) o IPython 0.9.1 o Parallel Python 1.5.6 * Corrected: o Issue 14: Windows Vista / default Eclipse workspace folder was incorrect if "My Documents" folder has been moved from its default location o Issue 16: PyQt4 / plugins not found in Qt Designer Regards, Pierre Raybaut From massimo.sandal at unibo.it Sun Sep 21 14:46:43 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Sun, 21 Sep 2008 20:46:43 +0200 Subject: [SciPy-user] scipy.genetic In-Reply-To: <1221849174.7186.3.camel@mik> References: <1221849174.7186.3.camel@mik> Message-ID: <48D69693.2060809@unibo.it> Hi Michael, Michael wrote: > Hi Massimo, > > thanks for raising this on the list: i have been hunting around for ga > examples for a while. > > to get hold of the genetic examples: > > 0) install subversion aka svn > 1) download the learn repository > svn co http://svn.scipy.org/svn/scikits/trunk/learn learn > the examples are in ~/learn/scikits/learn/machine/ga > > each directory can be individually installed (i think) > > e.g. 
maybe by 'python setup.py build' if you want to build it in place > > hope that helps Thanks a lot - the localization of the example files was the one I was missing. m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 From gael.varoquaux at normalesup.org Mon Sep 22 10:08:34 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 22 Sep 2008 16:08:34 +0200 Subject: [SciPy-user] Proceedings of the SciPy conference. Message-ID: <20080922140834.GA5840@phare.normalesup.org> The SciPy conference proceedings are finally available online: http://conference.scipy.org/proceedings/SciPy2008 . I hope you enjoy them. I find it great to have this set of excellent articles talking about works done with, or for, Python in science. For me, it is a reference to remember what was said at the conference. I hope it can also be interesting for people who were not present at the conference. I apologize for being so slow at publishing them. In addition to the round trip between authors and editors taking a while, I have been travelling back home and spent way too much time last week finishing off administrative duties in the US. Gaël From stefan at sun.ac.za Mon Sep 22 10:21:43 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 22 Sep 2008 16:21:43 +0200 Subject: [SciPy-user] Proceedings of the SciPy conference. In-Reply-To: <20080922140834.GA5840@phare.normalesup.org> References: <20080922140834.GA5840@phare.normalesup.org> Message-ID: <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> 2008/9/22 Gael Varoquaux : > The SciPy conference proceedings are finally available online: > http://conference.scipy.org/proceedings/SciPy2008 . Gael, thank you very much for all the time you invested in this publication. I know you spent many nights hacking TeX, beating the processing pipeline into shape, reading, coordinating and encouraging. Your efforts did not go by unseen! Regards Stéfan From gael.varoquaux at normalesup.org Mon Sep 22 10:26:40 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 22 Sep 2008 16:26:40 +0200 Subject: [SciPy-user] Proceedings of the SciPy conference. In-Reply-To: <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> References: <20080922140834.GA5840@phare.normalesup.org> <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> Message-ID: <20080922142640.GB18576@phare.normalesup.org> On Mon, Sep 22, 2008 at 04:21:43PM +0200, Stéfan van der Walt wrote: > Gael, thank you very much for all the time you invested in this > publication. I know you spent many nights hacking TeX, beating the > processing pipeline into shape, reading, coordinating and encouraging. > Your efforts did not go by unseen! Thank you for your kind words Stéfan. And also thanks for your help with the webapp. It was a pleasure making the ugliest and most effective hack of my life with you (sorry for all the gunk I left in there). Gaël From prabhu at aero.iitb.ac.in Mon Sep 22 10:21:54 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Mon, 22 Sep 2008 19:51:54 +0530 Subject: [SciPy-user] Proceedings of the SciPy conference.
In-Reply-To: <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> References: <20080922140834.GA5840@phare.normalesup.org> <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> Message-ID: <48D7AA02.9090709@aero.iitb.ac.in> Stéfan van der Walt wrote: > 2008/9/22 Gael Varoquaux : >> The SciPy conference proceedings are finally available online: >> http://conference.scipy.org/proceedings/SciPy2008 . > > Gael, thank you very much for all the time you invested in this > publication. I know you spent many nights hacking TeX, beating the > processing pipeline into shape, reading, coordinating and encouraging. > Your efforts did not go by unseen! I second that! Thanks a ton Gaël for doing such a great job! cheers, prabhu From Karl.Young at ucsf.edu Mon Sep 22 12:27:49 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Mon, 22 Sep 2008 09:27:49 -0700 Subject: [SciPy-user] Proceedings of the SciPy conference. In-Reply-To: <48D7AA02.9090709@aero.iitb.ac.in> References: <20080922140834.GA5840@phare.normalesup.org> <9457e7c80809220721k1068f1c4rf0c4a3fde02733ff@mail.gmail.com> <48D7AA02.9090709@aero.iitb.ac.in> Message-ID: <48D7C785.5000707@ucsf.edu> Prabhu Ramachandran wrote: >Stéfan van der Walt wrote: > > >>2008/9/22 Gael Varoquaux : >> >> >>>The SciPy conference proceedings are finally available online: >>>http://conference.scipy.org/proceedings/SciPy2008 . >>> >>> >>Gael, thank you very much for all the time you invested in this >>publication. I know you spent many nights hacking TeX, beating the >>processing pipeline into shape, reading, coordinating and encouraging. >> Your efforts did not go by unseen! >> >> > >I second that! Thanks a ton Gaël for doing such a great job! > > > I have to chime in too; thanks for all the hours and all the great work Gael; much appreciated ! -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From david at ar.media.kyoto-u.ac.jp Mon Sep 22 13:37:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 23 Sep 2008 02:37:19 +0900 Subject: [SciPy-user] Talkbox 0.1: a scikit for signal processing with a focus on audio Message-ID: <48D7D7CF.3030704@ar.media.kyoto-u.ac.jp> Hi there, A quick email to announce the first release of talkbox, a new scikit of mine. It aims at providing missing features in scipy for signal processing, with a focus on audio (speech and music mostly). The 0.1 release is mainly a refactoring of some random code I had lying around on my machine: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/talkbox/index.html Code available on scikits svn. For this 0.1 release: - code for linear prediction coding (LPC) computation: it solves the Yule-Walker equations, both by straightforward system inversion and by Levinson-Durbin recursion. A C implementation of Levinson-Durbin for real, double autocorrelation is available: it should be reasonably fast (at least comparable to matlab's lpc implementation, I think). - methods for spectral estimation: a basic periodogram is implemented.
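To give an idea, the system-inversion route is only a few lines of numpy. A rough sketch of the underlying idea (illustrative only -- this is not the talkbox code, and the function name is made up):

import numpy as np
from scipy.linalg import toeplitz

def lpc_by_inversion(x, order):
    # Illustrative LPC via the Yule-Walker equations, solved directly.
    x = np.asarray(x, dtype=float)
    n = len(x)
    # biased autocorrelation estimates r[0], ..., r[order]
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)]) / n
    # Yule-Walker system: R a = -r[1:], with R the Toeplitz autocorrelation matrix
    a = np.linalg.solve(toeplitz(r[:order]), -r[1:])
    return np.concatenate(([1.], a))  # coefficients of the error filter A(z)

Levinson-Durbin solves the same Toeplitz system in O(order^2) operations instead of the generic O(order^3) of a dense solve, which is where the C implementation pays off.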
cheers, David From rmay31 at gmail.com Mon Sep 22 14:36:55 2008 From: rmay31 at gmail.com (Ryan May) Date: Mon, 22 Sep 2008 13:36:55 -0500 Subject: [SciPy-user] Talkbox 0.1: a scikit for signal processing with a focus on audio In-Reply-To: <48D7D7CF.3030704@ar.media.kyoto-u.ac.jp> References: <48D7D7CF.3030704@ar.media.kyoto-u.ac.jp> Message-ID: <48D7E5C7.7010006@gmail.com> David Cournapeau wrote: > Hi there, > > A quick email to announce the first release of talkbox, a new scikit > of mine. It aims at providing missing features in scipy for signal > processing, with a focus on audio (speech and music mostly). The 0.1 > release is mainly a refactoring of some random code I had lying around on my machine: > > http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/talkbox/index.html > > Code available on scikits svn. For this 0.1 release: > > - code for linear prediction coding (LPC) computation: it solves the > Yule-Walker equations, both by straightforward system inversion > and by Levinson-Durbin recursion. A C implementation of Levinson-Durbin for real, > double autocorrelation is available: it should be reasonably fast > (at least comparable to matlab's lpc implementation, I think). > - methods for spectral estimation: a basic periodogram is implemented. Interesting stuff. Given that some of this is generally quite useful across the board (Levinson-Durbin, periodogram), I'm hoping that at some point it would be possible to push this into mainline SciPy (after a period of showing stability and utility)? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From david at ar.media.kyoto-u.ac.jp Mon Sep 22 14:57:22 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 23 Sep 2008 03:57:22 +0900 Subject: [SciPy-user] Talkbox 0.1: a scikit for signal processing with a focus on audio In-Reply-To: <48D7E5C7.7010006@gmail.com> References: <48D7D7CF.3030704@ar.media.kyoto-u.ac.jp> <48D7E5C7.7010006@gmail.com> Message-ID: <48D7EA92.70708@ar.media.kyoto-u.ac.jp> Ryan May wrote: > > Interesting stuff. Given that some of this is generally quite useful > across the board (Levinson-Durbin, periodogram), I'm hoping that at some > point it would be possible to push this into mainline SciPy (after a > period of showing stability and utility)? > yes, that's one stated goal, and the reason why I use BSD code only (I could go faster using, for example, fftw for a lot of the dct and co). The plan is for 1.0 to implement most of the general signal processing things I need that are available in matlab, and then for 2.0, more specialized stuff (MDCT, mfcc, etc...). cheers, David From travis at enthought.com Mon Sep 22 16:18:17 2008 From: travis at enthought.com (Travis Vaught) Date: Mon, 22 Sep 2008 15:18:17 -0500 Subject: [SciPy-user] ANN: Enthought Python Distribution 4.0.300 Beta 2 available Message-ID: <75D8561B-D264-4F32-AE0B-75CF0C074600@enthought.com> Greetings, We've recently posted the second beta release of the Enthought Python Distribution (EPD) for our upcoming general release of version 4.0.300 with Python 2.5. You may download the beta from here: http://www.enthought.com/products/epdbeta.php Please feel free to test it out and provide feedback on the EPD Trac instance: https://svn.enthought.com/epd You can check out the release notes here: http://www.enthought.com/products/epdbetareleasenotes.php About EPD --------- The Enthought Python Distribution (EPD) is a "kitchen-sink-included" distribution of the Python™
Programming Language, including over 60 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, database adapters, and a lot of other tools right out of the box. http://www.enthought.com/products/epd.php It is currently available as a single-click installer for Windows XP (x86), Mac OS X (a universal binary for Intel 10.4 and above) and RedHat EL3 (x86 and amd64). EPD is free for academic use. An annual Subscription and installation support are available for individual commercial use (http://www.enthought.com/products/epddownload.php ). An Enterprise Subscription with support for particular deployment environments is also available for commercial purchase (http://www.enthought.com/products/enterprise.php ). The Beta versions of EPD are available for indefinite free trial. Thanks, Travis From pitfall66 at freenet.de Tue Sep 23 08:22:48 2008 From: pitfall66 at freenet.de (Unknown) Date: Tue, 23 Sep 2008 14:22:48 +0200 Subject: [SciPy-user] vstack / hstack question n dim array without value Message-ID: <1222172568.13358.11.camel@localhost.whnet> Hello, I am programming this for the Machine Learning course (I am studying Computer Science). My question is: can I somehow get rid of the "indices_buffer"? That is, is there an easy way to initialize n-dim arrays without values? I also came up with filling the array with a column of zeros and deleting this column later, but that is not very elegant either. Specifically this code:

indices_buffer = []
if (indices_buffer == []):
    indices_buffer = tmp
else:
    indices_buffer = hstack((indices_buffer, tmp))

--------------------------------------------

data
0.3 0.4 0.4
0.2 0.1 0.3
0.3 0.2 0.4
...

cls
1
1
2
...

def splitcrossval(data, nFold, cls):
    indices_buffer = []
    (rows, columns) = data.shape
    classes = unique(cls)
    for i in classes:
        indices = where(cls == i)[0]
        tmp = hsplit((permutation(indices)), nFold)
        if (indices_buffer == []):
            indices_buffer = tmp
        else:
            indices_buffer = hstack((indices_buffer, tmp))
    return indices_buffer

From sebastian at sipsolutions.net Tue Sep 23 08:47:15 2008 From: sebastian at sipsolutions.net (Sebastian Stephan Berg) Date: Tue, 23 Sep 2008 14:47:15 +0200 Subject: [SciPy-user] vstack / hstack question n dim array without value In-Reply-To: <1222172568.13358.11.camel@localhost.whnet> References: <1222172568.13358.11.camel@localhost.whnet> Message-ID: <1222174035.10406.2.camel@sebook> I think you're looking for this:

In [6]: a = empty((3,0))  # Zero size array
...
In [7]: b = ones((3,27))
In [8]: hstack((a, b))  # Works

Sebastian From cohen at slac.stanford.edu Tue Sep 23 10:22:05 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 23 Sep 2008 16:22:05 +0200 Subject: [SciPy-user] Regarding Genetic algorithm In-Reply-To: <48D39442.4080905@unibo.it> References: <7869a7600809180031j282e0221xc21a67b85d540cd5@mail.gmail.com> <48D39442.4080905@unibo.it> Message-ID: <48D8FB8D.4010408@slac.stanford.edu> hi, I think that Nils's point is that ga is not in scipy anymore, but in scikits, and the trac url tells you how to get it: it is in the learn package of scikits. hth, Johann massimo sandal wrote: > Nils Wagner wrote: >> On Thu, 18 Sep 2008 13:01:44 +0530 >> "utkarsh wagh" wrote: >>> Hi, >>> >>> Can anyone help me out in using the Genetic algorithm (ga) >>> subpackage in >>> Scipy.
If possible can anyone send me the sample codes >>> >>> thank you, >>> >>> -- >>> Utkarsh Wagh >>> IIT Delhi >>> Contact no: +91 9990646707 >> >> See >> >> http://projects.scipy.org/scipy/scipy/ticket/484 > > Sorry but I fail to understand how this page can help. > > m. From cohen at slac.stanford.edu Tue Sep 23 10:22:38 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 23 Sep 2008 16:22:38 +0200 Subject: [SciPy-user] scipy.genetic In-Reply-To: <48D69693.2060809@unibo.it> References: <1221849174.7186.3.camel@mik> <48D69693.2060809@unibo.it> Message-ID: <48D8FBAE.4070607@slac.stanford.edu> note that the examples.py starts with a 'from scipy import ga', which I think is obsolete. Johann massimo sandal wrote: > Hi Michael, > > Michael wrote: >> Hi Massimo, >> >> thanks for raising this on the list: i have been hunting around for ga >> examples for a while. >> >> to get hold of the genetic examples: >> >> 0) install subversion aka svn >> 1) download the learn repository svn co >> http://svn.scipy.org/svn/scikits/trunk/learn learn >> the examples are in ~/learn/scikits/learn/machine/ga >> >> each directory can be individually installed (i think) >> e.g. maybe by 'python setup.py build' if you want to build it in place >> >> hope that helps > > Thanks a lot - the localization of the example files was the one I was > missing. > > m. From lee.will at gmail.com Tue Sep 23 10:37:23 2008 From: lee.will at gmail.com (Will Lee) Date: Tue, 23 Sep 2008 09:37:23 -0500 Subject: [SciPy-user] pyloess in scipy? Message-ID: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> I'm looking to update scipy to something closer to the trunk and I've discovered that the pyloess.py package under sandbox is missing. Searching the mailing list gives me an impression that many modules are now moved to scikits. Can somebody point me to where this package is after the refactoring? Thanks, Will From williams at astro.ox.ac.uk Tue Sep 23 10:23:28 2008 From: williams at astro.ox.ac.uk (Michael Williams) Date: Tue, 23 Sep 2008 15:23:28 +0100 Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object' Message-ID: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> Hi, I'm having trouble installing scipy-0.6.0.tar.gz according to the instructions on http://www.scipy.org/Installing_SciPy/Mac_OS_X.
When I run "python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build" I get the following error (email continues below): Traceback (most recent call last): File "setup.py", line 53, in setup_package() File "setup.py", line 45, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 121, in run self.build_extensions() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build_ext.py", line 416, in build_extensions self.build_extension(ext) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 312, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' Relevant version information follows: asosx40:~$ gfortran --version GNU Fortran (GCC) 4.3.0 20070810 (experimental) asosx40:~$ gcc --version i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5484) asosx40:~$ python --version Python 2.5.1 (This is the stock OS X 10.5 Python installation.) I get the same error when I try this with the current scipy from svn. 
I get a different error when I try to build using g77 by doing "python setup.py build_src build_clib --fcompiler=gnu build_ext --fcompiler=gnu build": creating build/lib.macosx-10.5-i386-2.5/scipy/fftpack /usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.5-i386-2.5/build/src.macosx-10.5-i386-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.5-i386-2.5/build/src.macosx-10.5-i386-2.5/fortranobject.o -L/usr/local/lib/gcc/i686-apple-darwin8.8.1/3.4.0 -Lbuild/temp.macosx-10.5-i386-2.5 -ldfftpack -lg2c -lcc_dynamic -o build/lib.macosx-10.5-i386-2.5/scipy/fftpack/_fftpack.so ld: library not found for -lcc_dynamic collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.5-i386-2.5/build/src.macosx-10.5-i386-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.5-i386-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.5-i386-2.5/build/src.macosx-10.5-i386-2.5/fortranobject.o -L/usr/local/lib/gcc/i686-apple-darwin8.8.1/3.4.0 -Lbuild/temp.macosx-10.5-i386-2.5 -ldfftpack -lg2c -lcc_dynamic -o build/lib.macosx-10.5-i386-2.5/scipy/fftpack/_fftpack.so" failed with exit status 1 I would be very grateful for any suggestions! Thanks, -- Michael Williams http://www-astro.physics.ox.ac.uk/~williams/ From pgmdevlist at gmail.com Tue Sep 23 11:09:35 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 23 Sep 2008 11:09:35 -0400 Subject: [SciPy-user] pyloess in scipy? In-Reply-To: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> References: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> Message-ID: <200809231109.35988.pgmdevlist@gmail.com> On Tuesday 23 September 2008 10:37:23 Will Lee wrote: > I'm looking to update scipy to something closer to the trunk and I've > discovered that the pyloess.py package under sandbox is missing. Searching > the mailing list gives me an impression that many modules are now moved to > scikits. Can somebody point me to where this package is after the > refactoring? AFAIK, nobody ported pyloess to scikits. I still have the sources of the sandbox version lurking on my hard drive. I could send them to you if you're interested. I could also try to create a specific scikits, depending on whether other people request it (and if it's not a problem for anybody). Let me know. From millman at berkeley.edu Tue Sep 23 12:45:45 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 23 Sep 2008 09:45:45 -0700 Subject: [SciPy-user] pyloess in scipy? In-Reply-To: <200809231109.35988.pgmdevlist@gmail.com> References: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> <200809231109.35988.pgmdevlist@gmail.com> Message-ID: On Tue, Sep 23, 2008 at 8:09 AM, Pierre GM wrote: > On Tuesday 23 September 2008 10:37:23 Will Lee wrote: >> I'm looking to update scipy to something closer to the trunk and I've >> discovered that the pyloess.py package under sandbox is missing.
Searching >> the mailing list gives me an impression that many modules are now moved to >> scikits. Can somebody point me to where this package is after the >> refactoring? > > AFAIK, nobody ported pyloess to scikits. I still have the sources of the > sandbox version lurking on my hard drive. I could send them to you if you're > interested. I could also try to create a specific scikits, depending on > whether other people request it (and if it's not a problem for anybody). Let > me know. I branched scipy before removing the sandbox: http://projects.scipy.org/scipy/scipy/browser/branches/sandbox Here is pyloess: http://projects.scipy.org/scipy/scipy/browser/branches/sandbox/scipy/sandbox/pyloess -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From pgmdevlist at gmail.com Tue Sep 23 13:03:05 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 23 Sep 2008 13:03:05 -0400 Subject: [SciPy-user] pyloess in scipy? In-Reply-To: References: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> <200809231109.35988.pgmdevlist@gmail.com> Message-ID: <200809231303.06494.pgmdevlist@gmail.com> On Tuesday 23 September 2008 12:45:45 Jarrod Millman wrote: > Here is pyloess: > http://projects.scipy.org/scipy/scipy/browser/branches/sandbox/scipy/sandbox/pyloess Jarrod, Thanks a lot. Would it be useful to clean up the package and make it a proper scikits ? From robert.kern at gmail.com Tue Sep 23 13:39:48 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Sep 2008 12:39:48 -0500 Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object' In-Reply-To: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> Message-ID: <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> On Tue, Sep 23, 2008 at 09:23, Michael Williams wrote: > Hi, > > I'm having trouble installing scipy-0.6.0.tar.gz according to the > instructions on http://www.scipy.org/Installing_SciPy/Mac_OS_X. > > When I run "python setup.py build_src build_clib --fcompiler=gnu95 > build_ext --fcompiler=gnu95 build" I get the following error (email > continues below): Can you show us more of the build log? Particularly the parts during the Fortran compiler detection. What version of numpy are you using? > I get a different error when I try to build using g77 by doing "python > setup.py build_src build_clib --fcompiler=gnu build_ext -- > fcompiler=gnu build": Yeah, you just can't use g77 with a Universal Python. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From williams at astro.ox.ac.uk Tue Sep 23 14:22:37 2008 From: williams at astro.ox.ac.uk (Michael Williams) Date: Tue, 23 Sep 2008 19:22:37 +0100 Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object' In-Reply-To: <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> Message-ID: Hi Robert, thanks very much. On 23 Sep 2008, at 18:39, Robert Kern wrote: > Can you show us more of the build log? Particularly the parts during > the Fortran compiler detection. What version of numpy are you using? numpy is version 1.1.1.
I'm very confused because I'm getting a different error message when I install scipy. I'm pretty sure I haven't done anything that has changed my setup. To confirm that this new error message is real I removed my site-packages directory and ran "easy_install numpy". The last 100 lines of the build log, which seem to include the detection of the Fortran compiler, are below. The full log is here: http://rafb.net/p/Sue0Wv11.html . Hope this helps. Thanks, -- Mike running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.5-i386-2.5 creating build/temp.macosx-10.5-i386-2.5/scipy creating build/temp.macosx-10.5-i386-2.5/scipy/fftpack creating build/temp.macosx-10.5-i386-2.5/scipy/fftpack/dfftpack compile options: '-c' gfortran:f77: scipy/fftpack/dfftpack/dcosqb.f gfortran:f77: scipy/fftpack/dfftpack/dcosqf.f gfortran:f77: scipy/fftpack/dfftpack/dcosqi.f gfortran:f77: scipy/fftpack/dfftpack/dcost.f gfortran:f77: scipy/fftpack/dfftpack/dcosti.f gfortran:f77: scipy/fftpack/dfftpack/dfftb.f gfortran:f77: scipy/fftpack/dfftpack/dfftb1.f gfortran:f77: scipy/fftpack/dfftpack/dfftf.f gfortran:f77: scipy/fftpack/dfftpack/dfftf1.f gfortran:f77: scipy/fftpack/dfftpack/dffti.f gfortran:f77: scipy/fftpack/dfftpack/dffti1.f scipy/fftpack/dfftpack/dffti1.f: In function ‘dffti1’: scipy/fftpack/dfftpack/dffti1.f:11: warning: ‘ntry’ may be used uninitialized in this function gfortran:f77: scipy/fftpack/dfftpack/dsinqb.f gfortran:f77: scipy/fftpack/dfftpack/dsinqf.f gfortran:f77: scipy/fftpack/dfftpack/dsinqi.f gfortran:f77: scipy/fftpack/dfftpack/dsint.f gfortran:f77: scipy/fftpack/dfftpack/dsint1.f gfortran:f77: scipy/fftpack/dfftpack/dsinti.f gfortran:f77: scipy/fftpack/dfftpack/zfftb.f gfortran:f77: scipy/fftpack/dfftpack/zfftb1.f gfortran:f77: scipy/fftpack/dfftpack/zfftf.f gfortran:f77: scipy/fftpack/dfftpack/zfftf1.f gfortran:f77: scipy/fftpack/dfftpack/zffti.f gfortran:f77: scipy/fftpack/dfftpack/zffti1.f scipy/fftpack/dfftpack/zffti1.f: In function ‘zffti1’: scipy/fftpack/dfftpack/zffti1.f:11: warning: ‘ntry’ may be used uninitialized in this function ar: adding 23 object files to build/temp.macosx-10.5-i386-2.5/libdfftpack.a ranlib:@ build/temp.macosx-10.5-i386-2.5/libdfftpack.a building 'linpack_lite' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.5-i386-2.5/scipy/integrate creating build/temp.macosx-10.5-i386-2.5/scipy/integrate/linpack_lite compile options: '-c' gfortran:f77: scipy/integrate/linpack_lite/dgbfa.f gfortran:f77: scipy/integrate/linpack_lite/dgbsl.f gfortran:f77: scipy/integrate/linpack_lite/dgefa.f gfortran:f77: scipy/integrate/linpack_lite/dgesl.f gfortran:f77: scipy/integrate/linpack_lite/dgtsl.f ar: adding 5 object files to build/temp.macosx-10.5-i386-2.5/liblinpack_lite.a ranlib:@ build/temp.macosx-10.5-i386-2.5/liblinpack_lite.a building 'mach' library using additional config_fc from setup script for fortran compiler: {'noopt': ('scipy/integrate/setup.py', 1)} customize Gnu95FCompiler compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC creating build/temp.macosx-10.5-i386-2.5/scipy/integrate/mach compile options: '-c' gfortran:f77: scipy/integrate/mach/d1mach.f gfortran:f77: scipy/integrate/mach/i1mach.f gfortran:f77: scipy/integrate/mach/r1mach.f gfortran:f77: scipy/integrate/mach/xerror.f scipy/integrate/mach/xerror.f:1.40: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l2' at (1) scipy/integrate/mach/xerror.f:1.37: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l1' at (1) ar: adding 4 object files to build/temp.macosx-10.5-i386-2.5/libmach.a ranlib:@ build/temp.macosx-10.5-i386-2.5/libmach.a building 'quadpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.5-i386-2.5/scipy/integrate/quadpack compile options: '-c' gfortran:f77: scipy/integrate/quadpack/dqag.f gfortran:f77: scipy/integrate/quadpack/dqage.f gfortran:f77: scipy/integrate/quadpack/dqagi.f gfortran:f77: scipy/integrate/quadpack/dqagie.f scipy/integrate/quadpack/dqagie.f: In function ‘dqagie’: scipy/integrate/quadpack/dqagie.f:154: warning: ‘small’ may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:153: warning: ‘ertest’ may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:152: warning: ‘erlarg’ may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:151: warning: ‘correc’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqagp.f gfortran:f77: scipy/integrate/quadpack/dqagpe.f scipy/integrate/quadpack/dqagpe.f: In function ‘dqagpe’: scipy/integrate/quadpack/dqagpe.f:196: warning: ‘k’ may be used uninitialized in this function scipy/integrate/quadpack/dqagpe.f:191: warning: ‘correc’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqags.f gfortran:f77: scipy/integrate/quadpack/dqagse.f scipy/integrate/quadpack/dqagse.f: In function ‘dqagse’: scipy/integrate/quadpack/dqagse.f:153: warning: ‘small’ may be used uninitialized in this function scipy/integrate/quadpack/dqagse.f:152: warning: ‘ertest’ may be used uninitialized in this function scipy/integrate/quadpack/dqagse.f:151: warning: ‘erlarg’ may be used uninitialized in this function scipy/integrate/quadpack/dqagse.f:150: warning: ‘correc’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqawc.f gfortran:f77: scipy/integrate/quadpack/dqawce.f gfortran:f77: scipy/integrate/quadpack/dqawf.f gfortran:f77: scipy/integrate/quadpack/dqawfe.f scipy/integrate/quadpack/dqawfe.f: In function ‘dqawfe’: scipy/integrate/quadpack/dqawfe.f:203: warning: ‘ll’ may be used uninitialized in this function scipy/integrate/quadpack/dqawfe.f:200: warning: ‘drl’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqawo.f gfortran:f77: scipy/integrate/quadpack/dqawoe.f scipy/integrate/quadpack/dqawoe.f: In function ‘dqawoe’: scipy/integrate/quadpack/dqawoe.f:208: warning: ‘ertest’ may be used uninitialized in this function scipy/integrate/quadpack/dqawoe.f:207: warning: ‘erlarg’ may be used uninitialized in this function scipy/integrate/quadpack/dqawoe.f:206: warning: ‘correc’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqaws.f gfortran:f77: scipy/integrate/quadpack/dqawse.f gfortran:f77: scipy/integrate/quadpack/dqc25c.f gfortran:f77: scipy/integrate/quadpack/dqc25f.f scipy/integrate/quadpack/dqc25f.f: In function ‘dqc25f’: scipy/integrate/quadpack/dqc25f.f:103: warning: ‘m’ may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqc25s.f gfortran:f77: scipy/integrate/quadpack/dqcheb.f gfortran:f77: scipy/integrate/quadpack/dqelg.f scipy/integrate/quadpack/dqelg.f: In function ‘dqelg’: scipy/integrate/quadpack/dqelg.f:1: internal compiler error: vector VEC(tree,base) index domain error, in build_classic_dist_vector_1 at tree-data-ref.c:2725 Please submit a full bug report, with preprocessed source if appropriate. See for instructions.
error: Command "/usr/local/bin/gfortran -Wall -ffixed-form -fno-second- underscore -fPIC -O3 -funroll-loops -c -c scipy/integrate/quadpack/ dqelg.f -o build/temp.macosx-10.5-i386-2.5/scipy/integrate/quadpack/ dqelg.o" failed with exit status 1 From robert.kern at gmail.com Tue Sep 23 14:53:21 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Sep 2008 13:53:21 -0500 Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object' In-Reply-To: References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> Message-ID: <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com> On Tue, Sep 23, 2008 at 13:22, Michael Williams wrote: > Hi Robert, > > thanks very much. > > On 23 Sep 2008, at 18:39, Robert Kern wrote: >> Can you show us more of the build log? Particularly the parts during >> the Fortran compiler detection. What version of numpy are you using? > > numpy is version 1.1.1. I'm very confused because I'm getting a > different error message when I install scipy. I'm pretty sure I > haven't done anything that has changed my setup. To confirm that this > new error message is real I removed my site-packages directory and ran > "easy_install numpy". > > The last 100 lines of the build log, which seem to include the > detection of the Fortran compiler, are below. The full log is here: http://rafb.net/p/Sue0Wv11.html > . Unfortunately, now it looks like you are dealing with a bug in gfortran. Can you downgrade to something earlier than 4.3.0? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From andreas.matthias at gmail.com Tue Sep 23 16:46:16 2008 From: andreas.matthias at gmail.com (Andreas Matthias) Date: Tue, 23 Sep 2008 22:46:16 +0200 Subject: [SciPy-user] reading character arrays from fortran file Message-ID: I'm trying to read a character array from a binary fortran file and the following code seems to work: from scipy.io import fopen fd1 = fopen.fopen('fort', permission='rb', format='n') print fd1.fort_read(0, dtype='str') But then I get this deprecation warning saying that I should use npfile instead of fopen. Unfortunately, I don't get npfile to do the same as the code above. I tried the following which does not work: from scipy.io import npfile fd2 = npfile('fort') print fd2.read_array(dt='str', shape=-1) What's the correct way to do it with npfile? Ciao Andreas From kdsudac at yahoo.com Tue Sep 23 18:28:56 2008 From: kdsudac at yahoo.com (Keith Suda-Cederquist) Date: Tue, 23 Sep 2008 15:28:56 -0700 (PDT) Subject: [SciPy-user] Pickling Large (Image) Arrays Message-ID: <568345.52718.qm@web54305.mail.re2.yahoo.com> Hi All, I'm doing some image processing on some rather large images (2000x2000 pixels and each pixel has 16 bits) so the file comes in at 7-8 MB. During the image processing I convert the image to a 64-bit float numpy array and do a bunch of operations on the image. In certain cases (where tests fail), I'd like to save all the data to a file to take a look at later and debug. I need to keep the size of this file as small as possible. I'm thinking of writing some code that will round pixel values to an 8-bit unsigned integer and then pickle the data to a file. Is this the a good approach? Can anyone suggest a better approach? 
Will this actually succeed in reducing the file size, or will I just be wasting my time? Thanks, Keith From peridot.faceted at gmail.com Tue Sep 23 19:13:16 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 23 Sep 2008 19:13:16 -0400 Subject: [SciPy-user] Pickling Large (Image) Arrays In-Reply-To: <568345.52718.qm@web54305.mail.re2.yahoo.com> References: <568345.52718.qm@web54305.mail.re2.yahoo.com> Message-ID: 2008/9/23 Keith Suda-Cederquist : > I'm doing some image processing on some rather large images (2000x2000 > pixels and each pixel has 16 bits) so the file comes in at 7-8 MB. During > the image processing I convert the image to a 64-bit float numpy array and > do a bunch of operations on the image. > > In certain cases (where tests fail), I'd like to save all the data to a file > to take a look at later and debug. I need to keep the size of this file as > small as possible. > > I'm thinking of writing some code that will round pixel values to an 8-bit > unsigned integer and then pickle the data to a file. Is this a good > approach? Can anyone suggest a better approach? Will this actually succeed > in reducing the file size, or will I just be wasting my time? First of all, the current release of numpy includes a native file format that is fairly efficient, fast, and portable. If you can, it's probably better to use that than pickles. But by itself it won't save all that much space: almost all the space in either format is taken up by the pixel array. If you convert the pixel array to one with dtype uint8 or uint16, you'll use one (or two) bytes per pixel instead of eight. You do of course lose information this way, and if this obscures why the test is failing, it will be quite frustrating. If the data is very compressible, you could look into using the Python Imaging Library to save it in some compressed image format, though this will almost certainly lose even more information. Anne From mscipy at googlemail.com Wed Sep 24 03:57:43 2008 From: mscipy at googlemail.com (Saber Mbarek) Date: Wed, 24 Sep 2008 07:57:43 +0000 Subject: [SciPy-user] importing pylab?? Message-ID: <90ec20e0809240057h64f91941t6a246678a098be43@mail.gmail.com> Hi, I installed scipy a few days ago on my laptop (with os linux debian), but while importing the pylab module from the (installed) python-matplotlib I got the following error: ------------------------------------------------------------------------------------------------------------- Python 2.5.2 (r252:60911, Aug 8 2008, 09:22:44) [GCC 4.3.1] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> import numpy
>>> import pylab
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/pylab.py", line 1, in <module>
    from matplotlib.pylab import *
  File "/usr/lib/python2.5/site-packages/matplotlib/__init__.py", line 128, in <module>
    from rcsetup import defaultParams, validate_backend, validate_toolbar
  File "/usr/lib/python2.5/site-packages/matplotlib/rcsetup.py", line 18, in <module>
    from matplotlib.colors import is_color_like
  File "/usr/lib/python2.5/site-packages/matplotlib/colors.py", line 39, in <module>
    import matplotlib.cbook as cbook
  File "/usr/lib/python2.5/site-packages/matplotlib/cbook.py", line 14, in <module>
    preferredencoding = locale.getpreferredencoding()
  File "/usr/lib/python2.5/locale.py", line 514, in getpreferredencoding
    setlocale(LC_CTYPE, "")
  File "/usr/lib/python2.5/locale.py", line 478, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting
------------------------------------------------------------------------------------------------------------ Could you please help me ? Best regards, Saber From robert.kern at gmail.com Wed Sep 24 04:14:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Sep 2008 03:14:20 -0500 Subject: [SciPy-user] importing pylab?? In-Reply-To: <90ec20e0809240057h64f91941t6a246678a098be43@mail.gmail.com> References: <90ec20e0809240057h64f91941t6a246678a098be43@mail.gmail.com> Message-ID: <3d375d730809240114m65fc8195n9857ab4c6d590395@mail.gmail.com> On Wed, Sep 24, 2008 at 02:57, Saber Mbarek wrote: > Hi, > > I installed scipy a few days ago on my laptop (with os linux debian), but > while importing the pylab module from the (installed) python-matplotlib I > got the following error: You want the matplotlib list over here: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jr at sun.ac.za Wed Sep 24 10:05:01 2008 From: jr at sun.ac.za (Johann Rohwer) Date: Wed, 24 Sep 2008 16:05:01 +0200 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats Message-ID: <200809241605.01575.jr@sun.ac.za> Hi, It seems that the default implementation of std and var differs between numpy/scipy and scipy.stats, in that numpy/scipy is using the "biased" formulation (i.e. dividing by N) whereas scipy.stats is using the "unbiased" formulation (dividing by N-1) by default. Is this intentional (it could be potentially confusing...)? I realise that the "biased" version can be accessed in sp.stats with a kwarg, but what is the reason for two different implementations of the function(s)?

In [30]: a
Out[30]: array([ 1.,  2.,  3.,  2.,  3.,  1.])

In [31]: np.std(a)
Out[31]: 0.81649658092772603

In [32]: sp.std(a)
Out[32]: 0.81649658092772603

In [33]: sp.stats.std(a)
Out[33]: 0.89442719099991586

In [34]: sp.stats.std(a, bias=True)
Out[34]: 0.81649658092772603

Same for np.var vs scipy.stats.var Johann From pgmdevlist at gmail.com Wed Sep 24 12:05:02 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 24 Sep 2008 12:05:02 -0400 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs.
scipy.stats In-Reply-To: <200809241605.01575.jr@sun.ac.za> References: <200809241605.01575.jr@sun.ac.za> Message-ID: <200809241205.02637.pgmdevlist@gmail.com> Johann, You can also get the unbiased estimates with numpy by setting the optional parameter ddof=1.

>>> a = np.array([ 1.,  2.,  3.,  2.,  3.,  1.])
>>> np.std(a)
0.81649658092772603
>>> np.std(a, ddof=1)
0.89442719099991586

I think the default to biased estimates was kept for backward compatibility. From josef.pktd at gmail.com Wed Sep 24 12:19:37 2008 From: josef.pktd at gmail.com (joep) Date: Wed, 24 Sep 2008 09:19:37 -0700 (PDT) Subject: [SciPy-user] .typecode in scipy.stats Message-ID: <9e6ca619-3ec2-415e-ac18-5bca546a81f6@c58g2000hsc.googlegroups.com> ``.typecode`` seems to have been deprecated for a long time, but there is still one use left in \scipy\stats\_support.py (it is still in current trunk); see the traceback. Josef

>>> stats.itemfreq(freq)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
    stats.itemfreq(freq)
  File "c:\programs\python24\lib\site-packages\scipy\stats\stats.py", line 954, in itemfreq
    scores = _support.unique(a)
  File "c:\programs\python24\lib\site-packages\scipy\stats\_support.py", line 51, in unique
    if inarray.typecode() != 'O': # not an Object array
AttributeError: 'numpy.ndarray' object has no attribute 'typecode'

From oliphant at enthought.com Wed Sep 24 13:18:17 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 24 Sep 2008 12:18:17 -0500 Subject: [SciPy-user] .typecode in scipy.stats In-Reply-To: <9e6ca619-3ec2-415e-ac18-5bca546a81f6@c58g2000hsc.googlegroups.com> References: <9e6ca619-3ec2-415e-ac18-5bca546a81f6@c58g2000hsc.googlegroups.com> Message-ID: <48DA7659.4070804@enthought.com> joep wrote: > ``.typecode`` seems to have been deprecated for a long time, but there is > still one use left in \scipy\stats\_support.py (it is still in current trunk) > > Thanks for the bug report. Fixed in r4741. -Travis From aisaac at american.edu Wed Sep 24 15:29:29 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 24 Sep 2008 15:29:29 -0400 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats In-Reply-To: <200809241205.02637.pgmdevlist@gmail.com> References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> Message-ID: <48DA9519.1000203@american.edu> On 9/24/2008 12:05 PM Pierre GM apparently wrote: > I think the default to biased estimates was kept for backward compatibility. It is still a problem that scipy.var and scipy.stats.var behave differently (and even have a different signature). What is the way forward? An opening suggestion: unify the signature, let ``bias`` be a deprecated way to set ``ddof``, and warn users of scipy.stats.var (or std) if they do not set ``ddof``. Alan Isaac From bjracine at glosten.com Wed Sep 24 16:00:30 2008 From: bjracine at glosten.com (Benjamin J. Racine) Date: Wed, 24 Sep 2008 13:00:30 -0700 Subject: [SciPy-user] Quick Reference Card Message-ID: <8C2B20C4348091499673D86BF10AB6761ACA8097@clipper.glosten.local> I'd like to make a python/numpy/scipy equivalent of the MATLAB quick reference: http://www.math.umd.edu/~jeo/matlab_quickref.pdf I think that this could be an improvement upon: http://mathesaurus.sourceforge.net/matlab-python-xref.pdf and go a long way in putting newcomers on their feet. Any thoughts or advice? Ben R.
From gael.varoquaux at normalesup.org Wed Sep 24 16:13:14 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 24 Sep 2008 22:13:14 +0200 Subject: [SciPy-user] Quick Reference Card In-Reply-To: <8C2B20C4348091499673D86BF10AB6761ACA8097@clipper.glosten.local> References: <8C2B20C4348091499673D86BF10AB6761ACA8097@clipper.glosten.local> Message-ID: <20080924201314.GJ7690@phare.normalesup.org> On Wed, Sep 24, 2008 at 01:00:30PM -0700, Benjamin J. Racine wrote: > I'd like to make a python/numpy/scipy equivalent of the MATLAB quick > reference: > [1]http://www.math.umd.edu/~jeo/matlab_quickref.pdf > I think that this could be an improvement upon: > [2]http://mathesaurus.sourceforge.net/matlab-python-xref.pdf > and go a long way in putting newcomers on their feet. > Any thoughts or advice? Go for it! It would be great. And if you can link it from the wiki, people are likely to find it, and appreciate it. Gaël From peridot.faceted at gmail.com Wed Sep 24 16:35:45 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 24 Sep 2008 16:35:45 -0400 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats In-Reply-To: <48DA9519.1000203@american.edu> References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> Message-ID: 2008/9/24 Alan G Isaac : > On 9/24/2008 12:05 PM Pierre GM apparently wrote: >> I think the default to biased estimates was kept for backward compatibility. > > It is still a problem that scipy.var and scipy.stats.var > behave differently (and even have a different signature). > What is the way forward? > > An opening suggestion: > unify the signature, let ``bias`` be a deprecated way > to set ``ddof``, and warn users of scipy.stats.var (or std) > if they do not set ``ddof``. How about (possibly in addition to your suggestion) deprecating the re-exporting of numpy functions inside scipy? People often seem to ask about whether they should be using the scipy "version" or the numpy "version" of some function, when in fact it's just a re-exporting of the name. This still leaves the question of an inconsistency between scipy and numpy, for which I think your suggestion is a reasonable solution. Anne From stef.mientki at gmail.com Wed Sep 24 17:11:31 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Wed, 24 Sep 2008 23:11:31 +0200 Subject: [SciPy-user] Quick Reference Card In-Reply-To: <20080924201314.GJ7690@phare.normalesup.org> References: <8C2B20C4348091499673D86BF10AB6761ACA8097@clipper.glosten.local> <20080924201314.GJ7690@phare.normalesup.org> Message-ID: <48DAAD03.10906@gmail.com> >> Any thoughts or advice? >> > > Go for it! It would be great. And if you can link it from the wiki, > people are likely to find it, and appreciate it. > +1 (btw, I like the Python 2.4 reference card of Laurent Pointal a lot: http://www.digilife.be/quickreferences/QRC/Python%202.4%20Quick%20Reference%20Card.pdf). On the other hand, for a complex package like Scipy it would be more valuable if more than one person could contribute. cheers, Stef From matthew.brett at gmail.com Wed Sep 24 20:20:44 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 24 Sep 2008 17:20:44 -0700 Subject: [SciPy-user] pyloess in scipy?
In-Reply-To: <200809231303.06494.pgmdevlist@gmail.com> References: <7f03db650809230737s3217e5bfnac836e154a9986c8@mail.gmail.com> <200809231109.35988.pgmdevlist@gmail.com> <200809231303.06494.pgmdevlist@gmail.com> Message-ID: <1e2af89e0809241720j7522933asf7042aa69e0aa944@mail.gmail.com> Hi, On Tue, Sep 23, 2008 at 10:03 AM, Pierre GM wrote: > On Tuesday 23 September 2008 12:45:45 Jarrod Millman wrote: >> Here is pyloess: >> http://projects.scipy.org/scipy/scipy/browser/branches/sandbox/scipy/sandbox/pyloess > > Jarrod, > Thanks a lot. Would it be useful to clean up the package and make it a proper > scikits ? I'm attaching a slightly modified version of the pure python biopython implementation. However, I don't know whether it (or the original) works correctly. This is in the hope that someone who knows this stuff better than I do is interested. Biopython implementation here: http://cvs.biopython.org/cgi-bin/viewcvs/viewcvs.cgi/biopython/Bio/Statistics/lowess.py?cvsroot=biopython I was expecting this to work:

n = 100
x = np.arange(n)
y = np.random.normal(size=(n,))
yest = lowess(x, y)

but it returns a LinAlgError (as does the unmodified original). I see that R has a more complex algorithm, based on the original fortran. For reference, I have attached the file (lowess.doc) that they worked from, that used to be in the R source. Any thoughts? Matthew -------------- next part -------------- A non-text attachment was scrubbed... Name: lowess.doc Type: application/msword Size: 20671 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lowess.py Type: text/x-python Size: 3444 bytes Desc: not available URL: From cournape at gmail.com Thu Sep 25 00:35:18 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 25 Sep 2008 13:35:18 +0900 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats In-Reply-To: <48DA9519.1000203@american.edu> References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> Message-ID: <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com> On Thu, Sep 25, 2008 at 4:29 AM, Alan G Isaac wrote: > An opening suggestion: > unify the signature, let ``bias`` be a deprecated way > to set ``ddof``, and warn users of scipy.stats.var (or std) > if they do not set ``ddof``. The problem is that there is another discrepancy between numpy.var and scipy.stats.var: the axis argument is 0 in scipy.stats, None in numpy. So if we want to be really compatible, we can't just add a new argument and deprecate the old one; we have to deprecate the current signature, and change it later. I suggested some time ago to deprecate scipy.stats current signature for 0.7, and set the new one in 0.8. If that's fine with you, we could do that. I don't feel comfortable changing a function in scipy.stats (because it is not "my" module), but OTOH, nobody reacted last time we had this discussion, so maybe we should just do it. David From stefan at sun.ac.za Thu Sep 25 03:48:36 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Thu, 25 Sep 2008 09:48:36 +0200 Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs.
scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu>
Message-ID: <9457e7c80809250048r5bec3240x35d25c8c791ee728@mail.gmail.com>

2008/9/24 Anne Archibald : > How about (possibly in addition to your suggestion) deprecating the > re-exporting of numpy functions inside scipy? People often seem to ask > about whether they should be using the scipy "version" or the numpy > "version" of some function, when in fact it's just a re-exporting of > the name.

I'm all in favour of that suggestion.

Cheers Stéfan

From stefan at sun.ac.za Thu Sep 25 03:50:28 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Thu, 25 Sep 2008 09:50:28 +0200
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com>
Message-ID: <9457e7c80809250050u15ca8517o727f87936236da78@mail.gmail.com>

2008/9/25 David Cournapeau : > I suggested some time ago to deprecate the current scipy.stats signature > for 0.7, and set the new one in 0.8. If that's fine with you, we could > do that. I don't feel comfortable changing a function in scipy.stats > (because it is not "my" module), but OTOH, nobody reacted last time we > had this discussion, so maybe we should just do it.

Yes, do it. If someone complains, that's why we have SVN. The SciPy API is not (and cannot be) frozen -- it is still too immature, so let's get it up to scratch ASAP.

Cheers Stéfan

From pav at iki.fi Thu Sep 25 04:38:19 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 25 Sep 2008 08:38:19 +0000 (UTC)
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu>
Message-ID:

Wed, 24 Sep 2008 16:35:45 -0400, Anne Archibald wrote: > 2008/9/24 Alan G Isaac : >> On 9/24/2008 12:05 PM Pierre GM apparently wrote: >>> I think the default to biased estimates was kept for backward >>> compatibility. >> >> It is still a problem that scipy.var and scipy.stats.var behave >> differently (and even have a different signature). What is the way >> forward? >> >> An opening suggestion: >> unify the signature, let ``bias`` be a deprecated way to set >> ``ddof``, and warn users of scipy.stats.var (or std) if they do not set >> ``ddof``. > > How about (possibly in addition to your suggestion) deprecating the > re-exporting of numpy functions inside scipy? People often seem to ask > about whether they should be using the scipy "version" or the numpy > "version" of some function, when in fact it's just a re-exporting of the > name.

The opposite direction would be completely removing `var` from scipy.stats. Is there a reason why the function is reimplemented in scipy? There's probably a need, e.g., for a float -> complex casting sqrt(), but I don't clearly see why there are two variants of `var`.

Personally, I'd prefer not to have the same function reimplemented in two places, unless there is a clear need for it. I think there are more examples of duplication / signature mismatches in scipy vs. numpy that could be cleaned up a bit, at least in scipy.linalg.
-- Pauli Virtanen

From jr at sun.ac.za Thu Sep 25 07:39:10 2008
From: jr at sun.ac.za (Johann Rohwer)
Date: Thu, 25 Sep 2008 13:39:10 +0200
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za>
Message-ID: <200809251339.10790.jr@sun.ac.za>

On Thursday, 25 September 2008, Pauli Virtanen wrote: > The opposite direction would be completely removing `var` from > scipy.stats. Is there a reason why the function is reimplemented in > scipy? There's probably a need, e.g., for a float -> complex casting > sqrt(), but I don't clearly see why there are two variants of > `var`. > > Personally, I'd prefer not to have the same function reimplemented > in two places, unless there is a clear need for it. I think there > are more examples of duplication / signature mismatches in scipy > vs. numpy that could be cleaned up a bit, at least in scipy.linalg.

I agree that duplicate implementations of the same function are confusing. However, within numpy itself there is further inconsistency, in that np.var and np.std use the "ddof" kwarg, whereas np.cov uses the "bias" kwarg (as do sp.stats.std and sp.stats.var). Also, default normalisation in np.cov is by N-1 (unbiased), whereas in np.std and np.var the default is by N (biased).

Johann

From mscipy at googlemail.com Thu Sep 25 08:09:37 2008
From: mscipy at googlemail.com (Saber Mbarek)
Date: Thu, 25 Sep 2008 12:09:37 +0000
Subject: [SciPy-user] importing pylab??
In-Reply-To: <3d375d730809240114m65fc8195n9857ab4c6d590395@mail.gmail.com>
References: <90ec20e0809240057h64f91941t6a246678a098be43@mail.gmail.com> <3d375d730809240114m65fc8195n9857ab4c6d590395@mail.gmail.com>
Message-ID: <90ec20e0809250509j5d8e521p9235260f422b0afa@mail.gmail.com>

The import error of the pylab module was related to my locale settings and not to matplotlib. Thank you for your help.

Saber

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From aisaac at american.edu Thu Sep 25 09:12:56 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 25 Sep 2008 09:12:56 -0400
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <9457e7c80809250048r5bec3240x35d25c8c791ee728@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <9457e7c80809250048r5bec3240x35d25c8c791ee728@mail.gmail.com>
Message-ID: <48DB8E58.2050304@american.edu>

> 2008/9/24 Anne Archibald : >> How about (possibly in addition to your suggestion) deprecating the >> re-exporting of numpy functions inside scipy? People often seem to ask >> about whether they should be using the scipy "version" or the numpy >> "version" of some function, when in fact it's just a re-exporting of >> the name.

On 9/25/2008 3:48 AM Stéfan van der Walt apparently wrote: > I'm all in favour of that suggestion.

I am not taking a strong position other than to say user convenience should matter, but the following really seems adequate to me:

>>> help(sp.var) Help on function var in module numpy.core.fromnumeric:

Cheers, Alan Isaac
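[Editor's note: to make the ddof/bias discrepancy discussed in this thread concrete, here is a minimal sketch using numpy's keywords; the sample data and printed values are illustrative, and the old scipy.stats signatures are not reproduced.]

>>> import numpy as np
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> np.var(x)            # default ddof=0: normalize by N (the biased estimate)
1.25
>>> np.var(x, ddof=1)    # ddof=1: normalize by N - 1 (the unbiased estimate)
1.6666666666666667
>>> float(np.cov(x))     # np.cov instead defaults to N - 1, via its "bias" kwarg
1.6666666666666667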
From millman at berkeley.edu Thu Sep 25 15:51:50 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 25 Sep 2008 12:51:50 -0700
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <9457e7c80809250050u15ca8517o727f87936236da78@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com> <9457e7c80809250050u15ca8517o727f87936236da78@mail.gmail.com>
Message-ID:

On Thu, Sep 25, 2008 at 12:50 AM, Stéfan van der Walt wrote: > 2008/9/25 David Cournapeau : >> I suggested some time ago to deprecate the current scipy.stats signature >> for 0.7, and set the new one in 0.8. If that's fine with you, we could >> do that. I don't feel comfortable changing a function in scipy.stats >> (because it is not "my" module), but OTOH, nobody reacted last time we >> had this discussion, so maybe we should just do it. > > Yes, do it. If someone complains, that's why we have SVN. The SciPy > API is not (and cannot be) frozen -- it is still too immature, so > let's get it up to scratch ASAP.

I also don't think it is absolutely necessary to have a deprecation release. The code is officially labeled beta and is currently not regularly released.

-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/

From peridot.faceted at gmail.com Thu Sep 25 18:30:06 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 25 Sep 2008 18:30:06 -0400
Subject: [SciPy-user] Documentation
Message-ID:

Hi,

I've just written a description of how scipy's smoothing splines work (because a friend is publishing a paper that used them). I had to go look up the original research papers, because there isn't really a good description in the python documentation, or even in the FITPACK source. It seems like it would be valuable to have such a description, but I'm not sure where it should go: duplicated in every one of scipy's spline-fitting functions? at the module level? in some automatic docstring transmogrifier that puts it in all relevant docstrings without requiring source code duplication?

Anne

""" The splines constructed by this code are described in a number of papers (see below). Summarizing based on Dierckx 1982, the goal is to find a spline for which the "smoothness" (sum of squares of jumps in the highest derivative at the joins) is as small as possible given that the (weighted) sum of squares of residuals is S. This can be achieved by choosing one knot per data point, but in the interests of efficiency and smoothness, the algorithm attempts to select the smallest set of knots that can achieve this balance. For any given set of knots, by adjusting the relative importance of smoothness and good fitting the algorithm can go from the least-squares polynomial of degree K (totally smooth) to the least-squares spline (big jumps at each knot). The code tries to find the smallest set of knots for which the least-squares spline gives you residuals better than S, then adjusts the relative importance of smoothing and quality-of-fit to make the curve smoother until the residuals are exactly S. It finds this best set of knots by starting with the bare minimum of knots - a one-segment spline - and checking whether the least-squares spline (with no smoothness constraints) fits the data as well as S. If not, then it subdivides the interval by introducing a knot. The subdivided spline is again checked to see whether it can fit the data with sum of squares of residuals no worse than S. If not, the worst-fitting subintervals are subdivided again.
This is repeated until there are enough knots to make it possible to fit with residuals no worse than S. When this is finally achieved - if necessary by having a knot at every data point so the spline interpolates them - the spline is adjusted to make it smoother but the fit worse until a spline is obtained with sum of squares of residuals exactly S. The procedure does not attempt to produce the strictly minimal set of knots, but it does stop introducing new knots as soon as possible.

The process is described in detail in:

Dierckx P.: An Algorithm for Smoothing, Differentiation and Integration of Experimental Data Using Spline Functions, J. Comp. Appl. Maths 1 (1975) 165-184.
Dierckx P.: A Fast Algorithm for Smoothing Data on a Rectangular Grid While Using Spline Functions, SIAM J. Numer. Anal. 19 (1982) 1286-1304.
Dierckx P.: An Improved Algorithm for Curve Fitting with Spline Functions, Report TW54, Dept. Computer Science, K.U. Leuven, 1981.
Dierckx P.: Curve and Surface Fitting with Splines, Monographs on Numerical Analysis, Oxford University Press, 1993.

The code in scipy is a python wrapper of the FITPACK routines written by: P. Dierckx, Dept. Computer Science, K.U. Leuven, Celestijnenlaan 200a, B-3001 Heverlee, Belgium. e-mail: Paul.Dierckx at cs.kuleuven.ac.be """
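[Editor's note: a minimal sketch of how the smoothing factor S described above is exposed through scipy.interpolate; the data, noise level and s values here are illustrative, not taken from the thread.]

import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.normal(size=50)

# s bounds the weighted sum of squared residuals; FITPACK then picks
# as few knots as it can subject to that bound, as Anne describes.
tck = splrep(x, y, k=3, s=0.5)
y_smooth = splev(x, tck)

# Shrinking s forces more knots (a closer fit); s=0 interpolates the data.
print len(splrep(x, y, s=0.5)[0]), len(splrep(x, y, s=0.0)[0])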
From robert.kern at gmail.com Thu Sep 25 18:57:34 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Sep 2008 17:57:34 -0500
Subject: [SciPy-user] Documentation
In-Reply-To:
References:
Message-ID: <3d375d730809251557y6ab4aef1vf7bea52db17665cf@mail.gmail.com>

On Thu, Sep 25, 2008 at 17:30, Anne Archibald wrote: > Hi, > > I've just written a description of how scipy's smoothing splines work > (because a friend is publishing a paper that used them). I had to go > look up the original research papers, because there isn't really a > good description in the python documentation, or even in the FITPACK > source. It seems like it would be valuable to have such a description, > but I'm not sure where it should go: duplicated in every one of > scipy's spline-fitting functions? at the module level? in some > automatic docstring transmogrifier that puts it in all relevant > docstrings without requiring source code duplication?

We could probably put it in the module docstring and then say "See the docstring for scipy.interpolate.fitpack2 for details about the algorithm." in the individual functions.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From millman at berkeley.edu Thu Sep 25 19:01:25 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 25 Sep 2008 16:01:25 -0700
Subject: [SciPy-user] Documentation
In-Reply-To: <3d375d730809251557y6ab4aef1vf7bea52db17665cf@mail.gmail.com>
References: <3d375d730809251557y6ab4aef1vf7bea52db17665cf@mail.gmail.com>
Message-ID:

On Thu, Sep 25, 2008 at 3:57 PM, Robert Kern wrote: > We could probably put it in the module docstring and then say "See the > docstring for scipy.interpolate.fitpack2 for details about the > algorithm." in the individual functions.

+1

-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/

From craigmaloney at cmu.edu Thu Sep 25 21:46:47 2008
From: craigmaloney at cmu.edu (Craig Maloney)
Date: Thu, 25 Sep 2008 21:46:47 -0400
Subject: [SciPy-user] Tester import errors
Message-ID:

I'm also getting an annoying:

ImportError: cannot import name Tester

which can be defeated by going into the installed .py files and killing the "import Tester" line and subsequent tests.

----------------------------------------------- Contact info: http://www.ce.cmu.edu/~maloney2/

From robert.kern at gmail.com Thu Sep 25 22:01:18 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Sep 2008 21:01:18 -0500
Subject: [SciPy-user] Tester import errors
In-Reply-To:
References:
Message-ID: <3d375d730809251901s5cd25021vb64a351a99a5b560@mail.gmail.com>

On Thu, Sep 25, 2008 at 20:46, Craig Maloney wrote: > I'm also getting an annoying: > > ImportError: cannot import name Tester > > which can be defeated by going into the installed .py files and > killing the "import Tester" line and subsequent tests.

Can you provide more context? What versions of numpy and scipy do you have? What tests are you running?

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From craigmaloney at cmu.edu Thu Sep 25 21:44:19 2008
From: craigmaloney at cmu.edu (Craig Maloney)
Date: Thu, 25 Sep 2008 21:44:19 -0400
Subject: [SciPy-user] can't import scipy.sparse.linalg
Message-ID: <6705CF56-540D-419F-9152-DB07B7422893@cmu.edu>

Hi all.

I just built the latest svn version and installed. I can import, e.g., scipy.sparse.sparsetools, but cannot import scipy.sparse.linalg. No obvious error messages during the build. Anyone else have this problem? Looks like there was some reasonably heavy developer activity in this directory last week. (I'm now trying again with svn revision 4736... before last week's changes)

Thanks, Craig

----------------------------------------------- Contact info: http://www.ce.cmu.edu/~maloney2/

From wnbell at gmail.com Thu Sep 25 22:46:00 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 25 Sep 2008 22:46:00 -0400
Subject: [SciPy-user] can't import scipy.sparse.linalg
In-Reply-To: <6705CF56-540D-419F-9152-DB07B7422893@cmu.edu>
References: <6705CF56-540D-419F-9152-DB07B7422893@cmu.edu>
Message-ID:

On Thu, Sep 25, 2008 at 9:44 PM, Craig Maloney wrote: > Hi all. > > I just built the latest svn version and installed. > > I can import, e.g., scipy.sparse.sparsetools, but cannot import > scipy.sparse.linalg. No obvious error messages during the build. > > Anyone else have this problem? > > Looks like there was some reasonably heavy developer activity in this > directory last week. > > (I'm now trying again with svn revision 4736... before last week's > changes) >

Hi Craig, I just tested r4741 and everything seems to work. Can you delete your build directory and site-packages/scipy directory and try again?

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/
From david at ar.media.kyoto-u.ac.jp Thu Sep 25 23:53:24 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 26 Sep 2008 12:53:24 +0900
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com> <9457e7c80809250050u15ca8517o727f87936236da78@mail.gmail.com>
Message-ID: <48DC5CB4.30202@ar.media.kyoto-u.ac.jp>

Jarrod Millman wrote: > I also don't think it is absolutely necessary to have a deprecation > release. The code is officially labeled beta and is currently not > regularly released. >

Yes, but in that case, it would be pretty bad to be caught by it. Deprecating does not cost us anything (well, it cost me a couple of minutes), whereas skipping it costs users a lot. I *hated* it when numpy changed its axis argument before the 1.0 release; it took me hours to track it down everywhere in my code (a missing argument is different: you know that something is wrong right away). I would prefer not to inflict this on other people. Not in my name, at least :)

cheers, David

From david at ar.media.kyoto-u.ac.jp Thu Sep 25 23:55:48 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 26 Sep 2008 12:55:48 +0900
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809242135w7174b0f3l81cf81bddbd3ed89@mail.gmail.com> <9457e7c80809250050u15ca8517o727f87936236da78@mail.gmail.com>
Message-ID: <48DC5D44.6000008@ar.media.kyoto-u.ac.jp>

Jarrod Millman wrote: > On Thu, Sep 25, 2008 at 12:50 AM, Stéfan van der Walt wrote: >> 2008/9/25 David Cournapeau : >>> I suggested some time ago to deprecate the current scipy.stats signature >>> for 0.7, and set the new one in 0.8. If that's fine with you, we could >>> do that. I don't feel comfortable changing a function in scipy.stats >>> (because it is not "my" module), but OTOH, nobody reacted last time we >>> had this discussion, so maybe we should just do it. >> Yes, do it. If someone complains, that's why we have SVN. The SciPy >> API is not (and cannot be) frozen -- it is still too immature, so >> let's get it up to scratch ASAP.

I added a DeprecationWarning pointing to the correct numpy function (with the alternative arguments) for the following: - mean - median - std - var - cov - corrcoef

AFAICS, all functionality of any of those is available in numpy (contrary to what the comments say; I guess they are vastly out of date).

cheers, David

From cournape at gmail.com Fri Sep 26 00:19:00 2008
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 26 Sep 2008 13:19:00 +0900
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu>
Message-ID: <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com>

On Thu, Sep 25, 2008 at 5:35 AM, Anne Archibald wrote: > How about (possibly in addition to your suggestion) deprecating the > re-exporting of numpy functions inside scipy? People often seem to ask > about whether they should be using the scipy "version" or the numpy > "version" of some function, when in fact it's just a re-exporting of > the name.

Yes, it would be nice. What do other people think about deprecating all the numpy re-exports in scipy? It would be nice to do for 0.7 (e.g. deprecated in 0.7, removed in 0.8).

cheers, David
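[Editor's note: a minimal sketch of the deprecation pattern David describes above -- this is not the actual scipy.stats code; the signature shown and the bias-to-ddof mapping are illustrative assumptions, not the historical defaults.]

import warnings
import numpy as np

def var(a, axis=0, bias=False):
    # Point users at the numpy replacement before delegating to it.
    warnings.warn("scipy.stats.var is deprecated; use numpy.var instead "
                  "(note numpy's axis=None default and its ddof keyword)",
                  DeprecationWarning, stacklevel=2)
    # Assumed mapping: bias=False -> normalize by N - 1 (ddof=1),
    # bias=True -> normalize by N (ddof=0).
    return np.var(a, axis=axis, ddof=0 if bias else 1)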
From jh at physics.ucf.edu Fri Sep 26 01:15:54 2008
From: jh at physics.ucf.edu (jh at physics.ucf.edu)
Date: Fri, 26 Sep 2008 01:15:54 -0400
Subject: [SciPy-user] Documentation (Anne Archibald)
In-Reply-To: (scipy-user-request@scipy.org)
References:
Message-ID:

Anne writes: > It seems like it would be valuable to have such a description, > but I'm not sure where it should go: duplicated in every one of > scipy's spline-fitting functions? at the module level? in some > automatic docstring transmogrifier that puts it in all relevant > docstrings without requiring source code duplication?

First, thanks for researching and writing this!

This is going to become a general issue, particularly for scipy, where there may be need for many of these in certain modules. What we choose for this one should scale to having many of these per sub-module in some cases, and just one or none in others.

I would think that the first one or two per sub-module could go in the sub-module's docstring as you suggest, but after that we should separate all of them into either a .doc sub-sub-module for that sub-module (as now exists for numpy itself) or a function called doc_splines (in this case) that contains only the docstring and "pass". Either way, those pages should be referenced in the sub-module docstring so people know they exist. If we do the doc-function, which is admittedly a hack but which may be more convenient than having to do yet another import, the one-line description should say something appropriate to identify it in lists of functions, something like "documentation for spline implementation".

If we could make np.info() import anything it needs to in order to access the named docstring, it would make life easier and the .doc sub-sub-module more attractive, since you wouldn't have to import it. In this case, np.info(sp.foo.doc.spline) would import sp.foo.doc automatically.

Whatever is decided, a description should go in the doc standards.

--jh--

From s.mientki at ru.nl Fri Sep 26 04:09:08 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Fri, 26 Sep 2008 10:09:08 +0200
Subject: [SciPy-user] Documentation
In-Reply-To:
References:
Message-ID: <48DC98A4.6030805@ru.nl>

Anne Archibald wrote: > Hi, > > I've just written a description of how scipy's smoothing splines work > (because a friend is publishing a paper that used them). I had to go > look up the original research papers, because there isn't really a > good description in the python documentation, or even in the FITPACK > source. It seems like it would be valuable to have such a description, > but I'm not sure where it should go: duplicated in every one of > scipy's spline-fitting functions? at the module level? in some > automatic docstring transmogrifier that puts it in all relevant > docstrings without requiring source code duplication? > > Anne >

Somewhat more documentation would be welcome. But should we limit ourselves to these doc-strings, which will undoubtedly be a huge amount of work, even to get a small part of Scipy documented? Coming from MatLab, I always found the help (if you know the right keywords) very good: it not only described the use of a function, but also gave a lot of information about the background.
As I'm in the middle of creating a MatLab-like environment in Python, the following ideas are playing through my head:
- the help function shows of course the docstring
- the user's personal notes about this function (probably stored in html or simple-rtf)
- if the help for this function is asked for the first time, the program will search the web. The web search is done on a number of sites defined by the user, and at each first search the user can select a number of sites from this list. For the current subject and each selected website he can also specify whether the search has to be performed each time. An example of these sites: = wikipedia = some Scipy site (Enthought?) = google = MathWorks / Wolfram (or is this unethical?) The found links are opened in a tabbed browser and also stored in the user's personal notes.
- the found pages can also be automatically copied to the user's own domain (how many links still work after a year?)

So users slowly collect all the information they are interested in, and can organize the information in their own way. If I dream further, the information collected by each user is automatically gathered somewhere on the web, organized by its popularity, and thereby forming a good starting point for each newbie.

cheers, Stef

The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629.

From williams at astro.ox.ac.uk Fri Sep 26 18:55:35 2008
From: williams at astro.ox.ac.uk (Michael Williams)
Date: Fri, 26 Sep 2008 23:55:35 +0100
Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object'
In-Reply-To: <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com>
References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com>
Message-ID: <20080926225534.GA6402@astro.ox.ac.uk>

On Tue, Sep 23, 2008 at 01:53:21PM -0500, Robert Kern wrote: > Unfortunately, now it looks like you are dealing with a bug in > gfortran. Can you downgrade to something earlier than 4.3.0?

With difficulty (hence the delayed reply). I'm trying to install without admin privileges and I'm not experienced with gcc installation.

That being said, I seem to have succeeded. I installed gfortran 4.4.0 from the HPC on OS X page into ~/usr/local (rather than /usr/local), added it to my shell's path and ran setup.py with the usual gnu95 options. It seemed to build and install fine. (Does anyone less ignorant than I am about compilers and libraries know if I need to keep the contents of ~/usr/local around to use scipy?)

I can import cleanly (DeprecationWarning about NumpyTest notwithstanding). scipy.test runs 1848 tests with one failure (check_dot). Last time I installed scipy I think most people were not too worried by errors in scipy.test() on OS X. Is that still the case?
Thanks again, -- Mike

From robert.kern at gmail.com Fri Sep 26 19:14:21 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 26 Sep 2008 18:14:21 -0500
Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object'
In-Reply-To: <20080926225534.GA6402@astro.ox.ac.uk>
References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com> <20080926225534.GA6402@astro.ox.ac.uk>
Message-ID: <3d375d730809261614j7696ced5je18889036ea67fa1@mail.gmail.com>

On Fri, Sep 26, 2008 at 17:55, Michael Williams wrote: > On Tue, Sep 23, 2008 at 01:53:21PM -0500, Robert Kern wrote: >> Unfortunately, now it looks like you are dealing with a bug in >> gfortran. Can you downgrade to something earlier than 4.3.0? > > With difficulty (hence the delayed reply). I'm trying to install without > admin privileges and I'm not experienced with gcc installation. > > That being said, I seem to have succeeded. I installed gfortran 4.4.0 > from the HPC on OS X page into ~/usr/local (rather than /usr/local), > added it to my shell's path and ran setup.py with the usual gnu95 > options. It seemed to build and install fine. (Does anyone less ignorant > than I am about compilers and libraries know if I need to keep the > contents of ~/usr/local around to use scipy?)

Yes. gfortran will link against the shared libraries there unless you do a fairly complicated dance to make it link against the static libraries.

I very much recommend against using the binaries from HPC. They release binaries for buggy bleeding-edge versions of gfortran, and don't keep previous versions around. I have had *much* more luck with the binaries over here:

http://r.research.att.com/tools/

Since you don't have admin privileges, you need to do a little command line work instead of just being able to use the installer. Mount gfortran-4.2.3.dmg. At the terminal:

# Assuming you want it in ~/usr/local/
$ cd ~/
$ pax -zr < /Volumes/GNU\ Fortran\ 4.2.3/gfortran.pkg/Contents/Archive.pax.gz

Now you should have everything unpacked into ~/usr/local/. But if everything is currently working for you, I would probably leave well enough alone.

> I can import cleanly (DeprecationWarning about NumpyTest > notwithstanding). scipy.test runs 1848 tests with one failure > (check_dot). Last time I installed scipy I think most people were not too worried by errors in scipy.test() on OS X. Is that still the case?

You're probably not using that function, so it's fine in that sense. But it is a bona fide bug in 0.6.0 and has been fixed on the trunk.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From williams at astro.ox.ac.uk Fri Sep 26 20:02:37 2008
From: williams at astro.ox.ac.uk (Michael Williams)
Date: Sat, 27 Sep 2008 01:02:37 +0100
Subject: [SciPy-user] Installation from source on OS X: 'NoneType' object has no attribute 'link_shared_object'
In-Reply-To: <3d375d730809261614j7696ced5je18889036ea67fa1@mail.gmail.com>
References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com> <20080926225534.GA6402@astro.ox.ac.uk> <3d375d730809261614j7696ced5je18889036ea67fa1@mail.gmail.com>
Message-ID: <5B3D66A5-B141-4DE7-99B3-A53F5EEE00E8@astro.ox.ac.uk>

On 27 Sep 2008, at 00:14, Robert Kern wrote: > I very much recommend against using the binaries from HPC. They > release binaries for buggy bleeding-edge versions of gfortran, and > don't keep previous versions around.

Thanks for the heads-up. I'm not experienced with Fortran.

> I have had *much* more luck with > the binaries over here: > > http://r.research.att.com/tools/ > > Since you don't have admin privileges, you need to do a little command > line work instead of just being able to use the installer. Mount > gfortran-4.2.3.dmg. At the terminal:

That procedure to install the compiler worked fine, and scipy itself seems to have built cleanly using it. Thanks very much!

-- Mike

From matthias.blaicher at googlemail.com Sat Sep 27 10:58:51 2008
From: matthias.blaicher at googlemail.com (Matthias Blaicher)
Date: Sat, 27 Sep 2008 16:58:51 +0200
Subject: [SciPy-user] scipy/numpy on CentOS 5/ Dependency problem
Message-ID: <261cc8ff0809270758p30f1f259r2481497402539c2d@mail.gmail.com>

Hello,

I want to install SciPy and numpy on a CentOS 5 system. It contains a fresh install with up-to-date packages. I use the "official" openSUSE Build Service repos by David Cournapeau.

It fails with a conflict between refblas3 and blas.
[root at localhost yum.repos.d]# yum install python-numpy python-scipy Loading "fastestmirror" plugin Loading mirror speeds from cached hostfile * home_ashigabou: download.opensuse.org * extras: mirrors.tummy.com * updates: dds.gina.alaska.edu * base: mirror.hmc.edu * addons: mirrors.tummy.com extras 100% |=========================| 1.1 kB 00:00 updates 100% |=========================| 951 B 00:00 base 100% |=========================| 1.1 kB 00:00 addons 100% |=========================| 951 B 00:00 Setting up Install Process Parsing package install arguments Resolving Dependencies --> Running transaction check ---> Package python-scipy.i386 0:0.6.0-2.1 set to be updated --> Processing Dependency: libblas.so.3 for package: python-scipy --> Processing Dependency: libblas.so.3 for package: python-scipy --> Processing Dependency: liblapack.so.3 for package: python-scipy --> Processing Dependency: libgfortran.so.1 for package: python-scipy --> Processing Dependency: liblapack.so.3 for package: python-scipy ---> Package python-numpy.i386 0:1.2.0-1.1 set to be updated --> Processing Dependency: gcc-gfortran for package: python-numpy --> Processing Dependency: refblas3 for package: python-numpy --> Processing Dependency: lapack3 < 3.1 for package: python-numpy --> Running transaction check ---> Package gcc-gfortran.i386 0:4.1.2-42.el5 set to be updated --> Processing Dependency: gcc = 4.1.2-42.el5 for package: gcc-gfortran ---> Package blas.i386 0:3.0-37.el5 set to be updated ---> Package lapack3.i386 0:3.0-19.1 set to be updated ---> Package lapack.i386 0:3.0-37.el5 set to be updated ---> Package refblas3.i386 0:3.0-11.1 set to be updated ---> Package libgfortran.i386 0:4.1.2-42.el5 set to be updated --> Running transaction check ---> Package gcc.i386 0:4.1.2-42.el5 set to be updated --> Processing Dependency: libgomp.so.1 for package: gcc --> Processing Dependency: glibc-devel >= 2.2.90-12 for package: gcc --> Processing Dependency: libgomp = 4.1.2-42.el5 for package: gcc --> Running transaction check ---> Package glibc-devel.i386 0:2.5-24 set to be updated --> Processing Dependency: glibc-headers for package: glibc-devel --> Processing Dependency: glibc-headers = 2.5-24 for package: glibc-devel ---> Package libgomp.i386 0:4.1.2-42.el5 set to be updated --> Running transaction check ---> Package glibc-headers.i386 0:2.5-24 set to be updated --> Processing Dependency: kernel-headers for package: glibc-headers --> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers --> Running transaction check ---> Package kernel-headers.i386 0:2.6.18-92.1.13.el5 set to be updated --> Processing Conflict: refblas3 conflicts blas --> Finished Dependency Resolution Error: refblas3 conflicts with blas

Is there anything I'm doing wrong here, as I don't want to compile all dependencies by myself?

Sincerely,

Matthias Blaicher

From matthieu.brucher at gmail.com Sat Sep 27 11:03:55 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 27 Sep 2008 17:03:55 +0200
Subject: [SciPy-user] scipy/numpy on CentOS 5/ Dependency problem
In-Reply-To: <261cc8ff0809270758p30f1f259r2481497402539c2d@mail.gmail.com>
References: <261cc8ff0809270758p30f1f259r2481497402539c2d@mail.gmail.com>
Message-ID:

Hi,

Start by installing refblas3, refblas3-dev, lapack3 and lapack3-dev. Last time I had this issue, installing the dependencies by hand first solved it.

Matthieu

2008/9/27 Matthias Blaicher : > Hello, > > I want to install SciPy and numpy on a CentOS 5 system.
It contains a > fresh install with up-to-date packages. I use the "official" Suse > Build repos by DavidCournapeau. > > It fails with a conflict between refblas3 and blas. > > [root at localhost yum.repos.d]# yum install python-numpy python-scipy > Loading "fastestmirror" plugin > Loading mirror speeds from cached hostfile > * home_ashigabou: download.opensuse.org > * extras: mirrors.tummy.com > * updates: dds.gina.alaska.edu > * base: mirror.hmc.edu > * addons: mirrors.tummy.com > extras 100% |=========================| 1.1 kB 00:00 > updates 100% |=========================| 951 B 00:00 > base 100% |=========================| 1.1 kB 00:00 > addons 100% |=========================| 951 B 00:00 > Setting up Install Process > Parsing package install arguments > Resolving Dependencies > --> Running transaction check > ---> Package python-scipy.i386 0:0.6.0-2.1 set to be updated > --> Processing Dependency: libblas.so.3 for package: python-scipy > --> Processing Dependency: libblas.so.3 for package: python-scipy > --> Processing Dependency: liblapack.so.3 for package: python-scipy > --> Processing Dependency: libgfortran.so.1 for package: python-scipy > --> Processing Dependency: liblapack.so.3 for package: python-scipy > ---> Package python-numpy.i386 0:1.2.0-1.1 set to be updated > --> Processing Dependency: gcc-gfortran for package: python-numpy > --> Processing Dependency: refblas3 for package: python-numpy > --> Processing Dependency: lapack3 < 3.1 for package: python-numpy > --> Running transaction check > ---> Package gcc-gfortran.i386 0:4.1.2-42.el5 set to be updated > --> Processing Dependency: gcc = 4.1.2-42.el5 for package: gcc-gfortran > ---> Package blas.i386 0:3.0-37.el5 set to be updated > ---> Package lapack3.i386 0:3.0-19.1 set to be updated > ---> Package lapack.i386 0:3.0-37.el5 set to be updated > ---> Package refblas3.i386 0:3.0-11.1 set to be updated > ---> Package libgfortran.i386 0:4.1.2-42.el5 set to be updated > --> Running transaction check > ---> Package gcc.i386 0:4.1.2-42.el5 set to be updated > --> Processing Dependency: libgomp.so.1 for package: gcc > --> Processing Dependency: glibc-devel >= 2.2.90-12 for package: gcc > --> Processing Dependency: libgomp = 4.1.2-42.el5 for package: gcc > --> Running transaction check > ---> Package glibc-devel.i386 0:2.5-24 set to be updated > --> Processing Dependency: glibc-headers for package: glibc-devel > --> Processing Dependency: glibc-headers = 2.5-24 for package: glibc-devel > ---> Package libgomp.i386 0:4.1.2-42.el5 set to be updated > --> Running transaction check > ---> Package glibc-headers.i386 0:2.5-24 set to be updated > --> Processing Dependency: kernel-headers for package: glibc-headers > --> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers > --> Running transaction check > ---> Package kernel-headers.i386 0:2.6.18-92.1.13.el5 set to be updated > --> Processing Conflict: refblas3 conflicts blas > --> Finished Dependency Resolution > Error: refblas3 conflicts with blas > > Is there anything I'm doing wrong here, as I don't want to compile all > dependencies by myself.. 
> > Sincerly, > > Matthias Blaicher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Information System Engineer Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From contact at pythonxy.com Sat Sep 27 12:09:20 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 27 Sep 2008 18:09:20 +0200 Subject: [SciPy-user] [ Python(x,y) ] Next release Message-ID: <48DE5AB0.7000106@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.1.0 has introduced a new plugin system allowing easy update (http://www.pythonxy.com/standard.php) and fully customizable installation: http://www.pythonxy.com/_tools/img.php?lang=1&img=/_images/Windows%20Installer/03b.png http://www.pythonxy.com/screenshots.php?dir=/Windows%20Installer Third-party plugins are now available (http://www.pythonxy.com/additional.php) - please do not hesitate to contribute and create your own thanks to the SDK. From now on, releases will be scheduled monthly. However, next release will somehow be available continously because future plugins will be available separately. For example, NumPy 1.2.0, VTK 5.2.0 or ETS 3.0.2 are already available at http://www.pythonxy.com/additional.php. Regards, Pierre Raybaut From contact at pythonxy.com Sun Sep 28 14:11:23 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 28 Sep 2008 20:11:23 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.1 Message-ID: <48DFC8CB.8080009@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.1.1 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, Light Edition, Custom Edition and Update) Changes history Version 2.1.1 (09-28-2008) * Added: o Sphinx 0.4.2 o nose 0.10.3 * Updated: o NumPy 1.2.0 o VTK 5.2.0 o Enthought Tool Suite 3.0.2 o GDCM 2.0.9 o setuptools 0.6c9 o xy 1.0.6.1 o IPython 0.9.1.1 o Console 2.0.140.4 * Corrected: o Issues 18, 19, 20 (see 'Issues' section on Python(x,y) Google Code website) Regards, Pierre Raybaut From gary.pajer at gmail.com Sun Sep 28 15:05:12 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Sun, 28 Sep 2008 15:05:12 -0400 Subject: [SciPy-user] NumpyTest warning after upgrade Message-ID: <88fe22a0809281205p11155ff8l2204ffa5d63011ff@mail.gmail.com> I've upgraded numpy and scipy after quite some time. I think I had numpy 1.0.4 and scipy 5.2, but I'm not 100% sure. I'm on winxp, python 2.5.1, and I installed using the windows exe installers on scipy.org I've poked around the scipy-users and numpy-users lists, and learned what I already know: NumpyTest is deprecated. How can I fix this? regards, Gary --------------------------------------------------------------------------------------------------------- C:\Documents and Settings\Gary\My Documents>ipython Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] Type "copyright", "credits" or "license" for more information. IPython 0.9.1 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. 
help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more.

In [1]: import numpy

In [2]: import scipy c:\python25\lib\site-packages\scipy\misc\__init__.py:25: DeprecationWarning: NumpyTest will be removed in the next release; please update your code to use nose or unittest test = NumpyTest().test

In [3]: numpy.__version__ Out[3]: '1.2.0'

In [4]: scipy.__version__ Out[4]: '0.6.0'

-------------------------------------------------------------------------------

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From alan.mcintyre at gmail.com Sun Sep 28 15:36:50 2008
From: alan.mcintyre at gmail.com (Alan McIntyre)
Date: Sun, 28 Sep 2008 12:36:50 -0700
Subject: [SciPy-user] NumpyTest warning after upgrade
In-Reply-To: <88fe22a0809281205p11155ff8l2204ffa5d63011ff@mail.gmail.com>
References: <88fe22a0809281205p11155ff8l2204ffa5d63011ff@mail.gmail.com>
Message-ID: <1d36917a0809281236r5042b66fxddee186e7ec405d2@mail.gmail.com>

On Sun, Sep 28, 2008 at 12:05 PM, Gary Pajer wrote: > I've upgraded numpy and scipy after quite some time. I think I had numpy > 1.0.4 and scipy 5.2, but I'm not 100% sure. > I'm on winxp, python 2.5.1, and I installed using the windows exe installers > on scipy.org > > I've poked around the scipy-users and numpy-users lists, and learned what I > already know: NumpyTest is deprecated. How can I fix this? > > regards, > Gary

Hi Gary,

Some changes were made to the testing framework in NumPy 1.2, but SciPy 0.6 doesn't include the changes needed to work with NumPy 1.2 without raising some deprecation warnings (I think my understanding of the versions here is correct; somebody please correct me if I'm wrong). As far as I know, SciPy 0.6 should still work, but it will just complain a bit. :) I'm not sure what the timeline is for releasing the next SciPy.

In the meantime, if you just want to patch up your local install, you should be able to change code that looks like this in scipy/misc/__init__.py (and probably several other places):

from numpy.testing import NumpyTest
test = NumpyTest().test

to this:

from numpy.testing import Tester
test = Tester().test

I suppose you could switch to NumPy 1.1 or try using the SciPy from svn if you'd like to avoid making local tweaks.

Hope this helps, Alan

From gary.pajer at gmail.com Sun Sep 28 18:01:34 2008
From: gary.pajer at gmail.com (Gary Pajer)
Date: Sun, 28 Sep 2008 18:01:34 -0400
Subject: [SciPy-user] NumpyTest warning after upgrade
In-Reply-To: <1d36917a0809281236r5042b66fxddee186e7ec405d2@mail.gmail.com>
References: <88fe22a0809281205p11155ff8l2204ffa5d63011ff@mail.gmail.com> <1d36917a0809281236r5042b66fxddee186e7ec405d2@mail.gmail.com>
Message-ID: <88fe22a0809281501mf6f5891s604e9e21f6d637ca@mail.gmail.com>

On Sun, Sep 28, 2008 at 3:36 PM, Alan McIntyre wrote: > On Sun, Sep 28, 2008 at 12:05 PM, Gary Pajer wrote: > > I've upgraded numpy and scipy after quite some time. I think I had numpy > > 1.0.4 and scipy 5.2, but I'm not 100% sure. > > I'm on winxp, python 2.5.1, and I installed using the windows exe installers > > on scipy.org > > > > I've poked around the scipy-users and numpy-users lists, and learned what I > > already know: NumpyTest is deprecated. How can I fix this?
> > regards, > > Gary > > Hi Gary, > > Some changes were made to the testing framework in NumPy 1.2, but > SciPy 0.6 doesn't include the changes needed to work with NumPy 1.2 > without raising some deprecation warnings (I think my > understanding of the versions here is correct; somebody please correct > me if I'm wrong). As far as I know, SciPy 0.6 should still work, but > it will just complain a bit. :) I'm not sure what the timeline is for > releasing the next SciPy. > > In the meantime, if you just want to patch up your local install, you > should be able to change code that looks like this in > scipy/misc/__init__.py (and probably several other places): > > from numpy.testing import NumpyTest > test = NumpyTest().test > > to this: > > from numpy.testing import Tester > test = Tester().test > > I suppose you could switch to NumPy 1.1 or try using the SciPy from > svn if you'd like to avoid making local tweaks. > > Hope this helps, > Alan

OK ... now I see that numpy is new, and scipy hasn't caught up. I know this happens from time to time. I haven't been paying attention.

Thanks.

> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From didier.rano at gmail.com Sun Sep 28 18:23:49 2008
From: didier.rano at gmail.com (didier rano)
Date: Sun, 28 Sep 2008 18:23:49 -0400
Subject: [SciPy-user] Import problem numpy.ma
Message-ID:

Hi all,

I am using numpy from subversion trunk (revision 5872). I have an import problem with "import numpy.ma" => "ImportError: No module named ma". I am using Mac OS X 10.5. I don't know this module's dependencies.

Thanks for your help

Didier Rano

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From joao.q.fonseca at gmail.com Mon Sep 29 11:31:46 2008
From: joao.q.fonseca at gmail.com (João Quinta da Fonseca)
Date: Mon, 29 Sep 2008 16:31:46 +0100
Subject: [SciPy-user] scipy workflow
Message-ID: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>

I have given up Matlab and it feels great to use scipy for my scientific computing needs. However, I find that I use scipy just like I used matlab, with ipython as my terminal. Although this works OK, some things are a little frustrating: updated modules need to be reimported after modification, running scripts requires the run command etc., but this is to be expected because scipy is not matlab. Now I don't want the matlab way necessarily and I am happy to learn a new way to do things. I guess what I would like to do is do things the python way, which I think means writing my code as classes with the __main__ bit at the end etc., but I am not sure whether that is the best way and if so, what is the best way to learn it. Does anyone feel like sharing what their workflow with scipy is? Any tips for me?

Thanks,

Joao

From pav at iki.fi Mon Sep 29 11:42:11 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 29 Sep 2008 15:42:11 +0000 (UTC)
Subject: [SciPy-user] scipy workflow
References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>
Message-ID:

Mon, 29 Sep 2008 16:31:46 +0100, João Quinta da Fonseca wrote: > I have given up Matlab and it feels great to use scipy for my > scientific computing needs. However, I find that I use scipy just like I > used matlab, with ipython as my terminal.
Although this works OK, some > things are a little frustrating: updated modules need to be reimported > after modification, running scripts requires the run command etc., but > this is to be expected because scipy is not matlab. Now I don't want > the matlab way necessarily and I am happy to learn a new way to do > things. I guess what I would like to do is do things the python way, > which I think means writing my code as classes with the __main__ bit at > the end etc., but I am not sure whether that is the best way and if so, > what is the best way to learn it. Does anyone feel like sharing what > their workflow with scipy is? Any tips for me?

My workflow consists of two parts:

1) Long-running scripts
2) Interactive use

For 1), I typically write programs that have main() and which take parameters on the command line. No code is at the module level; everything is in functions or classes.

For 2), I use IPython with ipy_autoreload and matplotlib, which gets quite close to Matlab-like usage. I can easily import routines from the script files when needed, and the autoreload takes care of reloading them for me. The usual caveats for reloading modules apply, but most of the time they are not a problem.

-- Pauli Virtanen
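[Editor's note: a minimal sketch of the script layout Pauli describes -- all logic in functions so the module can also be imported from an IPython session, plus a main() that takes its parameters from the command line. The file name and function names are illustrative, not from the thread.]

# analysis.py
import sys
import numpy as np

def analyze(n):
    # A routine that can also be imported interactively, e.g.
    # "from analysis import analyze" from within IPython.
    x = np.random.normal(size=n)
    return x.mean(), x.std(ddof=1)

def main(argv):
    # Read the sample size from the command line, with a default.
    n = int(argv[1]) if len(argv) > 1 else 1000
    mean, std = analyze(n)
    print "mean=%g std=%g" % (mean, std)

if __name__ == "__main__":
    main(sys.argv)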
From massimo.sandal at unibo.it Mon Sep 29 12:07:12 2008
From: massimo.sandal at unibo.it (massimo sandal)
Date: Mon, 29 Sep 2008 18:07:12 +0200
Subject: [SciPy-user] scipy workflow
In-Reply-To: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>
References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>
Message-ID: <48E0FD30.1020505@unibo.it>

João Quinta da Fonseca wrote: > I have given up Matlab and it feels great to use scipy for my > scientific computing needs. However, I find that I use scipy just like > I used matlab, with ipython as my terminal. Although this works OK, > some things are a little frustrating: updated modules need to be > reimported after modification, running scripts requires the run > command etc., but this is to be expected because scipy is not matlab. > Now I don't want the matlab way necessarily and I am happy to learn a > new way to do things. I guess what I would like to do is do things the > python way, which I think means writing my code as classes with the > __main__ bit at the end etc., but I am not sure whether that is the best > way and if so, what is the best way to learn it. > Does anyone feel like sharing what their workflow with scipy is? Any > tips for me?

Apart from the data analysis software where I use scipy as a library, I practically only write scripts, without interactive use.

I feel there is more control this way. I don't see much purpose in interactive use.

However, you can also write procedural scripts (without classes etc.) in Python... no need to delve into OO programming if you don't need to (although I like OO personally).

m.

-- Massimo Sandal, Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387

-------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL:

From mhearne at usgs.gov Mon Sep 29 12:14:01 2008
From: mhearne at usgs.gov (Michael Hearne)
Date: Mon, 29 Sep 2008 10:14:01 -0600
Subject: [SciPy-user] scipy workflow
In-Reply-To: <48E0FD30.1020505@unibo.it>
References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> <48E0FD30.1020505@unibo.it>
Message-ID: <48E0FEC9.8000209@usgs.gov>

My main two uses of ipython are for:

1) Testing code at the command line that I'm trying to write in a script. Regular expressions, for example, are not something I can code in my head.
2) Using the debugger (run -d script.py). This is a huge help for me. The debugger interaction is not as good as Matlab's - for example, if you want to set a breakpoint in a module contained in a different file than your "main", there's a lot of typing. It would eventually be nice to have a graphical debugger, where you could click on the line of code to set a breakpoint.

I use "reset" in ipython to re-import modules after modifying the code. The only annoying thing about that is getting prompted (yes, I really want to blow away all of my variables!)

massimo sandal wrote: > João Quinta da Fonseca wrote: >> I have given up Matlab and it feels great to use scipy for my >> scientific computing needs. However, I find that I use scipy just >> like I used matlab, with ipython as my terminal. Although this works >> OK, some things are a little frustrating: updated modules need to >> be reimported after modification, running scripts requires the run >> command etc., but this is to be expected because scipy is not matlab. >> Now I don't want the matlab way necessarily and I am happy to learn >> a new way to do things. I guess what I would like to do is do things >> the python way, which I think means writing my code as classes with >> the __main__ bit at the end etc., but I am not sure whether that is >> the best way and if so, what is the best way to learn it. >> Does anyone feel like sharing what their workflow with scipy is? >> Any tips for me? > > Apart from the data analysis software where I use scipy as a library, > I practically only write scripts, without interactive use. > > I feel there is more control this way. I don't see much purpose in > interactive use. > > However, you can also write procedural scripts (without classes etc.) > in Python... no need to delve into OO programming if you don't need to > (although I like OO personally). > > m. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

-- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------
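[Editor's note: a common workaround for the breakpoint complaint above is a hard-coded breakpoint -- dropping into the standard pdb debugger exactly where it is needed, even deep inside another module. A minimal sketch; the function and data are hypothetical.]

import pdb

def tricky_step(data):
    result = sum(data) / float(len(data))
    pdb.set_trace()   # execution stops here; inspect 'result', then 'c' to continue
    return result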
> 2) Using the debugger (run -d script.py). This is a huge help for me. > The debugger interaction is not as good as Matlab's - for example, if > you want to set a breakpoint in a module contained in a different file > than your "main", there's a lot of typing. It would eventually be nice > to have a graphical debugger, where you could click on the line of code > to set a breakpoint. If you are an emacs user, I urge you to check out the IPython (minor?) mode, ipython.el (in the ipython distribution) . It allows much tighter integration of editor and IPython session, more like Matlab. If you're a Mac user, I'd also urge you to check out TextMate. You can achieve similar editor/command-line integration in TextMate using AppleScript (email me offline if you're interested...I haven't gotten all of the relevant code posted anywhere yet). -Barry > > I use "reset" in ipython to re-import modules after modifying the code. > The only annoying thing about that is getting prompted (yes, I really > want to blow away all of my variables!) > > > > massimo sandal wrote: >> Jo?o Quinta da Fonseca wrote: >>> I have given up Matlab and I it feels great to use scipy for my >>> scientific computing needs. However, I find that I use scipy just >>> like I used matlab, with ipython as my terminal. Although this works >>> OK, some things are a little frustrating: updated modules need to >>> be reimported after modification, running scripts requires the run >>> command etc., but this is to be expected because scipy is not matlbab. >>> Now I don't want the matlab way necessarily and I am happy to learn >>> a new way to do things. I guess what I would like to do is do things >>> the python way, which I think means writing my code as classes with >>> the _main_ bit at the end etc., but I am not sure whether that is >>> the best way and if so, what is the best way to learn it. >>> Does any one feel like sharing what their workflow with scipy is? >>> Any tips for me? >> >> Apart from the data analysis software where I use scipy as a library, >> I practically only write scripts, without interactive use. >> >> I feel there is more control this way. I don't see much purpose in >> interactive use. >> >> However, you can also write procedural scripts (without classes etc.) >> in Python... no need to delve into OO programming if you don't need to >> (although I like OO personally). >> >> m. >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > -- > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. > ------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From s.mientki at ru.nl Mon Sep 29 17:09:21 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 29 Sep 2008 23:09:21 +0200 Subject: [SciPy-user] scipy workflow In-Reply-To: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> Message-ID: <48E14401.7030600@ru.nl> Jo?o Quinta da Fonseca wrote: > I have given up Matlab and I it feels great to use scipy for my > scientific computing needs. 
From s.mientki at ru.nl  Mon Sep 29 17:09:21 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Mon, 29 Sep 2008 23:09:21 +0200
Subject: [SciPy-user] scipy workflow
In-Reply-To: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>
References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com>
Message-ID: <48E14401.7030600@ru.nl>

João Quinta da Fonseca wrote:
> I have given up Matlab and it feels great to use scipy for my
> scientific computing needs. However, I find that I use scipy just like
> I used Matlab, with ipython as my terminal. Although this works OK,
> some things are a little frustrating: updated modules need to be
> reimported after modification, running scripts requires the run
> command etc., but this is to be expected because scipy is not Matlab.
> Now I don't want the Matlab way necessarily and I am happy to learn a
> new way to do things. I guess what I would like to do is do things the
> Python way, which I think means writing my code as classes with the
> __main__ bit at the end etc., but I am not sure whether that is the
> best way and if so, what is the best way to learn it.
> Does anyone feel like sharing what their workflow with scipy is? Any
> tips for me?
>
For now I use Signal WorkBench as a MatLab replacement; it still lacks
MatLab's interactive single-line statements and workspace. Changed
modules are always automatically reloaded.
http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_signal_workbench.html

And here is a concrete example of filter design, compared to how we did
it in MatLab:
http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_filter_design.html

As part of the project I'm now improving the editor / debugger, which
will solve both of the above-mentioned omissions. As soon as this works
well, it will also be integrated into Signal WorkBench.
http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_debug.html

cheers,
Stef

> Thanks,
>
> Joao
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From lo.maximo73 at gmail.com  Tue Sep 30 00:43:10 2008
From: lo.maximo73 at gmail.com (luis cota)
Date: Mon, 29 Sep 2008 23:43:10 -0500
Subject: [SciPy-user] Problems Building / Installing SciPy on OSX Leopard
Message-ID: <2e598d7f0809292143x508cec5dq806bd3136dfd8987@mail.gmail.com>

I first tried installing using the OSX SuperPack. The install seemed to
work, though when importing scipy.integrate, Python crashed because of
an error with the referenced modules. This probably has something to do
with my OSX setup, though I am not sure where to look. I have installed
the latest gFortran and the latest FFTW-3 library as well. I have
appended my build log at the end of this email.

Any help greatly appreciated ... thanks!

When compiling the SVN trunk version I get the following output:

python setup.py build
Warning: No configuration returned, assuming unavailable.
mkl_info: libraries mkl,vml,guide not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib libraries mkl,vml,guide not found in /opt/local/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib FOUND: libraries = ['fftw3'] library_dirs = ['/opt/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/opt/local/include'] djbfft_info: NOT AVAILABLE blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3'] umfpack_info: libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib libraries umfpack not found in /opt/local/lib /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.3.0.dev5861-py2.5-macosx-10.3-i386.egg/numpy/distutils/system_info.py:414: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._distance_wrap" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. 
building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.3-i386-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. 
building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources building extension "scipy.sparse.linalg.dsolve._csuperlu" sources building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.3-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-i386-2.5' to include_dirs. adding 'build/src.macosx-10.3-i386-2.5/scipy/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building extension "scipy.stsci.convolve._correlate" sources building extension "scipy.stsci.convolve._lineshape" sources building extension "scipy.stsci.image._combine" sources building data_files sources running build_py copying build/src.macosx-10.3-i386-2.5/scipy/__config__.py -> build/lib.macosx-10.3-i386-2.5/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.3-i386-2.5 creating build/temp.macosx-10.3-i386-2.5/scipy creating build/temp.macosx-10.3-i386-2.5/scipy/fftpack creating build/temp.macosx-10.3-i386-2.5/scipy/fftpack/dfftpack compile options: '-c' gfortran:f77: scipy/fftpack/dfftpack/dcosqb.f gfortran:f77: scipy/fftpack/dfftpack/dcosqf.f gfortran:f77: scipy/fftpack/dfftpack/dcosqi.f gfortran:f77: scipy/fftpack/dfftpack/dcost.f gfortran:f77: scipy/fftpack/dfftpack/dcosti.f gfortran:f77: scipy/fftpack/dfftpack/dfftb.f gfortran:f77: scipy/fftpack/dfftpack/dfftb1.f gfortran:f77: scipy/fftpack/dfftpack/dfftf.f gfortran:f77: scipy/fftpack/dfftpack/dfftf1.f gfortran:f77: scipy/fftpack/dfftpack/dffti.f gfortran:f77: scipy/fftpack/dfftpack/dffti1.f scipy/fftpack/dfftpack/dffti1.f: In function 'dffti1': scipy/fftpack/dfftpack/dffti1.f:11: warning: 'ntry' may be used uninitialized in this function gfortran:f77: scipy/fftpack/dfftpack/dsinqb.f gfortran:f77: scipy/fftpack/dfftpack/dsinqf.f gfortran:f77: scipy/fftpack/dfftpack/dsinqi.f gfortran:f77: scipy/fftpack/dfftpack/dsint.f gfortran:f77: scipy/fftpack/dfftpack/dsint1.f gfortran:f77: scipy/fftpack/dfftpack/dsinti.f gfortran:f77: scipy/fftpack/dfftpack/zfftb.f gfortran:f77: scipy/fftpack/dfftpack/zfftb1.f gfortran:f77: scipy/fftpack/dfftpack/zfftf.f gfortran:f77: scipy/fftpack/dfftpack/zfftf1.f gfortran:f77: scipy/fftpack/dfftpack/zffti.f gfortran:f77: scipy/fftpack/dfftpack/zffti1.f scipy/fftpack/dfftpack/zffti1.f: In function 'zffti1': scipy/fftpack/dfftpack/zffti1.f:11: warning: 'ntry' may be used uninitialized in this function ar: adding 23 object files to build/temp.macosx-10.3-i386-2.5/libdfftpack.a ranlib:@ build/temp.macosx-10.3-i386-2.5/libdfftpack.a building 'linpack_lite' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating 
build/temp.macosx-10.3-i386-2.5/scipy/integrate creating build/temp.macosx-10.3-i386-2.5/scipy/integrate/linpack_lite compile options: '-c' gfortran:f77: scipy/integrate/linpack_lite/dgbfa.f gfortran:f77: scipy/integrate/linpack_lite/dgbsl.f gfortran:f77: scipy/integrate/linpack_lite/dgefa.f gfortran:f77: scipy/integrate/linpack_lite/dgesl.f gfortran:f77: scipy/integrate/linpack_lite/dgtsl.f gfortran:f77: scipy/integrate/linpack_lite/zgbfa.f gfortran:f77: scipy/integrate/linpack_lite/zgbsl.f gfortran:f77: scipy/integrate/linpack_lite/zgefa.f gfortran:f77: scipy/integrate/linpack_lite/zgesl.f ar: adding 9 object files to build/temp.macosx-10.3-i386-2.5/liblinpack_lite.a ranlib:@ build/temp.macosx-10.3-i386-2.5/liblinpack_lite.a building 'mach' library using additional config_fc from setup script for fortran compiler: {'noopt': ('scipy/integrate/setup.pyc', 1)} customize Gnu95FCompiler compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC creating build/temp.macosx-10.3-i386-2.5/scipy/integrate/mach compile options: '-c' gfortran:f77: scipy/integrate/mach/d1mach.f gfortran:f77: scipy/integrate/mach/i1mach.f gfortran:f77: scipy/integrate/mach/r1mach.f gfortran:f77: scipy/integrate/mach/xerror.f scipy/integrate/mach/xerror.f:1.40: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l2' at (1) scipy/integrate/mach/xerror.f:1.37: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l1' at (1) ar: adding 4 object files to build/temp.macosx-10.3-i386-2.5/libmach.a ranlib:@ build/temp.macosx-10.3-i386-2.5/libmach.a building 'quadpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.3-i386-2.5/scipy/integrate/quadpack compile options: '-c' gfortran:f77: scipy/integrate/quadpack/dqag.f gfortran:f77: scipy/integrate/quadpack/dqage.f gfortran:f77: scipy/integrate/quadpack/dqagi.f gfortran:f77: scipy/integrate/quadpack/dqagie.f scipy/integrate/quadpack/dqagie.f: In function 'dqagie': scipy/integrate/quadpack/dqagie.f:154: warning: 'small' may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:153: warning: 'ertest' may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:152: warning: 'erlarg' may be used uninitialized in this function scipy/integrate/quadpack/dqagie.f:151: warning: 'correc' may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqagp.f gfortran:f77: scipy/integrate/quadpack/dqagpe.f scipy/integrate/quadpack/dqagpe.f: In function 'dqagpe': scipy/integrate/quadpack/dqagpe.f:196: warning: 'k' may be used uninitialized in this function scipy/integrate/quadpack/dqagpe.f:191: warning: 'correc' may be used uninitialized in this function gfortran:f77: scipy/integrate/quadpack/dqags.f gfortran:f77: scipy/integrate/quadpack/dqagse.f scipy/integrate/quadpack/dqagse.f: In function 'dqagse': scipy/integrate/quadpack/dqagse.f:153: warning: 'small' may be used 
uninitialized in this function
scipy/integrate/quadpack/dqagse.f:152: warning: 'ertest' may be used
uninitialized in this function
scipy/integrate/quadpack/dqagse.f:151: warning: 'erlarg' may be used
uninitialized in this function
scipy/integrate/quadpack/dqagse.f:150: warning: 'correc' may be used
uninitialized in this function
gfortran:f77: scipy/integrate/quadpack/dqawc.f
gfortran:f77: scipy/integrate/quadpack/dqawce.f
gfortran:f77: scipy/integrate/quadpack/dqawf.f
gfortran:f77: scipy/integrate/quadpack/dqawfe.f
scipy/integrate/quadpack/dqawfe.f: In function 'dqawfe':
scipy/integrate/quadpack/dqawfe.f:203: warning: 'll' may be used
uninitialized in this function
scipy/integrate/quadpack/dqawfe.f:200: warning: 'drl' may be used
uninitialized in this function
gfortran:f77: scipy/integrate/quadpack/dqawo.f
gfortran:f77: scipy/integrate/quadpack/dqawoe.f
scipy/integrate/quadpack/dqawoe.f: In function 'dqawoe':
scipy/integrate/quadpack/dqawoe.f:208: warning: 'ertest' may be used
uninitialized in this function
scipy/integrate/quadpack/dqawoe.f:207: warning: 'erlarg' may be used
uninitialized in this function
scipy/integrate/quadpack/dqawoe.f:206: warning: 'correc' may be used
uninitialized in this function
gfortran:f77: scipy/integrate/quadpack/dqaws.f
gfortran:f77: scipy/integrate/quadpack/dqawse.f
gfortran:f77: scipy/integrate/quadpack/dqc25c.f
gfortran:f77: scipy/integrate/quadpack/dqc25f.f
scipy/integrate/quadpack/dqc25f.f: In function 'dqc25f':
scipy/integrate/quadpack/dqc25f.f:103: warning: 'm' may be used
uninitialized in this function
gfortran:f77: scipy/integrate/quadpack/dqc25s.f
gfortran:f77: scipy/integrate/quadpack/dqcheb.f
gfortran:f77: scipy/integrate/quadpack/dqelg.f
scipy/integrate/quadpack/dqelg.f: In function 'dqelg':
scipy/integrate/quadpack/dqelg.f:1: internal compiler error: vector
VEC(tree,base) index domain error, in build_classic_dist_vector_1 at
tree-data-ref.c:2725
Please submit a full bug report, with preprocessed source if appropriate.
See for instructions.
scipy/integrate/quadpack/dqelg.f: In function 'dqelg':
scipy/integrate/quadpack/dqelg.f:1: internal compiler error: vector
VEC(tree,base) index domain error, in build_classic_dist_vector_1 at
tree-data-ref.c:2725
Please submit a full bug report, with preprocessed source if appropriate.
See for instructions.
error: Command "/usr/local/bin/gfortran -Wall -ffixed-form
-fno-second-underscore -fPIC -O3 -funroll-loops -c -c
scipy/integrate/quadpack/dqelg.f -o
build/temp.macosx-10.3-i386-2.5/scipy/integrate/quadpack/dqelg.o" failed
with exit status 1

From david at ar.media.kyoto-u.ac.jp  Tue Sep 30 00:35:56 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 30 Sep 2008 13:35:56 +0900
Subject: [SciPy-user] Problems Building / Installing SciPy on OSX Leopard
In-Reply-To: <2e598d7f0809292143x508cec5dq806bd3136dfd8987@mail.gmail.com>
References: <2e598d7f0809292143x508cec5dq806bd3136dfd8987@mail.gmail.com>
Message-ID: <48E1ACAC.9070907@ar.media.kyoto-u.ac.jp>

luis cota wrote:
> I first tried installing using the OSX SuperPack. The install seemed
> to work, though when importing scipy.integrate, Python crashed because
> of an error with the referenced modules. This probably has something
> to do with my OSX setup, though I am not sure where to look. I have
> installed the latest gFortran and the latest FFTW-3 library as well.
> I have appended my build log at the end of this email.
>
> Any help greatly appreciated ... thanks!
Hi Luis,

I am sorry for the trouble. For the building part, which gfortran are
you using? I suspect you are using a very recent gfortran, which is not
stable (you see a compiler crash, which is not a good sign :)).

Please use this one:

http://r.research.att.com/tools/

In particular, do not use this one for the time being:

http://hpc.sourceforge.net/

cheers,

David

From lo.maximo73 at gmail.com  Tue Sep 30 01:50:02 2008
From: lo.maximo73 at gmail.com (luis cota)
Date: Tue, 30 Sep 2008 00:50:02 -0500
Subject: [SciPy-user] Problems Building / Installing SciPy on OSX Leopard
In-Reply-To: <48E1ACAC.9070907@ar.media.kyoto-u.ac.jp>
References: <2e598d7f0809292143x508cec5dq806bd3136dfd8987@mail.gmail.com>
	<48E1ACAC.9070907@ar.media.kyoto-u.ac.jp>
Message-ID: <2e598d7f0809292250s5ad33320o9d0a3ddd65e2d1a@mail.gmail.com>

Thanks! That was the problem - after installing the ATT version of
gFortran the build worked fine.

- Luis

On Mon, Sep 29, 2008 at 11:35 PM, David Cournapeau
<david at ar.media.kyoto-u.ac.jp> wrote:

> luis cota wrote:
> > I first tried installing using the OSX SuperPack. The install seemed
> > to work, though when importing scipy.integrate, Python crashed because
> > of an error with the referenced modules. This probably has something
> > to do with my OSX setup, though I am not sure where to look. I have
> > installed the latest gFortran and the latest FFTW-3 library as well.
> > I have appended my build log at the end of this email.
> >
> > Any help greatly appreciated ... thanks!
>
> Hi Luis,
>
> I am sorry for the trouble. For the building part, which gfortran
> are you using? I suspect you are using a very recent gfortran, which is
> not stable (you see a compiler crash, which is not a good sign :)).
>
> Please use this one:
>
> http://r.research.att.com/tools/
>
> In particular, do not use this one for the time being:
>
> http://hpc.sourceforge.net/
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From cmac at mit.edu  Tue Sep 30 10:09:52 2008
From: cmac at mit.edu (Christopher MacMinn)
Date: Tue, 30 Sep 2008 10:09:52 -0400
Subject: [SciPy-user] sparse matrices -- slicing and fancy indexing for assignment
Message-ID: <4FBBE3B4-041E-4593-8ADA-1318BF43A853@mit.edu>

Hey folks -

I installed SciPy 0.7.0.dev4753 from svn yesterday to get access to some
of the upcoming improvements to scipy.sparse. In particular, I was
excited about the prospect of slicing and fancy indexing for CSC and CSR
matrices. Unfortunately, I find now that slicing and fancy indexing are
only supported for viewing and copying, not for assignment. E.g.:

# Create a sparse CSC matrix A
>>> from scipy import sparse
>>> A = sparse.eye(10,10,format="csc")

# Have a look at the first row of A
>>> print A[0,:].todense()
[[ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]

# Now try to set the first row equal to 5
>>> A[0,:] = 5.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.5/site-packages/scipy/sparse/compressed.py",
line 601, in __setitem__
    raise NotImplementedError("Fancy indexing in assignment not "
NotImplementedError: Fancy indexing in assignment not supported for csr
matrices.
# Fail.

Is slicing / fancy indexing for assignment planned for the final 0.7.0
release, or is this (above) the best we're going to get?

Best, Chris MacMinn

From wnbell at gmail.com  Tue Sep 30 12:01:06 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 30 Sep 2008 12:01:06 -0400
Subject: [SciPy-user] sparse matrices -- slicing and fancy indexing for assignment
In-Reply-To: <4FBBE3B4-041E-4593-8ADA-1318BF43A853@mit.edu>
References: <4FBBE3B4-041E-4593-8ADA-1318BF43A853@mit.edu>
Message-ID:

On Tue, Sep 30, 2008 at 10:09 AM, Christopher MacMinn wrote:
>
> Is slicing / fancy indexing for assignment planned for the final 0.7.0
> release, or is this (above) the best we're going to get?
>

This is probably all you'll see in 0.7.0. Note that assigning to a
CSR/CSC matrix is in general a bad idea. Any change to the sparsity
structure of these formats requires O(nnz) operations, which basically
means reconstructing the matrix from scratch.

OTOH MATLAB lets you do it, so we will probably support it someday too.
If you submit a patch soon we might be able to integrate it by 0.7.0.
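In the meantime, the usual workaround is to do the assignments in LIL
format, which is meant for incremental construction, and convert back
for arithmetic. A rough, untested sketch against the example above:

>>> from scipy import sparse
>>> A = sparse.eye(10, 10, format="csc")
>>> L = A.tolil()   # LIL supports slice and fancy assignment
>>> L[0,:] = 5.     # the assignment that raises NotImplementedError in CSC/CSR
>>> A = L.tocsc()   # convert back once, for fast arithmetic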
--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From flyingdeckchair at googlemail.com  Tue Sep 30 12:36:00 2008
From: flyingdeckchair at googlemail.com (peter websdell)
Date: Tue, 30 Sep 2008 17:36:00 +0100
Subject: [SciPy-user] unusual fitting problem
Message-ID:

Howdy gang,

I have an unusual fitting problem that has me totally stumped. I need to
perform a basic interpolation between a few data points, but I need the
result to look linear on a loglog scale.

Making sense? OK, I'll try again. Imagine you plotted the data points on
a loglog scale, then printed it out and joined the dots with a ruler and
pencil. That is what I need to achieve.

I'm trying to automate an old assessment method at work which requires
manually drawing a chart to extrapolate new values between the known
data points. It's tedious and there are literally hundreds that need
doing. :-o

Any help will be graciously received. Please let me know if my
explanation is nonsense.

Cheers,
Pete.

From pgmdevlist at gmail.com  Tue Sep 30 12:40:54 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 30 Sep 2008 12:40:54 -0400
Subject: [SciPy-user] unusual fitting problem
In-Reply-To:
References:
Message-ID: <200809301240.54583.pgmdevlist@gmail.com>

On Tuesday 30 September 2008 12:36:00 peter websdell wrote:
> Howdy gang,
> I have an unusual fitting problem that has me totally stumped.

Have you thought about transforming your data? You want
log(y) = a + b*log(x)? Use Y = log(y) and X = log(x), fit a straight
line the standard way, and you should get your parameters a & b. That
should be easier than trying to fit y = exp(a + b*log(x)) = exp(a) * x**b.
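With numpy this is only a couple of lines. A rough sketch (the data
points here are made up; x and y would be your known points):

>>> import numpy as np
>>> x = np.array([1., 10., 100.])
>>> y = np.array([5., 2., 0.8])
>>> b, a = np.polyfit(np.log(x), np.log(y), 1)  # slope b, intercept a
>>> ynew = np.exp(a) * 30.0**b   # evaluate the fitted power law at x = 30

And if you literally want to join the dots (piecewise linear on the
loglog scale, as you described), interpolate in log space instead:

>>> ynew = np.exp(np.interp(np.log(30.0), np.log(x), np.log(y)))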
From aisaac at american.edu  Tue Sep 30 13:09:32 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 30 Sep 2008 13:09:32 -0400
Subject: [SciPy-user] NumpyTest warning after upgrade
In-Reply-To: <1d36917a0809281236r5042b66fxddee186e7ec405d2@mail.gmail.com>
References: <88fe22a0809281205p11155ff8l2204ffa5d63011ff@mail.gmail.com>
	<1d36917a0809281236r5042b66fxddee186e7ec405d2@mail.gmail.com>
Message-ID: <48E25D4C.9000000@american.edu>

On 9/28/2008 3:36 PM Alan McIntyre apparently wrote:
> if you just want to patch up your local install, you
> should be able to change code that looks like this in
> scipy/misc/__init__.py (and probably several other
> places):
> from numpy.testing import NumpyTest
> test = NumpyTest().test
> to this:
> from numpy.testing import Tester
> test = Tester().test

About a dozen other places. But this has been working for me.

Cheers,
Alan Isaac
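P.S. To find the remaining places to patch, a throwaway sketch (assuming
it is run from the directory that contains the installed scipy package):

import os

for root, dirs, files in os.walk('scipy'):
    for fname in files:
        if fname.endswith('.py'):
            path = os.path.join(root, fname)
            if 'NumpyTest' in open(path).read():
                print path  # another file needing the same two-line change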