From nico.schloemer at gmail.com Fri Oct 1 06:06:30 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Fri, 1 Oct 2010 12:06:30 +0200 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: > Use the 'callback' argument. That works alright I guess. What I do right now is creating a *global array that's filled up as the callback function is called, after which I go ahead and plot it. Using a global variable here seems somewhat ugly to me -- might there be a more elegant solution at all? Cheers, Nico On Tue, Sep 28, 2010 at 5:19 PM, Pauli Virtanen wrote: > Tue, 28 Sep 2010 15:34:21 +0200, Nico Schl?mer wrote: > >> Hi all, >> >> I was just running some test problems with scipy.sparse.linalg.cg and >> thing seem work work out fine. >> To compare preconditioners, I'd like to get more detailed information >> about the convergence history, but I haven't been able to figure out how >> to, e.g., get the number of performed iterations or the (relative) >> residual for each of those steps. Compare this with >> http://www.mathworks.com/help/techdoc/ref/pcg.html. >> >> Any hints? > > Use the 'callback' argument. > >>>> help(scipy.sparse.linalg.cg) > ... > callback : function > ? ?User-supplied function to call after each iteration. ?It is called > ? ?as callback(xk), where xk is the current solution vector. > ... > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Fri Oct 1 06:44:07 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Fri, 1 Oct 2010 12:44:07 +0200 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: I don't know if that helps you, but probably you can provide any callable object. I.e., you can make a class like In [1]: class Foo: ...: def __call__(self,args): ...: print 'Bar' ...: ...: In [2]: f = Foo() In [4]: f(1) Bar regards, Sebastian On Fri, Oct 1, 2010 at 12:06 PM, Nico Schl?mer wrote: >> Use the 'callback' argument. > > That works alright I guess. > What I do right now is creating a *global array that's filled up as > the callback function is called, after which I go ahead and plot it. > Using a global variable here seems somewhat ugly to me -- might there > be a more elegant solution at all? > > Cheers, > Nico > > > On Tue, Sep 28, 2010 at 5:19 PM, Pauli Virtanen wrote: >> Tue, 28 Sep 2010 15:34:21 +0200, Nico Schl?mer wrote: >> >>> Hi all, >>> >>> I was just running some test problems with scipy.sparse.linalg.cg and >>> thing seem work work out fine. >>> To compare preconditioners, I'd like to get more detailed information >>> about the convergence history, but I haven't been able to figure out how >>> to, e.g., get the number of performed iterations or the (relative) >>> residual for each of those steps. Compare this with >>> http://www.mathworks.com/help/techdoc/ref/pcg.html. >>> >>> Any hints? >> >> Use the 'callback' argument. >> >>>>> help(scipy.sparse.linalg.cg) >> ... >> callback : function >> ? ?User-supplied function to call after each iteration. ?It is called >> ? ?as callback(xk), where xk is the current solution vector. >> ... 
>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Fri Oct 1 08:56:31 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 1 Oct 2010 12:56:31 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? References: Message-ID: Fri, 01 Oct 2010 12:06:30 +0200, Nico Schl?mer wrote: >> Use the 'callback' argument. > > That works alright I guess. > What I do right now is creating a *global array that's filled up as the > callback function is called, after which I go ahead and plot it. Using a > global variable here seems somewhat ugly to me -- might there be a more > elegant solution at all? Use variables from the outer scope: def doit(M, b): residuals = [] def callback(xk): residuals.append(M*xk - b) sol, info = scipy.sparse.linalg.cg(M, b, callback=callback) return residuals http://docs.python.org/tutorial/classes.html#python-scopes-and-namespaces http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/ From josef.pktd at gmail.com Fri Oct 1 09:13:22 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 1 Oct 2010 09:13:22 -0400 Subject: [SciPy-User] [SciPy-user] Using Sphinx with numpy/scipy template In-Reply-To: <29837454.post@talk.nabble.com> References: <29837454.post@talk.nabble.com> Message-ID: On Wed, Sep 29, 2010 at 7:20 AM, bfrenay wrote: > > Hello, > > I just migrated all my code documentation to the numpy/scipy format. > However, I have some problems with sphinx. ?I have (i) installed sphinx and > numpydoc using easyinstall, (ii) inserted the following line in conf.py > > extensions= ['sphinx.ext.autodoc', 'sphinx.ext.autosummary', 'numpydoc'] > > and (iii) created a simple, dummy index.rst file reproduced here bellow > > .. project documentation master file > > .. automodule:: project.datasets > ? ?:members: > > However, I only get errors (reproduced after the message). ?Does anyone have > an idea about what is going on, please ? > > Thank you, > > Beno?t > > $ make html > sphinx-build -b html -d _build/doctrees ? . _build/html > Running Sphinx v1.0.4 > > Exception occurred: > ?File > "/usr/local/lib/python2.6/dist-packages/numpydoc-0.3.1-py2.6.egg/numpydoc/numpydoc.py", > line 136, in get_directive > ? ?raise RuntimeError("No directive named '%s' found" % name) > RuntimeError: No directive named 'function' found > The full traceback has been saved in /tmp/sphinx-err-pCQ8Df.log, if you want > to report the issue to the developers. > Please also report this if it was a user error, so that a better error > message can be provided next time. > Either send bugs to the mailing list at > , > or report them in the tracker at > . Thanks! > make: *** [html] Erreur 1 > -- > View this message in context: http://old.nabble.com/Using-Sphinx-with-numpy-scipy-template-tp29837454p29837454.html > Sent from the Scipy-User mailing list archive at Nabble.com. 
copied form an earlier thread, It should still work with an older sphinx --------- On Mon, Jul 26, 2010 at 1:56 PM, Gael Varoquaux wrote: > On Mon, Jul 26, 2010 at 01:52:06PM -0700, Matthew Brett wrote: >> http://old.nabble.com/numpydoc-broken-by-latest-sphinx-td28896476.html >> http://projects.scipy.org/numpy/ticket/1489 ---- fix was applied in numpy, see ticket; maybe also in numpydoc Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri Oct 1 09:17:20 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 1 Oct 2010 09:17:20 -0400 Subject: [SciPy-User] [SciPy-user] Using Sphinx with numpy/scipy template In-Reply-To: References: <29837454.post@talk.nabble.com> Message-ID: On Fri, Oct 1, 2010 at 9:13 AM, wrote: > On Wed, Sep 29, 2010 at 7:20 AM, bfrenay wrote: >> >> Hello, >> >> I just migrated all my code documentation to the numpy/scipy format. >> However, I have some problems with sphinx. ?I have (i) installed sphinx and >> numpydoc using easyinstall, (ii) inserted the following line in conf.py >> >> extensions= ['sphinx.ext.autodoc', 'sphinx.ext.autosummary', 'numpydoc'] >> >> and (iii) created a simple, dummy index.rst file reproduced here bellow >> >> .. project documentation master file >> >> .. automodule:: project.datasets >> ? ?:members: >> >> However, I only get errors (reproduced after the message). ?Does anyone have >> an idea about what is going on, please ? >> >> Thank you, >> >> Beno?t >> >> $ make html >> sphinx-build -b html -d _build/doctrees ? . _build/html >> Running Sphinx v1.0.4 >> >> Exception occurred: >> ?File >> "/usr/local/lib/python2.6/dist-packages/numpydoc-0.3.1-py2.6.egg/numpydoc/numpydoc.py", >> line 136, in get_directive >> ? ?raise RuntimeError("No directive named '%s' found" % name) >> RuntimeError: No directive named 'function' found >> The full traceback has been saved in /tmp/sphinx-err-pCQ8Df.log, if you want >> to report the issue to the developers. >> Please also report this if it was a user error, so that a better error >> message can be provided next time. >> Either send bugs to the mailing list at >> , >> or report them in the tracker at >> . Thanks! >> make: *** [html] Erreur 1 >> -- >> View this message in context: http://old.nabble.com/Using-Sphinx-with-numpy-scipy-template-tp29837454p29837454.html >> Sent from the Scipy-User mailing list archive at Nabble.com. 
> > copied form an earlier thread, It should still work with an older sphinx > > --------- > On Mon, Jul 26, 2010 at 1:56 PM, Gael Varoquaux > wrote: >> On Mon, Jul 26, 2010 at 01:52:06PM -0700, Matthew Brett wrote: >>> http://old.nabble.com/numpydoc-broken-by-latest-sphinx-td28896476.html >>> http://projects.scipy.org/numpy/ticket/1489 > ---- > > fix was applied in numpy, see ticket; maybe also in numpydoc looks like numpydoc is here http://github.com/numpy/numpy/tree/master/doc/sphinxext Josef > > Josef >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From faltet at pytables.org Fri Oct 1 13:22:49 2010 From: faltet at pytables.org (Francesc Alted) Date: Fri, 1 Oct 2010 19:22:49 +0200 Subject: [SciPy-User] [ANN] python-blosc 1.0.1, a wrapper for the Blosc compression library Message-ID: <201010011922.49386.faltet@pytables.org> ==================================================== Announcing python-blosc 1.0.1 A Python wrapper for the Blosc compression library ==================================================== What is it? =========== Blosc (http://blosc.pytables.org) is a high performance compressor optimized for binary data. It has been designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. Blosc works well for compressing numerical arrays that contains data with relatively low entropy, like sparse data, time series, grids with regular-spaced values, etc. python-blosc is a Python package that wraps it. What is new? ============ Everything. This is the first public version of the Python wrapper for Blosc (1.1.1). It supports Python 2.6, 2.7 and 3.1. The API is very simple and it loosely follows that of the zlib module. There are two basic functions, `compress()` and `decompress()`, as well as two additional calls specific for compressing NumPy arrays, namely `pack_array()` and `unpack_array`. There are also utilities for changing dynamically the number of threads used or to release resources when you are not going to need blosc for a while. Basic Usage =========== >>> import numpy as np >>> a = np.linspace(0, 100, 1e7) >>> bytes_array = a.tostring() >>> import blosc >>> bpacked = blosc.compress(bytes_array, typesize=8) >>> bytes_array2 = blosc.decompress(bpacked) >>> print(bytes_array == bytes_array2) True More examples are available on python-blosc wiki page: http://github.com/FrancescAlted/python-blosc/wiki Documentation ============= Please refer to docstrings. Start by the main package: >>> import blosc >>> help(blosc) and ask for more docstrings in the referenced functions. Download sources ================ Go to: http://github.com/FrancescAlted/python-blosc and download the most recent release from here. Blosc is distributed using the MIT license, see LICENSES/BLOSC.txt for details. Mailing list ============ There is an official mailing list for Blosc at: blosc at googlegroups.com http://groups.google.es/group/blosc ---- **Enjoy data!** -- Francesc Alted From nico.schloemer at gmail.com Sat Oct 2 06:19:56 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Sat, 2 Oct 2010 12:19:56 +0200 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: Good idea, thanks for the hint! On Fri, Oct 1, 2010 at 2:56 PM, Pauli Virtanen wrote: > Fri, 01 Oct 2010 12:06:30 +0200, Nico Schl?mer wrote: >>> Use the 'callback' argument. 
>> >> That works alright I guess. >> What I do right now is creating a *global array that's filled up as the >> callback function is called, after which I go ahead and plot it. Using a >> global variable here seems somewhat ugly to me -- might there be a more >> elegant solution at all? > > Use variables from the outer scope: > > def doit(M, b): > ? ?residuals = [] > > ? ?def callback(xk): > ? ? ? ?residuals.append(M*xk - b) > > ? ?sol, info = scipy.sparse.linalg.cg(M, b, callback=callback) > ? ?return residuals > > > http://docs.python.org/tutorial/classes.html#python-scopes-and-namespaces > > http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gorkypl at gmail.com Sat Oct 2 09:39:12 2010 From: gorkypl at gmail.com (=?UTF-8?Q?Pawe=C5=82_Rumian?=) Date: Sat, 2 Oct 2010 15:39:12 +0200 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) Message-ID: hello, I'm using Python to do scientfic research - most of the time I'm analysing meteorogical data (mainly from gas chromatographs). I'm not an experienced programmer. I learned C and Pascal quite a long time ago, and then discovered Python, which quickly became my main tool to do day-to-day tasks, but I never went 'deeper'. I've also never touched object-oriented programming. Last year I've written a bunch of programs using scikits-timeseries - it's certainly one of the most important modules I use. Everything works flawlessly, but the demands are increasing and currently I have to add some kind of GUI and web interface to my tools. So I recently decided to rewrite my code using object-oriented approach - which means that I needed to learn OOP, which I have never tried before. After some reading I decided to extend TimeSeries class with a few methods that are specific to chomatographic data. The problem is that some of my methods work, but some not. I suspect that it is somehow connected with inheritance, but so far I cannot resolve it on my own... I define the class like this: class ChromaData(ts.TimeSeries): u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" def __init__(self,data,dates): super(ChromaData, self).__init__() and initialize the instance with something like: data_to_be_analyzed= ChromaData(retrieved_timeseries.data,retrieved_timeseries.dates) where retrived_timeseries is a TimeSeries instance, initialised using tsfromtxt. Then I define methods inside the ChromaData. Some of them (the simpler ones) work, but some don't. 
An example of such not working method is the one that I used to retrieve the lower envelope of data: def daily_bottom(self): u"""Returns bottom envelope of input data, optimised for data with daily frequency.""" daily_interpolated= ipl.interp_masked1d(self, kind='linear') minima_of_daily_interpolated= mvf.mov_min(daily_interpolated,span=8) minima_of_daily_minima= mvf.mov_min(minima_of_daily_interpolated,span=8) return minima_of_daily_minima what I get is: Traceback (most recent call last): File "process.py", line 118, in dbottom= test_data.daily_bottom() File "process.py", line 75, in daily_bottom daily_interpolated= ipl.interp_masked1d(self, kind='linear') File "/usr/lib/python2.6/site-packages/scikits/timeseries/lib/interpolate.py", line 125, in interp_masked1d marr = marray(marr, copy=True) File "/usr/lib/python2.6/site-packages/numpy/ma/core.py", line 5491, in array fill_value=fill_value, ndmin=ndmin, shrink=shrink) TypeError: __init__() got an unexpected keyword argument 'fill_value' Does it mean that I need to pass every possible argument of TimeSeries in __init__? Like def __init__(self,data,dates, fill_value=None, dtype=None,...........) ? Or am I doing something completely wrong? Like I've written - I'm completely new to object-oriented programming, so excuse me if the problem is really trivial... greetings, Pawe? From johnl at cs.wisc.edu Sat Oct 2 13:27:38 2010 From: johnl at cs.wisc.edu (J. David Lee) Date: Sat, 02 Oct 2010 12:27:38 -0500 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: References: Message-ID: <1286040458.15794.13.camel@johnl-laptop> Pawe?, I think the code is probably calling one of your class's functions with keywords that it doesn't expect. You can pass arbitrary arguments up to the superclass using *args and **kwargs. For example, with init: def __init__(self, data, dates, *args, **kwargs): super(ChromaData, self).__init__(*args, **kwargs) Now if a constructor to your object is passed an unknown keyword argument, it will get passed up to the superclass. The same for any other function you want to override. I think you should be careful not to change the signature of an overridden function. In this case I think you will have to add any additional arguments as keywords: def someFn(self, *args, myNewArg = None, **kwargs): super(ChromaData, self).someFn(*args, **kwargs) # Additional code here. I'm not sure there is another way to preserve the function's signature while adding arguments. I'd be interested to know of other ways. Now that I look at it, if you want your constructor to be callable by code that works with the superclass as well, you'd have to be sure not to change the constructor's signature as I did above by adding the two positional arguments. JD On Sat, 2010-10-02 at 15:39 +0200, Pawe? Rumian wrote: > hello, > > I'm using Python to do scientfic research - most of the time I'm > analysing meteorogical data (mainly from gas chromatographs). > I'm not an experienced programmer. I learned C and Pascal quite a long > time ago, and then discovered Python, which quickly became my main > tool to do day-to-day tasks, but I never went 'deeper'. I've also > never touched object-oriented programming. > > Last year I've written a bunch of programs using scikits-timeseries - > it's certainly one of the most important modules I use. Everything > works flawlessly, but the demands are increasing and currently I have > to add some kind of GUI and web interface to my tools. 
So I recently > decided to rewrite my code using object-oriented approach - which > means that I needed to learn OOP, which I have never tried before. > > After some reading I decided to extend TimeSeries class with a few > methods that are specific to chomatographic data. > The problem is that some of my methods work, but some not. I suspect > that it is somehow connected with inheritance, but so far I cannot > resolve it on my own... > > I define the class like this: > > class ChromaData(ts.TimeSeries): > u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" > > def __init__(self,data,dates): > super(ChromaData, self).__init__() > > and initialize the instance with something like: > > data_to_be_analyzed= > ChromaData(retrieved_timeseries.data,retrieved_timeseries.dates) > > where retrived_timeseries is a TimeSeries instance, initialised using tsfromtxt. > > Then I define methods inside the ChromaData. Some of them (the simpler > ones) work, but some don't. An example of such not working method is > the one that I used to retrieve the lower envelope of data: > > def daily_bottom(self): > u"""Returns bottom envelope of input data, optimised for data > with daily frequency.""" > > daily_interpolated= ipl.interp_masked1d(self, kind='linear') > minima_of_daily_interpolated= mvf.mov_min(daily_interpolated,span=8) > minima_of_daily_minima= mvf.mov_min(minima_of_daily_interpolated,span=8) > > return minima_of_daily_minima > > what I get is: > > Traceback (most recent call last): > File "process.py", line 118, in > dbottom= test_data.daily_bottom() > File "process.py", line 75, in daily_bottom > daily_interpolated= ipl.interp_masked1d(self, kind='linear') > File "/usr/lib/python2.6/site-packages/scikits/timeseries/lib/interpolate.py", > line 125, in interp_masked1d > marr = marray(marr, copy=True) > File "/usr/lib/python2.6/site-packages/numpy/ma/core.py", line 5491, in array > fill_value=fill_value, ndmin=ndmin, shrink=shrink) > TypeError: __init__() got an unexpected keyword argument 'fill_value' > > Does it mean that I need to pass every possible argument of TimeSeries > in __init__? > Like def __init__(self,data,dates, fill_value=None, dtype=None,...........) ? > Or am I doing something completely wrong? > > Like I've written - I'm completely new to object-oriented programming, > so excuse me if the problem is really trivial... > > greetings, > Pawe? > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From apalomba at austin.rr.com Sat Oct 2 17:34:58 2010 From: apalomba at austin.rr.com (Anthony Palomba) Date: Sat, 2 Oct 2010 16:34:58 -0500 Subject: [SciPy-User] matplotlib with UI Message-ID: I want to use matplotlib to visualiize data sets but I also want to create a UI that lets me manipulate properties of that data and see the result. Does matplotlib support creating UI interfaces or do I need to use some other package? Thanks, Anthony -------------- next part -------------- An HTML attachment was scrubbed... URL: From gorkypl at gmail.com Sat Oct 2 20:14:17 2010 From: gorkypl at gmail.com (=?UTF-8?Q?Pawe=C5=82_Rumian?=) Date: Sun, 3 Oct 2010 02:14:17 +0200 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: <1286040458.15794.13.camel@johnl-laptop> References: <1286040458.15794.13.camel@johnl-laptop> Message-ID: Thank you J. David - your mail was a significant help. 
I've read about *args and *kwargs, and they indeed seem to be important in such cases. However I still cannot solve the problem... For me it seems that the function interp_masked1d from scikits.timeseries.lib.interpolate won't work with anything else than TimeSeries instance - or is it still me doing something wrong, maybe with initializing the instance of ChromaData? I'm trying to do is as simple as it could be - I stripped the whole ChromaData Class to single __init__ method, like this: class ChromaData(ts.TimeSeries): u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" def __init__(self,data,dates,*args,**kwargs): super(ChromaData, self).__init__(*args,**kwargs) Then I create a TimeSeries instance, loading it from file (get_data_from_file is just a wrapper to ts.tsfromtxt), and a ChromaData instance: timeseries_instance= get_data_from_file('testdata.dat') chromadata_instance= ChromaData(timeseries_instance.data,timeseries_instance.dates) For certainity: print (timeseries_instance.__class__) print (chromadata_instance.__class__) And then I run ipl.interp_masked1d(timeseries_instance) (everything is OK here) ipl.interp_masked1d(chromadata_instance) and I get: Traceback (most recent call last): File "testcase.py", line 39, in ipl.interp_masked1d(chromadata_instance) File "/usr/lib/python2.6/site-packages/scikits/timeseries/lib/interpolate.py", line 125, in interp_masked1d marr = marray(marr, copy=True) File "/usr/lib/python2.6/site-packages/numpy/ma/core.py", line 5491, in array fill_value=fill_value, ndmin=ndmin, shrink=shrink) TypeError: __init__() takes at least 3 non-keyword arguments (2 given) The whole code can be found at: http://pastebin.com/Vd3mCr8M I've tried converting ChromaData instance to TimeSeries before feeding interp_masked1d with it, like ipl.interp_masked1d(ts.TimeSeries(chromadata_instance.data, chromadata_instance.dates), kind='linear') and it works, but it's just a workaround... Thank you for all possible help, Pawe? From johnl at cs.wisc.edu Sat Oct 2 20:25:51 2010 From: johnl at cs.wisc.edu (J. David Lee) Date: Sat, 02 Oct 2010 19:25:51 -0500 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: References: <1286040458.15794.13.camel@johnl-laptop> Message-ID: <1286065551.15794.21.camel@johnl-laptop> Pawe?, It looks to me like ipl.interp_masked1d is calling the constructor of the ChromaData class. If this is the case, you'll have to make sure that the ChromaData constructor has the same signature as the TimeSeries constructor. You can pass `data' and `dates' as keyword arguments: class ChromaData(ts.TimeSeries): u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" def __init__(self, *args, data = None, dates = None, **kwargs): super(ChromaData, self).__init__(*args, **kwargs) Just be sure that you don't have problems when `data' and `dates' are None. This way when another part of the code tries to create a ChromaData object in place of a TimeSeries object it will work properly. David On Sun, 2010-10-03 at 02:14 +0200, Pawe? Rumian wrote: > Thank you J. David - your mail was a significant help. I've read about > *args and *kwargs, and they indeed seem to be important in such cases. > > However I still cannot solve the problem... 
For me it seems that the > function interp_masked1d from scikits.timeseries.lib.interpolate won't > work with anything else than TimeSeries instance - or is it still me > doing something wrong, maybe with initializing the instance of > ChromaData? > > I'm trying to do is as simple as it could be - I stripped the whole > ChromaData Class to single __init__ method, like this: > > class ChromaData(ts.TimeSeries): > u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" > > def __init__(self,data,dates,*args,**kwargs): > super(ChromaData, self).__init__(*args,**kwargs) > > Then I create a TimeSeries instance, loading it from file > (get_data_from_file is just a wrapper to ts.tsfromtxt), and a > ChromaData instance: > > timeseries_instance= get_data_from_file('testdata.dat') > chromadata_instance= > ChromaData(timeseries_instance.data,timeseries_instance.dates) > > For certainity: > print (timeseries_instance.__class__) > > > print (chromadata_instance.__class__) > > > And then I run > ipl.interp_masked1d(timeseries_instance) > (everything is OK here) > > ipl.interp_masked1d(chromadata_instance) > > and I get: > > Traceback (most recent call last): > File "testcase.py", line 39, in > ipl.interp_masked1d(chromadata_instance) > File "/usr/lib/python2.6/site-packages/scikits/timeseries/lib/interpolate.py", > line 125, in interp_masked1d > marr = marray(marr, copy=True) > File "/usr/lib/python2.6/site-packages/numpy/ma/core.py", line 5491, in array > fill_value=fill_value, ndmin=ndmin, shrink=shrink) > TypeError: __init__() takes at least 3 non-keyword arguments (2 given) > > The whole code can be found at: http://pastebin.com/Vd3mCr8M > > I've tried converting ChromaData instance to TimeSeries before feeding > interp_masked1d with it, like > ipl.interp_masked1d(ts.TimeSeries(chromadata_instance.data, > chromadata_instance.dates), kind='linear') > and it works, but it's just a workaround... > > Thank you for all possible help, > Pawe? > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From gorkypl at gmail.com Sat Oct 2 20:44:30 2010 From: gorkypl at gmail.com (=?UTF-8?Q?Pawe=C5=82_Rumian?=) Date: Sun, 3 Oct 2010 02:44:30 +0200 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: <1286065551.15794.21.camel@johnl-laptop> References: <1286040458.15794.13.camel@johnl-laptop> <1286065551.15794.21.camel@johnl-laptop> Message-ID: Great, it works! I've checked all methods, but somehow missed passing those 'additionalArgs=None' to __init__ - I probably thought it would make no sense to initialize an instance with empty data and dates... Anyway, thank you for the patience. greetings, Pawe? 2010/10/3 J. David Lee : > Pawe?, > > It looks to me like ipl.interp_masked1d is calling the constructor of > the ChromaData class. If this is the case, you'll have to make sure that > the ChromaData constructor has the same signature as the TimeSeries > constructor. You can pass `data' and `dates' as keyword arguments: > > ?class ChromaData(ts.TimeSeries): > ? ?u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" > > ? ?def __init__(self, *args, data = None, dates = None, **kwargs): > ? ? ? ?super(ChromaData, self).__init__(*args, **kwargs) > > Just be sure that you don't have problems when `data' and `dates' are > None. 
> > This way when another part of the code tries to create a ChromaData > object in place of a TimeSeries object it will work properly. > > David > > On Sun, 2010-10-03 at 02:14 +0200, Pawe? Rumian wrote: >> Thank you J. David - your mail was a significant help. I've read about >> *args and *kwargs, and they indeed seem to be important in such cases. >> >> However I still cannot solve the problem... For me it seems that the >> function interp_masked1d from scikits.timeseries.lib.interpolate won't >> work with anything else than TimeSeries instance - or is it still me >> doing something wrong, maybe with initializing the instance of >> ChromaData? >> >> I'm trying to do is as simple as it could be - I stripped the whole >> ChromaData Class to single __init__ method, like this: >> >> class ChromaData(ts.TimeSeries): >> ? ? u"""Represents chromatographic timeseries data, inherits from TimeSeries.""" >> >> ? ? def __init__(self,data,dates,*args,**kwargs): >> ? ? ? ? super(ChromaData, self).__init__(*args,**kwargs) >> >> Then I create a TimeSeries instance, loading it from file >> (get_data_from_file is just a wrapper to ts.tsfromtxt), and a >> ChromaData instance: >> >> timeseries_instance= get_data_from_file('testdata.dat') >> chromadata_instance= >> ChromaData(timeseries_instance.data,timeseries_instance.dates) >> >> For certainity: >> print (timeseries_instance.__class__) >> >> >> print (chromadata_instance.__class__) >> >> >> And then I run >> ipl.interp_masked1d(timeseries_instance) >> (everything is OK here) >> >> ipl.interp_masked1d(chromadata_instance) >> >> and I get: >> >> Traceback (most recent call last): >> ? File "testcase.py", line 39, in >> ? ? ipl.interp_masked1d(chromadata_instance) >> ? File "/usr/lib/python2.6/site-packages/scikits/timeseries/lib/interpolate.py", >> line 125, in interp_masked1d >> ? ? marr = marray(marr, copy=True) >> ? File "/usr/lib/python2.6/site-packages/numpy/ma/core.py", line 5491, in array >> ? ? fill_value=fill_value, ndmin=ndmin, shrink=shrink) >> TypeError: __init__() takes at least 3 non-keyword arguments (2 given) >> >> The whole code can be found at: http://pastebin.com/Vd3mCr8M >> >> I've tried converting ChromaData instance to TimeSeries before feeding >> interp_masked1d with it, like >> ipl.interp_masked1d(ts.TimeSeries(chromadata_instance.data, >> chromadata_instance.dates), kind='linear') >> and it works, but it's just a workaround... >> >> Thank you for all possible help, >> Pawe? >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josh.holbrook at gmail.com Sat Oct 2 23:52:37 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Sat, 2 Oct 2010 19:52:37 -0800 Subject: [SciPy-User] matplotlib with UI In-Reply-To: References: Message-ID: On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba wrote: > I want to use matplotlib to visualiize data sets but > I also want to create a UI that lets me manipulate > properties of that data and see the result. > > Does matplotlib support creating UI interfaces or > do I need to use some other package? 
> > > > Thanks, > Anthony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > Hey Anthony, I'm no expert on plotting in python, but you may find Chaco useful--it's Enthought's plotting library, and is intended for use in GUIs. http://code.enthought.com/chaco/ --Josh From johnl at cs.wisc.edu Sun Oct 3 00:23:11 2010 From: johnl at cs.wisc.edu (J. David Lee) Date: Sat, 02 Oct 2010 23:23:11 -0500 Subject: [SciPy-User] matplotlib with UI In-Reply-To: References: Message-ID: <1286079791.15794.29.camel@johnl-laptop> Integrating a matplotlib window into a wxPython app is pretty easy. I import the following: from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg from matplotlib.figure import Figure Create the controls: figure = Figure(figsize=(0.1,0.1)) canvas = FigureCanvasWxAgg(wxParent, -1, figure) toolbar = NavigationToolbar2WxAgg(canvas) Then you add the canvas and toolbar to a layout. Now you can plot as usual: figure.clf() ax = figure.gca() ax.plot(x, y) canvas.draw() David On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: > On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba wrote: > > I want to use matplotlib to visualiize data sets but > > I also want to create a UI that lets me manipulate > > properties of that data and see the result. > > > > Does matplotlib support creating UI interfaces or > > do I need to use some other package? > > > > > > > > Thanks, > > Anthony > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > Hey Anthony, > > I'm no expert on plotting in python, but you may find Chaco > useful--it's Enthought's plotting library, and is intended for use in > GUIs. > > http://code.enthought.com/chaco/ > > --Josh > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pgmdevlist at gmail.com Sun Oct 3 06:51:29 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 3 Oct 2010 12:51:29 +0200 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: References: <1286040458.15794.13.camel@johnl-laptop> <1286065551.15794.21.camel@johnl-laptop> Message-ID: On Oct 3, 2010, at 2:44 AM, Pawe? Rumian wrote: > Great, it works! > > I've checked all methods, but somehow missed passing those > 'additionalArgs=None' to __init__ - I probably thought it would make > no sense to initialize an instance with empty data and dates... > Anyway, thank you for the patience. Check also the wiki on subclassing: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html And some examples of applications of scikits.timeseries in hydroclimpy http://hydroclimpy.sourceforge.net/ From gorkypl at gmail.com Sun Oct 3 08:46:23 2010 From: gorkypl at gmail.com (=?UTF-8?Q?Pawe=C5=82_Rumian?=) Date: Sun, 3 Oct 2010 14:46:23 +0200 Subject: [SciPy-User] [scikits-timeseries] Some problems with inheritance (begginer's question, long) In-Reply-To: References: <1286040458.15794.13.camel@johnl-laptop> <1286065551.15794.21.camel@johnl-laptop> Message-ID: > Check also the wiki on subclassing: > http://docs.scipy.org/doc/numpy/user/basics.subclassing.html That's probably exactly what I need, thank you. greetings, Pawe? 
From jpjadaud at orange.fr Sun Oct 3 10:28:17 2010 From: jpjadaud at orange.fr (jp) Date: Sun, 03 Oct 2010 16:28:17 +0200 Subject: [SciPy-User] matplotlib with UI In-Reply-To: References: Message-ID: <1286116097.1582.4.camel@evans> Le samedi 02 octobre 2010 ? 16:34 -0500, Anthony Palomba a ?crit : > I want to use matplotlib to visualiize data sets but > I also want to create a UI that lets me manipulate > properties of that data and see the result. > > Does matplotlib support creating UI interfaces or > do I need to use some other package? > > > > Thanks, > Anthony > ___________ > ____________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user You should have a look at those two following sites (and the code examples they provide) to see whether it can help you to do what you want : http://packages.python.org/guiqwt/ http://packages.python.org/guidata/ Cheers, Jean-paul From apalomba at austin.rr.com Sun Oct 3 20:04:02 2010 From: apalomba at austin.rr.com (Anthony Palomba) Date: Sun, 3 Oct 2010 19:04:02 -0500 Subject: [SciPy-User] matplotlib with UI In-Reply-To: <1286079791.15794.29.camel@johnl-laptop> References: <1286079791.15794.29.camel@johnl-laptop> Message-ID: Thanks for the tips guys! Will give it try. -ap On Sat, Oct 2, 2010 at 11:23 PM, J. David Lee wrote: > Integrating a matplotlib window into a wxPython app is pretty easy. I > import the following: > > from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg > from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg > from matplotlib.figure import Figure > > Create the controls: > > figure = Figure(figsize=(0.1,0.1)) > canvas = FigureCanvasWxAgg(wxParent, -1, figure) > toolbar = NavigationToolbar2WxAgg(canvas) > > Then you add the canvas and toolbar to a layout. > > Now you can plot as usual: > > figure.clf() > ax = figure.gca() > ax.plot(x, y) > canvas.draw() > > David > > > On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: > > On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba > wrote: > > > I want to use matplotlib to visualiize data sets but > > > I also want to create a UI that lets me manipulate > > > properties of that data and see the result. > > > > > > Does matplotlib support creating UI interfaces or > > > do I need to use some other package? > > > > > > > > > > > > Thanks, > > > Anthony > > > > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > Hey Anthony, > > > > I'm no expert on plotting in python, but you may find Chaco > > useful--it's Enthought's plotting library, and is intended for use in > > GUIs. > > > > http://code.enthought.com/chaco/ > > > > --Josh > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Dharhas.Pothina at twdb.state.tx.us Mon Oct 4 09:30:14 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 04 Oct 2010 08:30:14 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file Message-ID: <4CA99095.63BA.009B.1@twdb.state.tx.us> Hi, I'm using scipy 0.8.0 and having problems using the loadmat function. I've looked through the forum and found a few posts that seem related separately to this warning and error but not the solution. Any help is appreciated. I'm receiving the following warning when importing scipy.io In [4]: import scipy.io C:\Python26\lib\site-packages\scipy\io\matlab\mio5.py:90: RuntimeWarning: __builtin__.file size changed, may indicate binary incompatibility from mio5_utils import VarReader5 And then the following error when trying to read a .mat file In [5]: a = scipy.io.loadmat('20100819102442.mat') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) C:\Documents and Settings\dpothina\My Documents\Projects\InstreamFlows\M9\ in () C:\Python26\lib\site-packages\scipy\io\matlab\mio.pyc in loadmat(file_name, mdict, appendmat, **kwar gs) 138 ''' 139 MR = mat_reader_factory(file_name, appendmat, **kwargs) --> 140 matfile_dict = MR.get_variables() 141 if mdict is not None: 142 mdict.update(matfile_dict) C:\Python26\lib\site-packages\scipy\io\matlab\mio5.pyc in get_variables(self, variable_names) 416 continue 417 try: --> 418 res = self.read_var_array(hdr, process) 419 except MatReadError, err: 420 warnings.warn( C:\Python26\lib\site-packages\scipy\io\matlab\mio5.pyc in read_var_array(self, header, process) 385 `process`. 386 ''' --> 387 return self._matrix_reader.array_from_header(header, process) 388 389 def get_variables(self, variable_names=None): C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.array_from_header (scipy\io\matlab\mio5_utils.c:4821)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.array_from_header (scipy\io\matlab\mio5_utils.c:4491)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.read_struct (scipy\io\matlab\mio5_utils.c:6713)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.read_mi_matrix (scipy\io\matlab\mio5_utils.c:4079)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.array_from_header (scipy\io\matlab\mio5_utils.c:4491)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.read_struct (scipy\io\matlab\mio5_utils.c:6713)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.read_mi_matrix (scipy\io\matlab\mio5_utils.c:4079)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.array_from_header (scipy\io\matlab\mio5_utils.c:4406)() C:\Python26\lib\site-packages\scipy\io\matlab\mio5_utils.pyd in scipy.io.matlab.mio5_utils.VarReader 5.read_char (scipy\io\matlab\mio5_utils.c:5555)() TypeError: buffer is too small for requested array thanks - dharhas From pav at iki.fi Mon Oct 4 10:16:04 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 4 Oct 2010 14:16:04 +0000 (UTC) Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file References: <4CA99095.63BA.009B.1@twdb.state.tx.us> 
Message-ID: Mon, 04 Oct 2010 08:30:14 -0500, Dharhas Pothina wrote: > I'm using scipy 0.8.0 and having problems using the loadmat function. > I've looked through the forum and found a few posts that seem related > separately to this warning and error but not the solution. Any help is > appreciated. (i) Does it happen with a specific .mat file only, or with all files? If a specific file, it would be useful if you can put it available somewhere. (ii) How did you install Scipy 0.8.0 (which binaries?) (iii) Try running the test suite, >>> import scipy.io >>> scipy.io.test("full", verbose=2, extra_argv=["--exe"]) Does it pass? From Dharhas.Pothina at twdb.state.tx.us Mon Oct 4 11:24:33 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 04 Oct 2010 10:24:33 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Pauli, It looks like it fails with files generated by one of our instruments. I was able to read in another type of matlab file qithout a problem. I think I was able to read this type of file in a previous version of scipy (can't tell you which one now, it was a long time ago), when I had tested our ability to read this type of file. I also tried scipy v0.7.0 installed on one of our linux boxes just know and it gave me the same error (but not the warning). Running the test suite on the windows box gave me three errors listed below, on linux (w/ ver scipy 7.0) all tests passed. I've posted two sample files at the links below. On windows I'm installed with the scipy-0.8.0-win32-superpack-python2.6.exe binary. sample files: http://midgewater.twdb.state.tx.us/bays_estuaries/Publications\20100819125947.mat (~330KB) http://midgewater.twdb.state.tx.us/bays_estuaries/Publications\20100819102442.mat (~18mb) scipy.io test errors on windows 32 bit: ====================================================================== ERROR: test_complex (test_array_import.TestReadArray) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", line 38, in test_complex io.write_array(fname,a) AttributeError: 'module' object has no attribute 'write_array' ====================================================================== ERROR: test_float (test_array_import.TestReadArray) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", line 46, in test_float io.write_array(fname,a) AttributeError: 'module' object has no attribute 'write_array' ====================================================================== ERROR: test_integer (test_array_import.TestReadArray) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", line 55, in test_integer io.write_array(fname,a) AttributeError: 'module' object has no attribute 'write_array' ---------------------------------------------------------------------- Ran 386 tests in 8.061s FAILED (errors=3) Out[7]: - dharhas From pav at iki.fi Mon Oct 4 12:04:35 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 4 Oct 2010 16:04:35 +0000 (UTC) Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file References: 
<4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: Mon, 04 Oct 2010 10:24:33 -0500, Dharhas Pothina wrote: > It looks like it fails with files generated by one of our instruments. I > was able to read in another type of matlab file without a problem. I > think I was able to read this type of file in a previous version of > scipy (can't tell you which one now, it was a long time ago), when I had > tested our ability to read this type of file. I also tried scipy v0.7.0 > installed on one of our linux boxes just know and it gave me the same > error (but not the warning). Running the test suite on the windows box > gave me three errors listed below, on linux (w/ ver scipy 7.0) all tests > passed. I've posted two sample files at the links below. On windows I'm > installed with the scipy-0.8.0-win32-superpack-python2.6.exe binary. Ok, access to the failing file is probably needed to sort this out. > sample files: > > http://midgewater.twdb.state.tx.us/bays_estuaries/Publications \20100819125947.mat > (~330KB) > http://midgewater.twdb.state.tx.us/bays_estuaries/Publications \20100819102442.mat > (~18mb) I'm not able to access these -- maybe your institute has a firewall in between? You can also e.g. send the smaller file to me via mail. > scipy.io test errors on windows 32 bit: [clip] > "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", There's no file named "test_array_import.py" in Scipy 0.8.0. You probably have some left-over files from Scipy 0.7.x in the directory. Anyway, the test suite seems to be in order. -- Pauli Virtanen From Dharhas.Pothina at twdb.state.tx.us Mon Oct 4 12:26:04 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 04 Oct 2010 11:26:04 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CA9B9CC.63BA.009B.1@twdb.state.tx.us> hmm. That site should be outside our firewall. I'll send you the smaller file via email to your personal email, since I'm not sure what the list policy on attachment sizes. - dharhas >>> Pauli Virtanen 10/4/2010 11:04 AM >>> Mon, 04 Oct 2010 10:24:33 -0500, Dharhas Pothina wrote: > It looks like it fails with files generated by one of our instruments. I > was able to read in another type of matlab file without a problem. I > think I was able to read this type of file in a previous version of > scipy (can't tell you which one now, it was a long time ago), when I had > tested our ability to read this type of file. I also tried scipy v0.7.0 > installed on one of our linux boxes just know and it gave me the same > error (but not the warning). Running the test suite on the windows box > gave me three errors listed below, on linux (w/ ver scipy 7.0) all tests > passed. I've posted two sample files at the links below. On windows I'm > installed with the scipy-0.8.0-win32-superpack-python2.6.exe binary. Ok, access to the failing file is probably needed to sort this out. > sample files: > > http://midgewater.twdb.state.tx.us/bays_estuaries/Publications \20100819125947.mat > (~330KB) > http://midgewater.twdb.state.tx.us/bays_estuaries/Publications \20100819102442.mat > (~18mb) I'm not able to access these -- maybe your institute has a firewall in between? You can also e.g. send the smaller file to me via mail. 
> scipy.io test errors on windows 32 bit: [clip] > "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", There's no file named "test_array_import.py" in Scipy 0.8.0. You probably have some left-over files from Scipy 0.7.x in the directory. Anyway, the test suite seems to be in order. -- Pauli Virtanen _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Mon Oct 4 12:29:20 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 4 Oct 2010 12:29:20 -0400 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CA9B9CC.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CA9B9CC.63BA.009B.1@twdb.state.tx.us> Message-ID: On Mon, Oct 4, 2010 at 12:26 PM, Dharhas Pothina wrote: > > hmm. That site should be outside our firewall. I'll send you the smaller file via email to your personal email, since I'm not sure what the list policy on attachment sizes. > > - dharhas > >>>> Pauli Virtanen 10/4/2010 11:04 AM >>> > Mon, 04 Oct 2010 10:24:33 -0500, Dharhas Pothina wrote: >> It looks like it fails with files generated by one of our instruments. I >> was able to read in another type of matlab file without a problem. I >> think I was able to read this type of file in a previous version of >> scipy (can't tell you which one now, it was a long time ago), when I had >> tested our ability to read this type of file. I also tried scipy v0.7.0 >> installed on one of our linux boxes just know and it gave me the same >> error (but not the warning). Running the test suite on the windows box >> gave me three errors listed below, on linux (w/ ver scipy 7.0) all tests >> passed. I've posted two sample files at the links below. On windows I'm >> installed with the scipy-0.8.0-win32-superpack-python2.6.exe binary. > > Ok, access to the failing file is probably needed to sort this out. > >> sample files: >> >> http://midgewater.twdb.state.tx.us/bays_estuaries/Publications > \20100819125947.mat >> (~330KB) >> http://midgewater.twdb.state.tx.us/bays_estuaries/Publications > \20100819102442.mat >> (~18mb) a forward slash does it for me: http://midgewater.twdb.state.tx.us/bays_estuaries/Publications/20100819102442.mat Josef > > I'm not able to access these -- maybe your institute has a firewall in > between? You can also e.g. send the smaller file to me via mail. > >> scipy.io test errors on windows 32 bit: > [clip] >> ? "C:\Python26\lib\site-packages\scipy\io\tests\test_array_import.py", > > There's no file named "test_array_import.py" in Scipy 0.8.0. You probably > have some left-over files from Scipy 0.7.x in the directory. Anyway, the > test suite seems to be in order. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wesmckinn at gmail.com Mon Oct 4 12:35:58 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 4 Oct 2010 12:35:58 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? 
Message-ID: I'm starting to notice miscellaneous issues in scipy.stats.distributions that should be worth fixing-- I can start filing tickets but I wondered if there were generally any plans to give the distributions a working over. One example: gamma(n) = (n - 1)! obviously blows up when n is sufficiently large. So basically anywhere special.gamma is used in a calculation is potentially at risk. For example, in Bayesian inference it's not uncommon to derive gamma posterior distributions with very small scale, functions like pdf don't work: class gamma_gen(rv_continuous): def _pdf(self, x, a): return x**(a-1)*exp(-x)/special.gamma(a) # using rpy2 In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) Out[180]: [9.0521612284316788] In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) Out[181]: nan I can fix some of these things (this one's easy-- take logs and use gammaln) but wondered first if there were any other plans for this code. - Wes From Dharhas.Pothina at twdb.state.tx.us Mon Oct 4 12:43:17 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 04 Oct 2010 11:43:17 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CA9B9CC.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CA9BDD5.63BA.009B.1@twdb.state.tx.us> > a forward slash does it for me: > http://midgewater.twdb.state.tx.us/bays_estuaries/Publications/20100819102442.mat > > Josef Thanks, I didn't notice that. - dharhas From iwanttobeabadger at googlemail.com Mon Oct 4 12:59:11 2010 From: iwanttobeabadger at googlemail.com (Nathan Harmston) Date: Mon, 4 Oct 2010 17:59:11 +0100 Subject: [SciPy-User] lower bounds argument for scipy.optimize.anneal Message-ID: Hi, So I'm trying to use the siman function to minimise a likelihood function: print anneal(anneal_ll,[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],lower=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], full_output=1,feps=1e-4,maxiter=1000000000,schedule='cauchy') My parameters estimate properly however I get negative values for some of the parameters. I thought the lower argument specified the lowest value a parameter could take? Am I doing something incredibly stupid or is this functionality not implemented? Many thanks in advance, Nathan Harmston -- Nathan Harmston PhD Researcher: Theoretical Systems Biology Biochemistry Building Department of Biological Sciences Imperial College London SW7 2AY London Tel: +44 (0)20 7594 5044 Theoretical Systems Biology - Imperial College London http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston http://biogeeks.wordpress.com/ http://biostar.stackexchange.com/users/236/nathan-harmston From robert.kern at gmail.com Mon Oct 4 13:03:40 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 4 Oct 2010 12:03:40 -0500 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 11:35, Wes McKinney wrote: > I'm starting to notice miscellaneous issues in > scipy.stats.distributions that should be worth fixing-- I can start > filing tickets but I wondered if there were generally any plans to > give the distributions a working over. > > One example: gamma(n) = (n - 1)! obviously blows up when n is > sufficiently large. So basically anywhere special.gamma is used in a > calculation is potentially at risk. 
For example, in Bayesian inference > it's not uncommon to derive gamma posterior distributions with very > small scale, functions like pdf don't work: > > class gamma_gen(rv_continuous): > ? ?def _pdf(self, x, a): > ? ? ? ?return x**(a-1)*exp(-x)/special.gamma(a) > > # using rpy2 > In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) > Out[180]: [9.0521612284316788] > > In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) > Out[181]: nan > > I can fix some of these things (this one's easy-- take logs and use > gammaln) but wondered first if there were any other plans for this > code. I think fixes like this are perfectly appropriate and shouldn't interfere with anyone's plans, if there are any. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From robert.kern at gmail.com Mon Oct 4 13:04:26 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 4 Oct 2010 12:04:26 -0500 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 11:35, Wes McKinney wrote: > I'm starting to notice miscellaneous issues in > scipy.stats.distributions that should be worth fixing-- I can start > filing tickets but I wondered if there were generally any plans to > give the distributions a working over. > > One example: gamma(n) = (n - 1)! obviously blows up when n is > sufficiently large. So basically anywhere special.gamma is used in a > calculation is potentially at risk. For example, in Bayesian inference > it's not uncommon to derive gamma posterior distributions with very > small scale, functions like pdf don't work: > > class gamma_gen(rv_continuous): > ? ?def _pdf(self, x, a): > ? ? ? ?return x**(a-1)*exp(-x)/special.gamma(a) > > # using rpy2 > In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) > Out[180]: [9.0521612284316788] > > In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) > Out[181]: nan > > I can fix some of these things (this one's easy-- take logs and use > gammaln) but wondered first if there were any other plans for this > code. I think fixes like this are perfectly appropriate and shouldn't interfere with anyone's plans, if there are any. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Mon Oct 4 13:30:35 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 4 Oct 2010 13:30:35 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 1:03 PM, Robert Kern wrote: > On Mon, Oct 4, 2010 at 11:35, Wes McKinney wrote: >> I'm starting to notice miscellaneous issues in >> scipy.stats.distributions that should be worth fixing-- I can start >> filing tickets but I wondered if there were generally any plans to >> give the distributions a working over. >> >> One example: gamma(n) = (n - 1)! obviously blows up when n is >> sufficiently large. So basically anywhere special.gamma is used in a >> calculation is potentially at risk. For example, in Bayesian inference >> it's not uncommon to derive gamma posterior distributions with very >> small scale, functions like pdf don't work: >> >> class gamma_gen(rv_continuous): >> ? ?def _pdf(self, x, a): >> ? ? ? 
?return x**(a-1)*exp(-x)/special.gamma(a) >> >> # using rpy2 >> In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) >> Out[180]: [9.0521612284316788] >> >> In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) >> Out[181]: nan >> >> I can fix some of these things (this one's easy-- take logs and use >> gammaln) but wondered first if there were any other plans for this >> code. > > I think fixes like this are perfectly appropriate and shouldn't > interfere with anyone's plans, if there are any. We have been improving a lot of this precision problems (and bug fixes) over the last two years. And there are still many left. Any improvements are very welcome. What's contentious is the big picture and the fit function, where I still don't know how far I want to fork it. (And don't use python's math module if you want to be robust to funny cases.) Josef > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From matthew.brett at gmail.com Mon Oct 4 13:31:43 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Oct 2010 10:31:43 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, On Mon, Oct 4, 2010 at 9:04 AM, Pauli Virtanen wrote: > Mon, 04 Oct 2010 10:24:33 -0500, Dharhas Pothina wrote: >> It looks like it fails with files generated by one of our instruments. I >> was able to read in another type of matlab file without a problem. I >> think I was able to read this type of file in a previous version of >> scipy (can't tell you which one now, it was a long time ago), when I had >> tested our ability to read this type of file. I also tried scipy v0.7.0 >> installed on one of our linux boxes just know and it gave me the same >> error (but not the warning). Running the test suite on the windows box >> gave me three errors listed below, on linux (w/ ver scipy 7.0) all tests >> passed. I've posted two sample files at the links below. On windows I'm >> installed with the scipy-0.8.0-win32-superpack-python2.6.exe binary. > > Ok, access to the failing file is probably needed to sort this out. It's very possible that the behavior of loadmat has changed because I refactored it radically between 0.7 and 0.8. I'll have a look your file this morning in case it will save anyone some time, See you, Matthew From wesmckinn at gmail.com Mon Oct 4 13:33:13 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 4 Oct 2010 13:33:13 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 1:30 PM, wrote: > On Mon, Oct 4, 2010 at 1:03 PM, Robert Kern wrote: >> On Mon, Oct 4, 2010 at 11:35, Wes McKinney wrote: >>> I'm starting to notice miscellaneous issues in >>> scipy.stats.distributions that should be worth fixing-- I can start >>> filing tickets but I wondered if there were generally any plans to >>> give the distributions a working over. >>> >>> One example: gamma(n) = (n - 1)! obviously blows up when n is >>> sufficiently large. So basically anywhere special.gamma is used in a >>> calculation is potentially at risk. 
For example, in Bayesian inference >>> it's not uncommon to derive gamma posterior distributions with very >>> small scale, functions like pdf don't work: >>> >>> class gamma_gen(rv_continuous): >>> ? ?def _pdf(self, x, a): >>> ? ? ? ?return x**(a-1)*exp(-x)/special.gamma(a) >>> >>> # using rpy2 >>> In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) >>> Out[180]: [9.0521612284316788] >>> >>> In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) >>> Out[181]: nan >>> >>> I can fix some of these things (this one's easy-- take logs and use >>> gammaln) but wondered first if there were any other plans for this >>> code. >> >> I think fixes like this are perfectly appropriate and shouldn't >> interfere with anyone's plans, if there are any. > > We have been improving a lot of this precision problems (and bug > fixes) over the last two years. And there are still many left. Any > improvements are very welcome. > > What's contentious is the big picture and the fit function, where I > still don't know how far I want to fork it. > > (And don't use python's math module if you want to be robust to funny cases.) > > Josef > >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> ? -- Umberto Eco >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Got it. Is SciPy development still taking place in SVN? I should get commit rights and start chipping away on these misc issues. From josef.pktd at gmail.com Mon Oct 4 13:47:08 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 4 Oct 2010 13:47:08 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney wrote: > On Mon, Oct 4, 2010 at 1:30 PM, ? wrote: >> On Mon, Oct 4, 2010 at 1:03 PM, Robert Kern wrote: >>> On Mon, Oct 4, 2010 at 11:35, Wes McKinney wrote: >>>> I'm starting to notice miscellaneous issues in >>>> scipy.stats.distributions that should be worth fixing-- I can start >>>> filing tickets but I wondered if there were generally any plans to >>>> give the distributions a working over. >>>> >>>> One example: gamma(n) = (n - 1)! obviously blows up when n is >>>> sufficiently large. So basically anywhere special.gamma is used in a >>>> calculation is potentially at risk. For example, in Bayesian inference >>>> it's not uncommon to derive gamma posterior distributions with very >>>> small scale, functions like pdf don't work: >>>> >>>> class gamma_gen(rv_continuous): >>>> ? ?def _pdf(self, x, a): >>>> ? ? ? ?return x**(a-1)*exp(-x)/special.gamma(a) >>>> >>>> # using rpy2 >>>> In [180]: list(r.dgamma(2.2, 2505, scale=1./1137)) >>>> Out[180]: [9.0521612284316788] >>>> >>>> In [181]: stats.gamma(2505, scale=1./1137).pdf(2.2) >>>> Out[181]: nan >>>> >>>> I can fix some of these things (this one's easy-- take logs and use >>>> gammaln) but wondered first if there were any other plans for this >>>> code. >>> >>> I think fixes like this are perfectly appropriate and shouldn't >>> interfere with anyone's plans, if there are any. 
>> >> We have been improving a lot of this precision problems (and bug >> fixes) over the last two years. And there are still many left. Any >> improvements are very welcome. >> >> What's contentious is the big picture and the fit function, where I >> still don't know how far I want to fork it. >> >> (And don't use python's math module if you want to be robust to funny cases.) >> >> Josef >> >>> >>> -- >>> Robert Kern >>> >>> "I have come to believe that the whole world is an enigma, a harmless >>> enigma that is made terrible by our own mad attempt to interpret it as >>> though it had an underlying truth." >>> ? -- Umberto Eco >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > Got it. Is SciPy development still taking place in SVN? I should get > commit rights and start chipping away on these misc issues. Either svn, or if you are comfortable with git, through github like many of the developers work now. But I don't know any details about that route. Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Mon Oct 4 14:11:36 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 4 Oct 2010 18:11:36 +0000 (UTC) Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CA9B9CC.63BA.009B.1@twdb.state.tx.us> Message-ID: Mon, 04 Oct 2010 11:26:04 -0500, Dharhas Pothina wrote: > hmm. That site should be outside our firewall. I'll send you the smaller > file via email to your personal email, since I'm not sure what the list > policy on attachment sizes. Ok, I seem to be able to download these with Google Chrome, but not with Wget. Apparently, the site does some filtering based on the user agent string... -- Pauli Virtanen From pav at iki.fi Mon Oct 4 14:55:29 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 4 Oct 2010 18:55:29 +0000 (UTC) Subject: [SciPy-User] Any planned work on scipy.stats.distributions? References: Message-ID: Mon, 04 Oct 2010 13:47:08 -0400, josef.pktd wrote: > On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney > wrote: [clip] >> Got it. Is SciPy development still taking place in SVN? I should get >> commit rights and start chipping away on these misc issues. > > Either svn, or if you are comfortable with git, through github like many > of the developers work now. > But I don't know any details about that route. I'd recommend going for git already. 
Some instructions how to get set up using Scipy's SVN mirror can be found here: http://projects.scipy.org/numpy/wiki/GitMirror Pauli From josef.pktd at gmail.com Mon Oct 4 16:47:06 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 4 Oct 2010 16:47:06 -0400 Subject: [SciPy-User] lower bounds argument for scipy.optimize.anneal In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 12:59 PM, Nathan Harmston wrote: > Hi, > > So I'm trying to use the siman function to minimise a likelihood function: > > print anneal(anneal_ll,[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, > 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],lower=[0.0, 0.0, 0.0, 0.0, > 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], > full_output=1,feps=1e-4,maxiter=1000000000,schedule='cauchy') > > My parameters estimate properly however I get negative values for some > of the parameters. I thought the lower argument specified the lowest > value a parameter could take? Am I doing something incredibly stupid > or is this functionality not implemented? It's not implemented from what I understand, see http://projects.scipy.org/scipy/ticket/1126 Josef > > Many thanks in advance, > > Nathan Harmston > > -- > Nathan Harmston > PhD Researcher: Theoretical Systems Biology > Biochemistry Building > Department of Biological Sciences > Imperial College London > SW7 2AY London > Tel: +44 (0)20 7594 5044 > > Theoretical Systems Biology - Imperial College London > http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston > > http://biogeeks.wordpress.com/ > http://biostar.stackexchange.com/users/236/nathan-harmston > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From iwanttobeabadger at googlemail.com Mon Oct 4 17:20:21 2010 From: iwanttobeabadger at googlemail.com (Nathan Harmston) Date: Mon, 4 Oct 2010 22:20:21 +0100 Subject: [SciPy-User] lower bounds argument for scipy.optimize.anneal In-Reply-To: References: Message-ID: So is there any way I can hack around this functionality? I only want to allow my parameters to be zero or higher. I see that I can use some of these algorithms for constrained optimization: fmin_slsqp fmin_l_bfgs_b fmin_cobyla but I have never used any of them.......how sensitive are they to local minima? Thanks On 4 October 2010 21:47, wrote: > On Mon, Oct 4, 2010 at 12:59 PM, Nathan Harmston > wrote: >> Hi, >> >> So I'm trying to use the siman function to minimise a likelihood function: >> >> print anneal(anneal_ll,[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, >> 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],lower=[0.0, 0.0, 0.0, 0.0, >> 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], >> full_output=1,feps=1e-4,maxiter=1000000000,schedule='cauchy') >> >> My parameters estimate properly however I get negative values for some >> of the parameters. I thought the lower argument specified the lowest >> value a parameter could take? Am I doing something incredibly stupid >> or is this functionality not implemented? 
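
Since anneal's lower and upper arguments are not enforced (see the ticket referenced below), a minimal sketch of one workaround: fmin_l_bfgs_b honours per-parameter bounds, so nonnegativity can be imposed directly. The objective anneal_ll here is a hypothetical stand-in for the real likelihood:

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def anneal_ll(params):
    # hypothetical placeholder objective -- substitute the real likelihood
    return np.sum((np.asarray(params) - 1.0) ** 2)

x0 = [100.0] * 12
bounds = [(0.0, None)] * 12   # lower bound 0.0, no upper bound
xopt, fval, info = fmin_l_bfgs_b(anneal_ll, x0, approx_grad=True, bounds=bounds)
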
> > It's not implemented from what I understand, see > > http://projects.scipy.org/scipy/ticket/1126 > > Josef > >> >> Many thanks in advance, >> >> Nathan Harmston >> >> -- >> Nathan Harmston >> PhD Researcher: Theoretical Systems Biology >> Biochemistry Building >> Department of Biological Sciences >> Imperial College London >> SW7 2AY London >> Tel: +44 (0)20 7594 5044 >> >> Theoretical Systems Biology - Imperial College London >> http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston >> >> http://biogeeks.wordpress.com/ >> http://biostar.stackexchange.com/users/236/nathan-harmston >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Nathan Harmston PhD Researcher: Theoretical Systems Biology Biochemistry Building Department of Biological Sciences Imperial College London SW7 2AY London Tel: +44 (0)20 7594 5044 Theoretical Systems Biology - Imperial College London http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston http://biogeeks.wordpress.com/ http://biostar.stackexchange.com/users/236/nathan-harmston From Chris.Barker at noaa.gov Mon Oct 4 17:32:28 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 04 Oct 2010 14:32:28 -0700 Subject: [SciPy-User] matplotlib with UI In-Reply-To: <1286079791.15794.29.camel@johnl-laptop> References: <1286079791.15794.29.camel@johnl-laptop> Message-ID: <4CAA47EC.2040708@noaa.gov> J. David Lee wrote: > Integrating a matplotlib window into a wxPython app is pretty easy. I > import the following: > > from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg > from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg > from matplotlib.figure import Figure I like wxMPL: http://agni.phys.iit.edu/~kmcivor/wxmpl/ It's a wx wrapper around MPL that gives you nifty interactive features ready to go. -Chris > Create the controls: > > figure = Figure(figsize=(0.1,0.1)) > canvas = FigureCanvasWxAgg(wxParent, -1, figure) > toolbar = NavigationToolbar2WxAgg(canvas) > > Then you add the canvas and toolbar to a layout. > > Now you can plot as usual: > > figure.clf() > ax = figure.gca() > ax.plot(x, y) > canvas.draw() > > David > > > On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: >> On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba wrote: >>> I want to use matplotlib to visualiize data sets but >>> I also want to create a UI that lets me manipulate >>> properties of that data and see the result. >>> >>> Does matplotlib support creating UI interfaces or >>> do I need to use some other package? >>> >>> >>> >>> Thanks, >>> Anthony >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> Hey Anthony, >> >> I'm no expert on plotting in python, but you may find Chaco >> useful--it's Enthought's plotting library, and is intended for use in >> GUIs. 
>> >> http://code.enthought.com/chaco/ >> >> --Josh >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthew.brett at gmail.com Mon Oct 4 18:56:27 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Oct 2010 15:56:27 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > It's very possible that the behavior of loadmat has changed because I > refactored it radically between 0.7 and 0.8. > > I'll have a look your file this morning in case it will save anyone some time, I see that, if I load the 20100819102442.mat file in matlab and save it again, loadmat can read it. This is in matlab r2008a. I think the problem is arising in the 'System' variable in that file. How did this mat file arise? What version of matlab is it being saved from? With what flags? Best, Matthew From david_baddeley at yahoo.com.au Mon Oct 4 19:48:27 2010 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Mon, 4 Oct 2010 16:48:27 -0700 (PDT) Subject: [SciPy-User] lower bounds argument for scipy.optimize.anneal In-Reply-To: References: Message-ID: <646498.29690.qm@web113406.mail.gq1.yahoo.com> The classic hack to keep a parameter positive is to fit the square root of the parameter rather than the parameter itself - when you square your rejigged parameter in your expression it's guaranteed to be positive. cheers, David ----- Original Message ---- From: Nathan Harmston To: SciPy Users List Sent: Tue, 5 October, 2010 10:20:21 AM Subject: Re: [SciPy-User] lower bounds argument for scipy.optimize.anneal So is there any way I can hack around this functionality? I only want to allow my parameters to be zero or higher. I see that I can use some of these algorithms for constrained optimization: fmin_slsqp fmin_l_bfgs_b fmin_cobyla but I have never used any of them.......how sensitive are they to local minima? Thanks On 4 October 2010 21:47, wrote: > On Mon, Oct 4, 2010 at 12:59 PM, Nathan Harmston > wrote: >> Hi, >> >> So I'm trying to use the siman function to minimise a likelihood function: >> >> print anneal(anneal_ll,[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, >> 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],lower=[0.0, 0.0, 0.0, 0.0, >> 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], >> full_output=1,feps=1e-4,maxiter=1000000000,schedule='cauchy') >> >> My parameters estimate properly however I get negative values for some >> of the parameters. I thought the lower argument specified the lowest >> value a parameter could take? Am I doing something incredibly stupid >> or is this functionality not implemented? 
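
A minimal sketch of the square-root trick David describes above, again with a hypothetical placeholder standing in for the real anneal_ll: optimise over q = sqrt(p) and square inside the objective, so the recovered parameters can never be negative.

import numpy as np
from scipy.optimize import anneal

def anneal_ll(params):
    # hypothetical placeholder objective -- substitute the real likelihood
    return np.sum((np.asarray(params) - 1.0) ** 2)

def anneal_ll_sqrt(q):
    return anneal_ll(q ** 2)   # q = sqrt(params)

q0 = np.sqrt([100.0] * 12)
result = anneal(anneal_ll_sqrt, q0, full_output=1)
params = result[0] ** 2       # map back; nonnegative by construction
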
> > It's not implemented from what I understand, see > > http://projects.scipy.org/scipy/ticket/1126 > > Josef > >> >> Many thanks in advance, >> >> Nathan Harmston >> >> -- >> Nathan Harmston >> PhD Researcher: Theoretical Systems Biology >> Biochemistry Building >> Department of Biological Sciences >> Imperial College London >> SW7 2AY London >> Tel: +44 (0)20 7594 5044 >> >> Theoretical Systems Biology - Imperial College London >> http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston >> >> http://biogeeks.wordpress.com/ >> http://biostar.stackexchange.com/users/236/nathan-harmston >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Nathan Harmston PhD Researcher: Theoretical Systems Biology Biochemistry Building Department of Biological Sciences Imperial College London SW7 2AY London Tel: +44 (0)20 7594 5044 Theoretical Systems Biology - Imperial College London http://www3.imperial.ac.uk/theoreticalsystemsbiology/people/nathanharmston http://biogeeks.wordpress.com/ http://biostar.stackexchange.com/users/236/nathan-harmston _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From gdahl at cs.toronto.edu Mon Oct 4 20:52:52 2010 From: gdahl at cs.toronto.edu (George Dahl) Date: Mon, 4 Oct 2010 20:52:52 -0400 Subject: [SciPy-User] can anyone confirm this bug in scipy.io.savemat on the latest version of scipy? Message-ID: Hi everyone, The code snippet I have pasted below exhibits what I believe to be incorrect behavior on the version of scipy I happen to have installed ('0.8.0.dev6113'). Basically, scipy.io.savemat doesn't respect the dtype of the arrays it saves and everything gets saved as float64 for me. So files holding arrays of, for example, dtype=int8 balloon to a huge size when written to disk by scipy.io.savemat. ######################### import numpy as num from scipy.io import loadmat, savemat x = num.random.random_integers(64, size=(100,)) x = x.astype(num.int8) savemat("garb.mat", {'x':x}) d = loadmat("garb.mat") print d['x'].dtype, "should be int8" ########################## Can anyone else confirm that this happens on the latest version of scipy? - George From matthew.brett at gmail.com Tue Oct 5 00:41:10 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Oct 2010 21:41:10 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > I see that, if I load the 20100819102442.mat file in matlab and save > it again, loadmat can read it. ? This is in matlab r2008a. Just a little more on the file 20100819102442.mat I suspect it must be being written from outside matlab. The error you are getting is because there is a 1-length string which has a 0 byte length. Matlab copes OK with this, and returns a string ' ' - here for the System.Units struct 'Sample' field: Pitch: 'deg' Roll: 'deg' Heading: 'deg' Temperature: 'degC' Time: ' ' SNR: 'dB' Sample: ' ' Voltage: 'V' Cell_Start: 'm' Cell_Size: 'm' Step: ' ' Pings: ' ' scipy and octave barf. I don't know what scipy used to do in that situation, but I would be surprised if it hasn't always barfed. 
It's easy to fix that one, but then, once fixed, you get this: /Users/mb312/usr/local/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so in scipy.io.matlab.mio5_utils.VarReader5.read_struct (./mio5_utils.c:6836)() ValueError: two fields with the same name That turns out to be because the 'Summary' struct has 4 fields all named 'Station_Q': Top_Q: [34x1 double] Middle_Q: [34x1 double] Bottom_Q: [34x1 double] Left_Q: [34x1 double] Right_Q: [34x1 double] Total_Q: [34x1 double] Depth: [34x1 double] Cells: [34x1 double] Track: [34x2 double] Mean_Vel: [34x2 double] Boat_Vel: [34x4 double] Station_Q: [34x1 double] Station_Q: [34x1 double] Station_Q: [34x1 double] Station_Q: [34x1 double] Matlab can handle that, although I don't know how you can make a structure that has two fields with the same name - as far as I know there's no way to do that with matlab syntax, e.g: >> a = struct('field1', 1, 'field1', 2) ??? Error using ==> struct Duplicate field name "field1" I don't know how to return those guys to you either - structs come back either as record arrays or objects, and neither allow duplicate names. Of course we could rename the duplicates, but that would start to get a bit tangled. Is there any way you can load these into matlab and fix them up before loading into scipy? I'll commit the changes to allow the 1 length 0 byte strings tomorrow. Best, Matthew From matthew.brett at gmail.com Tue Oct 5 01:08:48 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Oct 2010 22:08:48 -0700 Subject: [SciPy-User] can anyone confirm this bug in scipy.io.savemat on the latest version of scipy? In-Reply-To: References: Message-ID: Hi, > The code snippet I have pasted below exhibits what I believe to be > incorrect behavior on the version of scipy I happen to have installed > ('0.8.0.dev6113'). Basically, scipy.io.savemat doesn't respect the > dtype of the arrays it saves and everything gets saved as float64 for > me. Yes, can confirm, and I agree it's not good - will have a look tomorrow. Best, Matthew From pav at iki.fi Tue Oct 5 05:09:32 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 5 Oct 2010 09:09:32 +0000 (UTC) Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: Mon, 04 Oct 2010 21:41:10 -0700, Matthew Brett wrote: [clip] > That turns out to be because the 'Summary' struct has 4 fields all named > 'Station_Q': [clip] > Matlab can handle that, although I don't know how you can make a > structure that has two fields with the same name - as far as I know > there's no way to do that with matlab syntax, e.g: > >>> a = struct('field1', 1, 'field1', 2) > ??? Error using ==> struct > Duplicate field name "field1" > > I don't know how to return those guys to you either - structs come back > either as record arrays or objects, and neither allow duplicate names. > Of course we could rename the duplicates, but that would start to get a > bit tangled. [clip] Renaming to "originalname_counter" sounds like an OK solution. Skipping duplicates + emitting a warning might also be OK... -- Pauli Virtanen From ndbecker2 at gmail.com Tue Oct 5 07:52:57 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 05 Oct 2010 07:52:57 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? References: Message-ID: I'm needing doubly noncentric F distribution. 
I found this: http://www.ifpress.com/pdfs/comm22-897.pdf It might not be the best, but it's all I found. Coded it up as attached: [non-text attachment scrubbed: test_double_f_distr.py, text/x-python, 980 bytes] From tomi.pievilainen at hut.fi Tue Oct 5 07:32:16 2010 From: tomi.pievilainen at hut.fi (Tomi Pieviläinen) Date: Tue, 5 Oct 2010 11:32:16 +0000 (UTC) Subject: [SciPy-User] Lines through subplots? Message-ID: Hi all, is it possible to draw a line / area through several vertically aligned subplots? I'd like to mark an event on one of the plots and make it easy to see what happens on the other subplots at the same point (the subplots share the x axis). -- Tomi Pieviläinen, +358 400 487 504 A: Because it disrupts the natural way of thinking. Q: Why is top posting frowned upon? From proscovia at aims.ac.za Tue Oct 5 08:29:31 2010 From: proscovia at aims.ac.za (Proscovia Namayanja) Date: Tue, 5 Oct 2010 14:29:31 +0200 (SAST) Subject: [SciPy-User] unsubscribe Message-ID: <15845.146.230.128.30.1286281771.squirrel@webmail.aims.ac.za> I would like to be removed from the list please. How do I do it? Thanks From pav at iki.fi Tue Oct 5 08:35:19 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 5 Oct 2010 12:35:19 +0000 (UTC) Subject: [SciPy-User] unsubscribe References: <15845.146.230.128.30.1286281771.squirrel@webmail.aims.ac.za> Message-ID: Tue, 05 Oct 2010 14:29:31 +0200, Proscovia Namayanja wrote: > I would like to be removed from the list please. How do I do it? > > Thanks http://scipy.org/Mailing_Lists http://mail.scipy.org/mailman/listinfo/scipy-user From proscovia at aims.ac.za Tue Oct 5 08:36:05 2010 From: proscovia at aims.ac.za (Proscovia Namayanja) Date: Tue, 5 Oct 2010 14:36:05 +0200 (SAST) Subject: [SciPy-User] unsubscribe Message-ID: <49166.146.230.128.30.1286282165.squirrel@webmail.aims.ac.za> Sorry about the previous message. I can do it. Thanks From ndbecker2 at gmail.com Tue Oct 5 08:40:14 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 05 Oct 2010 08:40:14 -0400 Subject: [SciPy-User] [ot] matlab in supercomputing Message-ID: This subject came up recently at my work. The question is, does matlab have significant usage in the supercomputing community? I suspect not, probably most supercomputing code is still fortran. But does anyone here know? From Dharhas.Pothina at twdb.state.tx.us Tue Oct 5 09:48:45 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 05 Oct 2010 08:48:45 -0500 Subject: [SciPy-User] [ot] matlab in supercomputing In-Reply-To: References: Message-ID: <4CAAE66C.63BA.009B.1@twdb.state.tx.us> I know from conversations with colleagues and also email newsletters etc. that they are heavily pushing the automatic parallelization features in the most recent versions of matlab. From what I could tell, it seemed to be focused on taking advantage of today's multi-core/multi-cpu workstations. Haven't heard of anyone using it on larger HPC clusters though. - dharhas >>> Neal Becker 10/5/2010 7:40 AM >>> This subject came up recently at my work. The question is, does matlab have significant usage in the supercomputing community? I suspect not, probably most supercomputing code is still fortran. But does anyone here know?
_______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Tue Oct 5 09:55:53 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 05 Oct 2010 08:55:53 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CAAE819.63BA.009B.1@twdb.state.tx.us> @ Mathew > I suspect it must be being written from outside matlab. You are right, this file is being written by a instrument software (Sontek River Surveyor Live). Unfortunately, the new version of this instrument and software no longer allows export of data in any format but .mat. > Is there any way you can load these into matlab and fix them up before > loading into scipy? I'm trying to get rid of the matlab dependency since we only have one copy available and I'm trying to write some automated processing scripts. > Matlab can handle that, although I don't know how you can make a > structure that has two fields with the same name - as far as I know > there's no way to do that with matlab syntax, e.g: I opened the files in matlab (v R14) and I can see the 4 duplicate StationQ arrays but I don't seem to be able access them separately. The Array Editor view and the tab complete shows the 4 Station_Q but I can only access a Summary.StationQ and I don't know which one of the 4 it is accessing. The documentation for software that describes the matlab variables/structures generated by the matlab export command does not list Summary.StationQ at all. It isn't one of the primary variables we need and I'm not sure what it contains. I am assuming the four copies is a bug in their software, I'm going to call them and submit a bug report, not sure how much traction it will get though. The other option is that it represents data from the 4 sonar beams but in all the other structures that have data from the 4 beams the data is represented by a 2D array. @Pauli & Mathew > Renaming to "originalname_counter" sounds like an OK solution. Skipping > duplicates + emitting a warning might also be OK... Since it looks like matlab cannot handle duplicate field names (at least w/ R14 I haven't tried more recent versions) in terms of accessing the variable once loaded in, and also since it is impossible to make a structure with duplicate field names within matlab, I think this is an error in the way the data file is being written. My vote would be for skipping the duplicates and issuing a warning. That would allow for a more robust way loading the file into scipy directly and also avoid matlab completely. Thanks for all your help on this. - dharhas From ben.root at ou.edu Tue Oct 5 10:13:19 2010 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 5 Oct 2010 09:13:19 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CAAE819.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> Message-ID: On Tue, Oct 5, 2010 at 8:55 AM, Dharhas Pothina < Dharhas.Pothina at twdb.state.tx.us> wrote: > > @ Mathew > > I suspect it must be being written from outside matlab. > > You are right, this file is being written by a instrument software (Sontek > River Surveyor Live). 
Unfortunately, the new version of this instrument and > software no longer allows export of data in any format but .mat. > > > Is there any way you can load these into matlab and fix them up before > > loading into scipy? > > I'm trying to get rid of the matlab dependency since we only have one copy > available and I'm trying to write some automated processing scripts. > > > Matlab can handle that, although I don't know how you can make a > > structure that has two fields with the same name - as far as I know > > there's no way to do that with matlab syntax, e.g: > > I opened the files in matlab (v R14) and I can see the 4 duplicate StationQ > arrays but I don't seem to be able access them separately. The Array Editor > view and the tab complete shows the 4 Station_Q but I can only access a > Summary.StationQ and I don't know which one of the 4 it is accessing. The > documentation for software that describes the matlab variables/structures > generated by the matlab export command does not list Summary.StationQ at > all. It isn't one of the primary variables we need and I'm not sure what it > contains. I am assuming the four copies is a bug in their software, I'm > going to call them and submit a bug report, not sure how much traction it > will get though. The other option is that it represents data from the 4 > sonar beams but in all the other structures that have data from the 4 beams > the data is represented by a 2D array. > > @Pauli & Mathew > > Renaming to "originalname_counter" sounds like an OK solution. Skipping > > duplicates + emitting a warning might also be OK... > > Since it looks like matlab cannot handle duplicate field names (at least w/ > R14 I haven't tried more recent versions) in terms of accessing the variable > once loaded in, and also since it is impossible to make a structure with > duplicate field names within matlab, I think this is an error in the way the > data file is being written. My vote would be for skipping the duplicates and > issuing a warning. That would allow for a more robust way loading the file > into scipy directly and also avoid matlab completely. > > Thanks for all your help on this. > > - dharhas > > > I did a little bit of testing on this issue since I was curious about how this is even possible. It appears that the Station_Q that you are accessing is the last one. If I modify the contents of x.Summary.Station_Q, and then display what is in x.Summary, the last Station_Q is changed. It is *possible* to get access to the other 3 Station_Q variables if you use the "struct2cell" function, which gets you back everything in a cell array I wonder if Matlab is loading the data into a cell array first and then -- somehow --- converting that into a structure. BTW, this was checked using Matlab R2008b. Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Tue Oct 5 10:25:48 2010 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 5 Oct 2010 09:25:48 -0500 Subject: [SciPy-User] Lines through subplots? In-Reply-To: References: Message-ID: On Tue, Oct 5, 2010 at 6:32 AM, Tomi Pievil?inen wrote: > Hi all, > > is it possible to draw a line / area through several vertically aligned > subplots? I'd like to mark an event on one of the plots and make it easy > to see what happens on the other subplots at the same point (the subplots > share the x axis). > > Tomi, Maybe a ConnectorPatch? 
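
Besides the ConnectorPatch link just below, a minimal alternative sketch for subplots that share the x axis is simply to repeat the marker in every subplot; the data, event position, and shaded interval here are arbitrary:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212, sharex=ax1)
ax1.plot(t, np.sin(t))
ax2.plot(t, np.cos(t))
for ax in (ax1, ax2):
    ax.axvline(4.2, color='r')        # event marker at the same x in each subplot
    ax.axvspan(6.0, 7.0, alpha=0.3)   # marked area
plt.show()
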
http://matplotlib.sourceforge.net/users/annotations_guide.html?highlight=annotation#using-connectorpatch Note, this is more suited for the matplotlib-users mailing list and I would try there if the ConnectorPatch is not what you are looking for. http://sourceforge.net/mail/?group_id=80706 Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Tue Oct 5 11:02:44 2010 From: robince at gmail.com (Robin) Date: Tue, 5 Oct 2010 16:02:44 +0100 Subject: [SciPy-User] build scipy on os x 10.6, python.org 2.7 Message-ID: Hi, I am trying to build scipy (current trunk) on OS X 10.6 with python.org 2.7 (32 bit). There seems to be a problem that it is trying to build -arch ppc64 (log below). Is there a way to disable this architecture (and even x86_64) so the build will complete? gcc-4.0: build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.c gcc-4.0: scipy/fftpack/src/zfftnd.c gcc-4.0: scipy/fftpack/src/zrfft.c gcc-4.0: scipy/fftpack/src/drfft.c gcc-4.0: build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.c gcc-4.0: scipy/fftpack/src/zfft.c gcc-4.0: build/src.macosx-10.3-fat-2.7/fortranobject.c /usr/local/bin/gfortran -Wall -arch ppc -arch i686 -arch x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.7 -ldfftpack -lfftpack -lgfortran -o build/lib.macosx-10.3-fat-2.7/scipy/fftpack/_fftpack.so ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o, missing required architecture x86_64 in file ld: warning: in /usr/lib/bundle1.o, missing required architecture ppc64 in file ld: warning: object file compiled with -mlong-branch which is no longer needed. 
To remove this warning, recompile without -mlong-branch: /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/ppc64/crt3.o ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o, missing required architecture ppc64 in file ld: warning: in /usr/lib/libSystemStubs.a, missing required architecture ppc64 in file ld: warning: in /usr/lib/libSystem.dylib, missing required architecture ppc64 in file ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 in file for architecture ppc64 collect2: ld returned 1 exit status ld: warning: object file compiled with -mlong-branch which is no longer needed. To remove this warning, recompile without -mlong-branch: /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/crt3.o lipo: can't open input file: /var/tmp//cccSSCfO.out (No such file or directory) ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o, missing required architecture x86_64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o, missing required architecture x86_64 in file ld: warning: in /usr/lib/bundle1.o, missing required architecture ppc64 in file ld: warning: object file compiled with -mlong-branch which is no longer needed. 
To remove this warning, recompile without -mlong-branch: /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/ppc64/crt3.o ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o, missing required architecture ppc64 in file ld: warning: in build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o, missing required architecture ppc64 in file ld: warning: in /usr/lib/libSystemStubs.a, missing required architecture ppc64 in file ld: warning: in /usr/lib/libSystem.dylib, missing required architecture ppc64 in file ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 in file for architecture ppc64 collect2: ld returned 1 exit status ld: warning: object file compiled with -mlong-branch which is no longer needed. To remove this warning, recompile without -mlong-branch: /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/crt3.o lipo: can't open input file: /var/tmp//cccSSCfO.out (No such file or directory) error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -arch x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.7/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/scipy/fftpack/src/dct.o build/temp.macosx-10.3-fat-2.7/build/src.macosx-10.3-fat-2.7/fortranobject.o -Lbuild/temp.macosx-10.3-fat-2.7 -ldfftpack -lfftpack -lgfortran -o build/lib.macosx-10.3-fat-2.7/scipy/fftpack/_fftpack.so" failed with exit status 1 Cheers Robin From johnl at cs.wisc.edu Tue Oct 5 11:05:53 2010 From: johnl at cs.wisc.edu (J. David Lee) Date: Tue, 05 Oct 2010 10:05:53 -0500 Subject: [SciPy-User] matplotlib with UI In-Reply-To: <4CAA47EC.2040708@noaa.gov> References: <1286079791.15794.29.camel@johnl-laptop> <4CAA47EC.2040708@noaa.gov> Message-ID: <1286291153.16695.0.camel@johnl-laptop> Chris, This looks interesting. I'll look into it when I have a chance. Thanks for the tip, David On Mon, 2010-10-04 at 14:32 -0700, Christopher Barker wrote: > J. David Lee wrote: > > Integrating a matplotlib window into a wxPython app is pretty easy. I > > import the following: > > > > from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg > > from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg > > from matplotlib.figure import Figure > > I like wxMPL: > > http://agni.phys.iit.edu/~kmcivor/wxmpl/ > > It's a wx wrapper around MPL that gives you nifty interactive features > ready to go. 
> > -Chris > > > > > Create the controls: > > > > figure = Figure(figsize=(0.1,0.1)) > > canvas = FigureCanvasWxAgg(wxParent, -1, figure) > > toolbar = NavigationToolbar2WxAgg(canvas) > > > > Then you add the canvas and toolbar to a layout. > > > > Now you can plot as usual: > > > > figure.clf() > > ax = figure.gca() > > ax.plot(x, y) > > canvas.draw() > > > > David > > > > > > On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: > >> On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba wrote: > >>> I want to use matplotlib to visualiize data sets but > >>> I also want to create a UI that lets me manipulate > >>> properties of that data and see the result. > >>> > >>> Does matplotlib support creating UI interfaces or > >>> do I need to use some other package? > >>> > >>> > >>> > >>> Thanks, > >>> Anthony > >>> > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >> Hey Anthony, > >> > >> I'm no expert on plotting in python, but you may find Chaco > >> useful--it's Enthought's plotting library, and is intended for use in > >> GUIs. > >> > >> http://code.enthought.com/chaco/ > >> > >> --Josh > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > From apalomba at austin.rr.com Tue Oct 5 11:44:09 2010 From: apalomba at austin.rr.com (Anthony Palomba) Date: Tue, 5 Oct 2010 10:44:09 -0500 Subject: [SciPy-User] matplotlib with UI In-Reply-To: <1286079791.15794.29.camel@johnl-laptop> References: <1286079791.15794.29.camel@johnl-laptop> Message-ID: Hey David, I have been looking at wxPython, I think it is what I am looking for. Would you happen to have an example wxPython python file that uses matplotlib that I can use as an example to get me started. Many thanks, Anthony On Sat, Oct 2, 2010 at 11:23 PM, J. David Lee wrote: > Integrating a matplotlib window into a wxPython app is pretty easy. I > import the following: > > from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg > from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg > from matplotlib.figure import Figure > > Create the controls: > > figure = Figure(figsize=(0.1,0.1)) > canvas = FigureCanvasWxAgg(wxParent, -1, figure) > toolbar = NavigationToolbar2WxAgg(canvas) > > Then you add the canvas and toolbar to a layout. > > Now you can plot as usual: > > figure.clf() > ax = figure.gca() > ax.plot(x, y) > canvas.draw() > > David > > > On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: > > On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba > wrote: > > > I want to use matplotlib to visualiize data sets but > > > I also want to create a UI that lets me manipulate > > > properties of that data and see the result. > > > > > > Does matplotlib support creating UI interfaces or > > > do I need to use some other package? 
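
Stitching David's fragments into one runnable file gives a minimal self-contained sketch like the following; the frame class, window title, and sizer layout are arbitrary choices, not taken from David's code:

import wx
import numpy as np
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg
from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg
from matplotlib.figure import Figure

class PlotFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, 'matplotlib in wxPython')
        self.figure = Figure(figsize=(0.1, 0.1))
        self.canvas = FigureCanvasWxAgg(self, -1, self.figure)
        self.toolbar = NavigationToolbar2WxAgg(self.canvas)
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.canvas, 1, wx.EXPAND)
        sizer.Add(self.toolbar, 0, wx.EXPAND)
        self.SetSizer(sizer)
        self.replot()

    def replot(self):
        # plot as usual, then push the result to the canvas
        self.figure.clf()
        ax = self.figure.gca()
        x = np.linspace(0, 2 * np.pi, 100)
        ax.plot(x, np.sin(x))
        self.canvas.draw()

app = wx.PySimpleApp()
PlotFrame().Show()
app.MainLoop()
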
> > > > > > > > > > > > Thanks, > > > Anthony > > > > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > Hey Anthony, > > > > I'm no expert on plotting in python, but you may find Chaco > > useful--it's Enthought's plotting library, and is intended for use in > > GUIs. > > > > http://code.enthought.com/chaco/ > > > > --Josh > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apalomba at austin.rr.com Tue Oct 5 12:00:22 2010 From: apalomba at austin.rr.com (Anthony Palomba) Date: Tue, 5 Oct 2010 11:00:22 -0500 Subject: [SciPy-User] matplotlib with UI In-Reply-To: <4CAA47EC.2040708@noaa.gov> References: <1286079791.15794.29.camel@johnl-laptop> <4CAA47EC.2040708@noaa.gov> Message-ID: Wow this is even better! I don't think it can any easier than wxMPL. I will study the examples that come with with it. Thanks for posting this Chris! -ap On Mon, Oct 4, 2010 at 4:32 PM, Christopher Barker wrote: > J. David Lee wrote: > > Integrating a matplotlib window into a wxPython app is pretty easy. I > > import the following: > > > > from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg > > from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg > > from matplotlib.figure import Figure > > I like wxMPL: > > http://agni.phys.iit.edu/~kmcivor/wxmpl/ > > It's a wx wrapper around MPL that gives you nifty interactive features > ready to go. > > -Chris > > > > > Create the controls: > > > > figure = Figure(figsize=(0.1,0.1)) > > canvas = FigureCanvasWxAgg(wxParent, -1, figure) > > toolbar = NavigationToolbar2WxAgg(canvas) > > > > Then you add the canvas and toolbar to a layout. > > > > Now you can plot as usual: > > > > figure.clf() > > ax = figure.gca() > > ax.plot(x, y) > > canvas.draw() > > > > David > > > > > > On Sat, 2010-10-02 at 19:52 -0800, Joshua Holbrook wrote: > >> On Sat, Oct 2, 2010 at 1:34 PM, Anthony Palomba > wrote: > >>> I want to use matplotlib to visualiize data sets but > >>> I also want to create a UI that lets me manipulate > >>> properties of that data and see the result. > >>> > >>> Does matplotlib support creating UI interfaces or > >>> do I need to use some other package? > >>> > >>> > >>> > >>> Thanks, > >>> Anthony > >>> > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >> Hey Anthony, > >> > >> I'm no expert on plotting in python, but you may find Chaco > >> useful--it's Enthought's plotting library, and is intended for use in > >> GUIs. > >> > >> http://code.enthought.com/chaco/ > >> > >> --Josh > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > -- > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Tue Oct 5 13:49:30 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Oct 2010 10:49:30 -0700 Subject: [SciPy-User] build scipy on os x 10.6, python.org 2.7 In-Reply-To: References: Message-ID: Hi Robin, > I am trying to build scipy (current trunk) on OS X 10.6 with > python.org 2.7 (32 bit). > > There seems to be a problem that it is trying to build -arch ppc64 (log below). > > Is there a way to disable this architecture (and even x86_64) so the > build will complete? I think I might have posted on this yesterday. If I understand correctly the problem is that the calculated gfortran linking flags get all the archs that the compiler / machine are capable of, which will likely be a superset of those that python and the numpy C code were built with. I had to set the FFLAGS and LDFLAGS to constrain the archs. In my case that was: export FFLAGS=-arch ppc -arch i386 export LDFLAGS=-arch ppc -arch i386 -Wall -undefined dynamic_lookup -bundle but I guess you would need to review the scipy default link flags output to get the right match for your system. See you, Matthew From matthew.brett at gmail.com Tue Oct 5 14:01:48 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Oct 2010 11:01:48 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CAAE819.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > I'm trying to get rid of the matlab dependency since we only have one copy available and I'm trying to write some automated processing scripts. That's easy to imagine... > @Pauli & Mathew >> Renaming to "originalname_counter" sounds like an OK solution. Skipping >> duplicates + emitting a warning might also be OK... > > Since it looks like matlab cannot handle duplicate field names (at least w/ R14 I haven't tried more recent versions) in terms of accessing the variable once loaded in, and also since it is impossible to make a structure with duplicate field names within matlab, I think this is an error in the way the data file is being written. Yes, I'm sure it's an error, but I suppose we should set ourselves the goal of being able to read what matlab can read. I don't like the 'originalname_counter' option because that might conceivably still clash with the intended fieldnames (imagine the original fieldnames of ['field1_1', 'field1', 'field1'] for example). I see that matlab intends names prepended with an underscore to be invalid: >> a = []; >> a._field1 = 1 ??? a._field1 = 1 | Error: The input character is not valid in MATLAB statements or expressions. So one option would be ['field1_1', 'field1', 'field1'] -> ['field1_1', 'field1', '_0_field1'] . It would be a problem when saving back to matlab, but that seems like a sensible problem to have. 
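
A minimal sketch of that renaming scheme, as a hypothetical helper rather than anything in scipy.io today:

def rename_duplicate_fields(names):
    # later duplicates get a '_<counter>_' prefix; the leading underscore
    # means the generated name can never collide with a valid matlab field
    seen = {}
    renamed = []
    for name in names:
        count = seen.get(name, 0)
        seen[name] = count + 1
        renamed.append(name if count == 0 else '_%d_%s' % (count - 1, name))
    return renamed

print rename_duplicate_fields(['field1_1', 'field1', 'field1'])
# -> ['field1_1', 'field1', '_0_field1']
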
Best, Matthew From matthew.brett at gmail.com Tue Oct 5 14:25:20 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Oct 2010 11:25:20 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > I did a little bit of testing on this issue since I was curious about how > this is even possible. > > It appears that the Station_Q that you are accessing is the last one.? If I > modify the contents of x.Summary.Station_Q, and then display what is in > x.Summary, the last Station_Q is changed.? It is *possible* to get access to > the other 3 Station_Q variables if you use the "struct2cell" function, which > gets you back everything in a cell array Thanks - nice forensics ;) . It makes a difference, because it means you can get the data out, and that we should probably allow that too, See you, Matthew From stef.mientki at gmail.com Tue Oct 5 14:49:18 2010 From: stef.mientki at gmail.com (Stef Mientki) Date: Tue, 05 Oct 2010 20:49:18 +0200 Subject: [SciPy-User] matplotlib with UI In-Reply-To: References: <1286079791.15794.29.camel@johnl-laptop> Message-ID: <4CAB732E.3030404@gmail.com> On 05-10-2010 17:44, Anthony Palomba wrote: > Hey David, > > > I have been looking at wxPython, I think it is what I am > looking for. python-xy might be a better choice cheers, Stef From ahig321 at gmail.com Sun Oct 3 19:26:25 2010 From: ahig321 at gmail.com (Adam Higuera) Date: Sun, 3 Oct 2010 19:26:25 -0400 Subject: [SciPy-User] Problems with odeint Message-ID: I am on Mac OS X 10.5, and I am using the copy of SciPy that comes bundled with sage. I am having what seems like a pretty basic problem. Entering the following code: def b(y, t): [0.0, 0.0] y_0 = [1.0, 0.0] t = linspace(0, 10, 100) print odeint(b, y_0, t) results in a solution vector that looks like [[ 1.00000000e+000 0.00000000e+000] [ NaN 2.14766595e-315] [ NaN 4.29533808e-315] [ NaN 6.44301043e-315] [ NaN 8.59068278e-315] ] Can other people runt this code without problems? -Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Tue Oct 5 15:10:28 2010 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 05 Oct 2010 16:10:28 -0300 Subject: [SciPy-User] Problems with odeint In-Reply-To: References: Message-ID: <1286305828.4671.5.camel@florian-desktop> El dim., 03-10-2010 a las 19:26 -0400, Adam Higuera escribi?: > I am on Mac OS X 10.5, and I am using the copy of SciPy that comes > bundled with sage. > > > I am having what seems like a pretty basic problem. Entering the > following code: > > > def b(y, t): > return [0.0, 0.0] you missed the return -- Fabrice Silva LMA CNRS UPR 7051 From matthew.brett at gmail.com Tue Oct 5 21:13:13 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Oct 2010 18:13:13 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > Yes, I'm sure it's an error, but I suppose we should set ourselves the > goal of being able to read what matlab can read. You will be pleased to know that, on further investigation, there are also two 'System' variables in the file. You only see the second when you read it into Matlab. 
The first has fields: ('SerialNumber', 'FirmwareVersion', 'FirmwareRevision', 'Frequency', 'InstrumentMatrix_0', 'InstrumentMatrix_1', 'InstrumentMatrix_2') It's a fun file ;) Matthew From oldcanine at yahoo.com Wed Oct 6 01:05:03 2010 From: oldcanine at yahoo.com (Barry Olddog) Date: Tue, 5 Oct 2010 22:05:03 -0700 (PDT) Subject: [SciPy-User] error compiling csr_wrap in sparsetools Message-ID: <637283.52884.qm@web59602.mail.ac4.yahoo.com> Hi, I'm trying to build scipy 0.8 in python 2.7 on a centos 5.5 machine, but failing when I get to csr_wrap. Searching shows this error has been around for a while, but I never found a solution. Specifically, I it's an error and two warnings: scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int require_size(PyArrayObject*, npy_intp*, int)': scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before 'PRIdPTR' scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing '%' in format scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format (It's repeated a few lines down in the source) I'm using the default gcc 4.1.2; numpy 1.5 built and imports fine. The sources all came from scipy.org; but blas and lapack are the centos/redhat packages. This is a 32-bit machine; I have a similar setup on a 64-bit machine, with scipy 0.7.2, numpy 1.4.1 and python 2.6, and that didn't cause much headache. I'll try dropping down to those versions tomorrow, but I'm not sure they'll work, and I'd like to use the later versions. Thanks for any help. Barry From wesmckinn at gmail.com Wed Oct 6 01:28:34 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 6 Oct 2010 01:28:34 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 2:55 PM, Pauli Virtanen wrote: > Mon, 04 Oct 2010 13:47:08 -0400, josef.pktd wrote: >> On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney >> wrote: > [clip] >>> Got it. Is SciPy development still taking place in SVN? I should get >>> commit rights and start chipping away on these misc issues. >> >> Either svn, or if you are comfortable with git, through github like many >> of the developers work now. >> But I don't know any details about that route. > > I'd recommend going for git already. Some instructions how to get set up > using Scipy's SVN mirror can be found here: > > http://projects.scipy.org/numpy/wiki/GitMirror > > > ? ? ? ?Pauli > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Thanks-- I assume the next step is to get SVN commit rights? - Wes From pav at iki.fi Wed Oct 6 05:02:30 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 6 Oct 2010 09:02:30 +0000 (UTC) Subject: [SciPy-User] error compiling csr_wrap in sparsetools References: <637283.52884.qm@web59602.mail.ac4.yahoo.com> Message-ID: Tue, 05 Oct 2010 22:05:03 -0700, Barry Olddog wrote: > I'm trying to build scipy 0.8 in python 2.7 on a centos 5.5 machine, but > failing when I get to csr_wrap. Searching shows this error has been > around for a while, but I never found a solution. 
> > Specifically, I it's an error and two warnings: > > scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int > require_size(PyArrayObject*, npy_intp*, int)': > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before > 'PRIdPTR' scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious > trailing '%' in format > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments > for format [clip] That should be fixed by r6645 in SVN. From ndbecker2 at gmail.com Wed Oct 6 07:09:16 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 06 Oct 2010 07:09:16 -0400 Subject: [SciPy-User] ide for numpy? Message-ID: My research uses numpy, scipy, and pyublas for interface to c++ code. Now the research phase is ending, and a real project is beginning. That means delivering code to other engineers (in other groups). Convincing them to accept code in this form is going to be an uphill battle. I'm sure they'd accept matlab code, only because many (recent) engineers have some familiarity with it. I do all my development with emacs, but that's not to everyone's liking. I think an attraction of matlab is a decent development environment, things like: click on the name of a vector in a gui to plot it. Is there anything I should look at that might offer an IDE for numpy? From harald.schilly at gmail.com Wed Oct 6 07:18:59 2010 From: harald.schilly at gmail.com (Harald Schilly) Date: Wed, 6 Oct 2010 13:18:59 +0200 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 13:09, Neal Becker wrote: > Is there anything I should look at that might offer an IDE for numpy? There are various python IDEs that might fit for your purpose. One that comes to my mind is Spyder, but there are at least two other ones I can't remember yet - I'm sure others will post them ;) http://code.google.com/p/spyderlib/ H From justinbois at gmail.com Wed Oct 6 07:25:07 2010 From: justinbois at gmail.com (Justin Bois) Date: Wed, 6 Oct 2010 13:25:07 +0200 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 11 In-Reply-To: References: Message-ID: Your function b(y, t) must return an array-like object. This works: def b(y, t): return [0.0, 0.0] y_0 = [1.0, 0.0] t = linspace(0, 10, 100) print odeint(b, y_0, t) Date: Sun, 3 Oct 2010 19:26:25 -0400 > From: Adam Higuera > Subject: [SciPy-User] Problems with odeint > To: scipy-user at scipy.org > Message-ID: > > > > Content-Type: text/plain; charset="iso-8859-1" > > I am on Mac OS X 10.5, and I am using the copy of SciPy that comes bundled > with sage. > > I am having what seems like a pretty basic problem. Entering the following > code: > > def b(y, t): > [0.0, 0.0] > > y_0 = [1.0, 0.0] > > t = linspace(0, 10, 100) > > print odeint(b, y_0, t) > > results in a solution vector that looks like > > [[ 1.00000000e+000 0.00000000e+000] > [ NaN 2.14766595e-315] > [ NaN 4.29533808e-315] > [ NaN 6.44301043e-315] > [ NaN 8.59068278e-315] > ] > > Can other people runt this code without problems? > > -Adam > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20101003/5c5affcd/attachment-0001.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinbois at gmail.com Wed Oct 6 07:32:25 2010 From: justinbois at gmail.com (Justin Bois) Date: Wed, 6 Oct 2010 13:32:25 +0200 Subject: [SciPy-User] Problems with odeint Message-ID: Your function b(y, t) must return an array-like object. 
This works: def b(y, t): return [0.0, 0.0] y_0 = [1.0, 0.0] t = linspace(0, 10, 100) print odeint(b, y_0, t) (Sorry, forgot to edit subject line on previous post.....) > Date: Sun, 3 Oct 2010 19:26:25 -0400 >> From: Adam Higuera >> Subject: [SciPy-User] Problems with odeint >> To: scipy-user at scipy.org >> Message-ID: >> >> > >> Content-Type: text/plain; charset="iso-8859-1" >> >> I am on Mac OS X 10.5, and I am using the copy of SciPy that comes bundled >> with sage. >> >> I am having what seems like a pretty basic problem. Entering the >> following >> code: >> >> def b(y, t): >> [0.0, 0.0] >> >> y_0 = [1.0, 0.0] >> >> t = linspace(0, 10, 100) >> >> print odeint(b, y_0, t) >> >> results in a solution vector that looks like >> >> [[ 1.00000000e+000 0.00000000e+000] >> [ NaN 2.14766595e-315] >> [ NaN 4.29533808e-315] >> [ NaN 6.44301043e-315] >> [ NaN 8.59068278e-315] >> ] >> >> Can other people runt this code without problems? >> >> -Adam >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> http://mail.scipy.org/pipermail/scipy-user/attachments/20101003/5c5affcd/attachment-0001.html >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlz at lmn.pub.ro Wed Oct 6 07:58:37 2010 From: alexlz at lmn.pub.ro (Ioan-Alexandru Lazar) Date: Wed, 6 Oct 2010 14:58:37 +0300 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: Have a look at Spyder. It's Matlab-like and comes with a number of goodies like Pylint integration. Alex From alan.isaac at gmail.com Wed Oct 6 08:36:35 2010 From: alan.isaac at gmail.com (Alan G Isaac) Date: Wed, 06 Oct 2010 08:36:35 -0400 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: <4CAC6D53.3080003@gmail.com> Eclipse with the pydev and subversive plugins? Alan Isaac From ndbecker2 at gmail.com Wed Oct 6 08:42:25 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 06 Oct 2010 08:42:25 -0400 Subject: [SciPy-User] ide for numpy? References: <4CAC6D53.3080003@gmail.com> Message-ID: Alan G Isaac wrote: > Eclipse with the pydev and subversive plugins? > > Alan Isaac Does that give interactive plotting of numpy vectors, for example? From Dharhas.Pothina at twdb.state.tx.us Wed Oct 6 08:50:52 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 06 Oct 2010 07:50:52 -0500 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> >>> "Ioan-Alexandru Lazar" 10/6/2010 6:58 AM >>> > Have a look at Spyder. It's Matlab-like and comes with a number of goodies > like Pylint integration. > > Alex Spyder is definitely worth looking at. I prefer using emacs and ipython for most of what I do but my less technical colleagues have found spyder easier to use. The interface is more matlab like and also it doesn't feel like they are having to muck around in the command line for stuff like changing directories and running scripts etc. - dharhas From robince at gmail.com Wed Oct 6 09:10:34 2010 From: robince at gmail.com (Robin) Date: Wed, 6 Oct 2010 14:10:34 +0100 Subject: [SciPy-User] build scipy on os x 10.6, python.org 2.7 In-Reply-To: References: Message-ID: On Tue, Oct 5, 2010 at 6:49 PM, Matthew Brett wrote: > Hi Robin, > >> I am trying to build scipy (current trunk) on OS X 10.6 with >> python.org 2.7 (32 bit). >> >> There seems to be a problem that it is trying to build -arch ppc64 (log below). 
>> >> Is there a way to disable this architecture (and even x86_64) so the >> build will complete? > > I think I might have posted on this yesterday. ?If I understand > correctly the problem is that the calculated gfortran linking flags > get all the archs that the compiler / machine are capable of, which > will likely be a superset of those that python and the numpy C code > were built with. ?I had to set the FFLAGS and LDFLAGS to constrain the > archs. ?In my case that was: > > export FFLAGS=-arch ppc -arch i386 > export LDFLAGS=-arch ppc -arch i386 -Wall -undefined dynamic_lookup -bundle > > but I guess you would need to review the scipy default link flags > output to get the right match for your system. Thanks very much - that solved it (I wasn't sure about how to set compile flags with distutils). Sorry I missed your post on scipy-dev. Cheers Robin From josef.pktd at gmail.com Wed Oct 6 09:18:10 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Oct 2010 09:18:10 -0400 Subject: [SciPy-User] ide for numpy? In-Reply-To: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> References: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> Message-ID: On Wed, Oct 6, 2010 at 8:50 AM, Dharhas Pothina wrote: > > >>>> "Ioan-Alexandru Lazar" 10/6/2010 6:58 AM >>> >> Have a look at Spyder. It's Matlab-like and comes with a number of goodies >> like Pylint integration. >> >> Alex > > Spyder is definitely worth looking at. I prefer using emacs and ipython for most of what I do but my less technical colleagues have found spyder easier to use. The interface is more matlab like and also it doesn't feel like they are having to muck around in the command line for stuff like changing directories and running scripts etc. I have been using Spyder for more than a year as my main interactive shell. The new 2.0 is currently in beta, and has a few problems left, but looks very good, especially coming from a matlab ide experience. Now, it also has a ipython shell integrated (with matlab like name space pollution, "from mlab import *", by default). I wouldn't recommend eclipse for non-software-developers. I like it to get around in a package, but I found the views confusing when I started with it and, the pydev version that I use doesn't come with an interactive shell. Otherwise my favorite is IDLE, plain and simple and (almost) no magic. Josef > > - dharhas > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ndbecker2 at gmail.com Wed Oct 6 09:24:24 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 06 Oct 2010 09:24:24 -0400 Subject: [SciPy-User] ide for numpy? References: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> Message-ID: josef.pktd at gmail.com wrote: > On Wed, Oct 6, 2010 at 8:50 AM, Dharhas Pothina > wrote: >> >> >>>>> "Ioan-Alexandru Lazar" 10/6/2010 6:58 AM >>> >>> Have a look at Spyder. It's Matlab-like and comes with a number of >>> goodies like Pylint integration. >>> >>> Alex >> >> Spyder is definitely worth looking at. I prefer using emacs and ipython >> for most of what I do but my less technical colleagues have found spyder >> easier to use. The interface is more matlab like and also it doesn't feel >> like they are having to muck around in the command line for stuff like >> changing directories and running scripts etc. > > I have been using Spyder for more than a year as my main interactive > shell. 
The new 2.0 is currently in beta, and has a few problems left, > but looks very good, especially coming from a matlab ide experience. > Now, it also has a ipython shell integrated (with matlab like name > space pollution, "from mlab import *", by default). > > I wouldn't recommend eclipse for non-software-developers. I like it to > get around in a package, but I found the views confusing when I > started with it and, the pydev version that I use doesn't come with an > interactive shell. > > Otherwise my favorite is IDLE, plain and simple and (almost) no magic. > > Josef > > I'm trying spyder 2.0beta4. Looks interesting, but I'm not quite sure how to use it. I was expecting to load a program, then be able to set breakpoints, and examine variables. I see I can use the interactive console to manually enter: u = np.arange (10) and then see it in variable window and right click to plot. I can set a breakpoint in my source code (displayed on the left in editor), but only does something if I run using debug. But running from debug puts me in pdb. Not sure if trying to do everything under pdb is newb friendly. Am I missing something? From oldcanine at yahoo.com Wed Oct 6 09:30:05 2010 From: oldcanine at yahoo.com (Barry Olddog) Date: Wed, 6 Oct 2010 06:30:05 -0700 (PDT) Subject: [SciPy-User] error compiling csr_wrap in sparsetools In-Reply-To: References: <637283.52884.qm@web59602.mail.ac4.yahoo.com> Message-ID: <239893.37115.qm@web59614.mail.ac4.yahoo.com> ----- Original Message ---- > From: Pauli Virtanen > To: scipy-user at scipy.org > Sent: Wed, October 6, 2010 5:02:30 AM > Subject: Re: [SciPy-User] error compiling csr_wrap in sparsetools > > Tue, 05 Oct 2010 22:05:03 -0700, Barry Olddog wrote: > > I'm trying to build scipy 0.8 in python 2.7 on a centos 5.5 machine, but > > failing when I get to csr_wrap. Searching shows this error has been > > around for a while, but I never found a solution. > > > > Specifically, I it's an error and two warnings: > > > > scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int > > require_size(PyArrayObject*, npy_intp*, int)': > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before > > 'PRIdPTR' scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious > > trailing '%' in format > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments > > for format > [clip] > > That should be fixed by r6645 in SVN. > > Thanks! That did it. From josef.pktd at gmail.com Wed Oct 6 09:30:44 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Oct 2010 09:30:44 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Tue, Oct 5, 2010 at 7:52 AM, Neal Becker wrote: > I'm needing doubly noncentric F distribution. > > I found this: > http://www.ifpress.com/pdfs/comm22-897.pdf > > It might not be the best, but it's all I found. Looks interesting, I never heard of the doubly noncentric something distributions. Both your code and the paper are a bit stingy on explanations. >From a few examples that I tried, the precision looks pretty good. I found some papers that use saddle point approximation to get both pdf and cdf of doubly non-central f and t distributions, which might be faster for some use cases (e.g. evaluating the cdf or pdf at many points.) But for the statistics applications that I saw mentioned (needing only one or a few points of the cdf), your version looks good and could be dressed up into a scipy.stats.distributions case. 
Thanks, Josef > > Coded it up as attached: > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ndbecker2 at gmail.com Wed Oct 6 09:35:42 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 06 Oct 2010 09:35:42 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? References: Message-ID: josef.pktd at gmail.com wrote: > On Tue, Oct 5, 2010 at 7:52 AM, Neal Becker wrote: >> I'm needing doubly noncentric F distribution. >> >> I found this: >> http://www.ifpress.com/pdfs/comm22-897.pdf >> >> It might not be the best, but it's all I found. > > Looks interesting, I never heard of the doubly noncentric something > distributions. > > Both your code and the paper are a bit stingy on explanations. > >>From a few examples that I tried, the precision looks pretty good. I > found some papers that use saddle point approximation to get both pdf > and cdf of doubly non-central f and t distributions, which might be > faster for some use cases (e.g. evaluating the cdf or pdf at many > points.) > > But for the statistics applications that I saw mentioned (needing only > one or a few points of the cdf), your version looks good and could be > dressed up into a scipy.stats.distributions case. > > Thanks, > > Josef > > I also found Mudholkar, Govind S., Chaubey, Yogendra P. and Ching-Chuong, Lin(1976) Communications in Statistics - Theory and Methods, 5: 1, 49-63 Approximations for the doubly noncentral-F distribution I haven't tried coding it (and probably won't) From josef.pktd at gmail.com Wed Oct 6 09:36:07 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Oct 2010 09:36:07 -0400 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> Message-ID: On Wed, Oct 6, 2010 at 9:24 AM, Neal Becker wrote: > josef.pktd at gmail.com wrote: > >> On Wed, Oct 6, 2010 at 8:50 AM, Dharhas Pothina >> wrote: >>> >>> >>>>>> "Ioan-Alexandru Lazar" 10/6/2010 6:58 AM >>> >>>> Have a look at Spyder. It's Matlab-like and comes with a number of >>>> goodies like Pylint integration. >>>> >>>> Alex >>> >>> Spyder is definitely worth looking at. I prefer using emacs and ipython >>> for most of what I do but my less technical colleagues have found spyder >>> easier to use. The interface is more matlab like and also it doesn't feel >>> like they are having to muck around in the command line for stuff like >>> changing directories and running scripts etc. >> >> I have been using Spyder for more than a year as my main interactive >> shell. The new 2.0 is currently in beta, and has a few problems left, >> but looks very good, especially coming from a matlab ide experience. >> Now, it also has a ipython shell integrated (with matlab like name >> space pollution, "from mlab import *", by default). >> >> I wouldn't recommend eclipse for non-software-developers. I like it to >> get around in a package, but I found the views confusing when I >> started with it and, the pydev version that I use doesn't come with an >> interactive shell. >> >> Otherwise my favorite is IDLE, plain and simple and (almost) no magic. >> >> Josef >> >> > I'm trying spyder 2.0beta4. ?Looks interesting, but I'm not quite sure how > to use it. > > I was expecting to load a program, then be able to set breakpoints, and > examine variables. 
> > I see I can use the interactive console to manually enter: > u = np.arange (10) > > and then see it in variable window and right click to plot. > > I can set a breakpoint in my source code (displayed on the left in editor), > but only does something if I run using debug. ?But running from debug puts > me in pdb. ?Not sure if trying to do everything under pdb is newb friendly. > Am I missing something? No, debugging is still one of the weak points. And I still don't know how to use pdb. I haven't tried out how the pdb integration works. (I'm debugging with print statements, and stack inspection after an exception.) I played a bit with winpdb in the past, and it looks more beginner friendly. Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From Dharhas.Pothina at twdb.state.tx.us Wed Oct 6 09:36:40 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 06 Oct 2010 08:36:40 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CAC3518.63BA.009B.1@twdb.state.tx.us> fun fun fun. I can't see the second 'System' variable in Matlab R14. Also, from what I could tell the 4 duplicate Summary.StationQ arrays contain different data. The Summary.Units array that contains the units for each variable in summary only has a single Summary.Units.Station_Q value = m3/s. So I still think the duplicates are a mistake. I let you know what I find out from the manufacturer when I'm able to get a hold of them. In the meanwhile is there a work around to load the rest of the data file into scipy. The duplicate variables that are causing the problems are not ones I need for my analysis. Is it possible from a technical point of view to mimic the matlab load command that lets you specify which variables to load: Examples for pattern matching: load fname a* % Load variables starting with "a" load fname -regexp ^b\d{3}$ % Load variables starting with "b" and % followed by 3 digits load fname -regexp \d % Load variables containing any digits i.e. something along the lines of scipy.io.loadmat(fname, return_vars='BottomTrack, GPS, Station') Of course this would require knowing what the field names where. So to be completely independent from matlab we would need a helper function that just returned the names of variables contained in the .mat file. i.e. something like scipy.io.loadmat(fname, return_headers=True) where the result would be a list of variable names. - dharhas >>> Matthew Brett 10/5/2010 8:13 PM >>> Hi, > Yes, I'm sure it's an error, but I suppose we should set ourselves the > goal of being able to read what matlab can read. You will be pleased to know that, on further investigation, there are also two 'System' variables in the file. You only see the second when you read it into Matlab. 
The first has fields: ('SerialNumber', 'FirmwareVersion', 'FirmwareRevision', 'Frequency', 'InstrumentMatrix_0', 'InstrumentMatrix_1', 'InstrumentMatrix_2') It's a fun file ;) Matthew _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From charlesr.harris at gmail.com Wed Oct 6 13:05:18 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 6 Oct 2010 11:05:18 -0600 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Tue, Oct 5, 2010 at 11:28 PM, Wes McKinney wrote: > On Mon, Oct 4, 2010 at 2:55 PM, Pauli Virtanen wrote: > > Mon, 04 Oct 2010 13:47:08 -0400, josef.pktd wrote: > >> On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney > >> wrote: > > [clip] > >>> Got it. Is SciPy development still taking place in SVN? I should get > >>> commit rights and start chipping away on these misc issues. > >> > >> Either svn, or if you are comfortable with git, through github like many > >> of the developers work now. > >> But I don't know any details about that route. > > > > I'd recommend going for git already. Some instructions how to get set up > > using Scipy's SVN mirror can be found here: > > > > http://projects.scipy.org/numpy/wiki/GitMirror > > > > > > Pauli > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Thanks-- I assume the next step is to get SVN commit rights? > > If we go to git soon it should be easy to post patches for review and put in a pull request. But since that is still in the nebulous future I think svn commit rights would be appropriate. Josef? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.maibaum at gmail.com Wed Oct 6 13:23:38 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Wed, 6 Oct 2010 10:23:38 -0700 Subject: [SciPy-User] can anyone confirm this bug in scipy.io.savemat on the latest version of scipy? In-Reply-To: References: Message-ID: On Mon, Oct 4, 2010 at 10:08 PM, Matthew Brett wrote: >> The code snippet I have pasted below exhibits what I believe to be >> incorrect behavior on the version of scipy I happen to have installed >> ('0.8.0.dev6113'). Basically, scipy.io.savemat doesn't respect the >> dtype of the arrays it saves and everything gets saved as float64 for >> me. I ran into a similar problem with the mmread/mmwrite routines, see http://mail.scipy.org/pipermail/scipy-user/2010-June/025772.html It would be great if this could be fixed, and even better if it would work for sparse matrices as well. Lutz From matthew.brett at gmail.com Wed Oct 6 15:37:44 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Oct 2010 12:37:44 -0700 Subject: [SciPy-User] build scipy on os x 10.6, python.org 2.7 In-Reply-To: References: Message-ID: Hi, >> export FFLAGS=-arch ppc -arch i386 >> export LDFLAGS=-arch ppc -arch i386 -Wall -undefined dynamic_lookup -bundle >> >> but I guess you would need to review the scipy default link flags >> output to get the right match for your system. > > Thanks very much - that solved it (I wasn't sure about how to set > compile flags with distutils). Sorry I missed your post on scipy-dev. Glad it helped. Sorry, I forgot I posted previously to the other mailing list... 
Best, Matthew From josef.pktd at gmail.com Wed Oct 6 15:41:31 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Oct 2010 15:41:31 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 1:05 PM, Charles R Harris wrote: > > > On Tue, Oct 5, 2010 at 11:28 PM, Wes McKinney wrote: >> >> On Mon, Oct 4, 2010 at 2:55 PM, Pauli Virtanen wrote: >> > Mon, 04 Oct 2010 13:47:08 -0400, josef.pktd wrote: >> >> On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney >> >> wrote: >> > [clip] >> >>> Got it. Is SciPy development still taking place in SVN? I should get >> >>> commit rights and start chipping away on these misc issues. >> >> >> >> Either svn, or if you are comfortable with git, through github like >> >> many >> >> of the developers work now. >> >> But I don't know any details about that route. >> > >> > I'd recommend going for git already. Some instructions how to get set up >> > using Scipy's SVN mirror can be found here: >> > >> > http://projects.scipy.org/numpy/wiki/GitMirror >> > >> > >> > ? ? ? ?Pauli >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> >> Thanks-- I assume the next step is to get SVN commit rights? >> > > If we go to git soon it should be easy to post patches for review and put in > a pull request. But since that is still in the nebulous future I think svn > commit rights would be appropriate. Josef? I'm also in favor of svn commit rights. I just prefer if bigger changes are reviewed and fully tested. I don't think basic improvements and bugfixes need to go through a real review, but I will keep my eyes open. Thanks, Josef > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From gokhansever at gmail.com Wed Oct 6 21:29:38 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 6 Oct 2010 20:29:38 -0500 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 6:09 AM, Neal Becker wrote: > My research uses numpy, scipy, and pyublas for interface to c++ code. ?Now > the research phase is ending, and a real project is beginning. ?That means > delivering code to other engineers (in other groups). > > Convincing them to accept code in this form is going to be an uphill battle. > I'm sure they'd accept matlab code, only because many (recent) engineers > have some familiarity with it. ?I do all my development with emacs, but > that's not to everyone's liking. ?I think an attraction of matlab is a > decent development environment, things like: click on the name of a vector > in a gui to plot it. > > Is there anything I should look at that might offer an IDE for numpy? > > Still IPython and for the curious and for visual debuggers Eclipse + PyDev as suggested before. IPython is a great environment to experiment with the code and demonstrate others (as demonstrated in many SciPy conferences). Eclipse is a great platform if you are managing big projects and PyDev brings many useful additions on top of that. 
(Two examples: you can Ctrl-click a variable/function and jump to the definition of that variable/function, from Outline window you can easily jump to your function/class definition (possible in VIM through extensions) --more greatness are at http://stackoverflow.com/questions/54886/hidden-features-of-eclipse) -- G?khan From wardefar at iro.umontreal.ca Wed Oct 6 23:51:53 2010 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Wed, 6 Oct 2010 23:51:53 -0400 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: <4CAC2A5C.63BA.009B.1@twdb.state.tx.us> Message-ID: <63297807-15F3-4C4F-A397-5425F5E7B096@iro.umontreal.ca> On 2010-10-06, at 9:24 AM, Neal Becker wrote: > I'm trying spyder 2.0beta4. Looks interesting, but I'm not quite sure how > to use it. > > I was expecting to load a program, then be able to set breakpoints, and > examine variables. > > I see I can use the interactive console to manually enter: > u = np.arange (10) > > and then see it in variable window and right click to plot. > > I can set a breakpoint in my source code (displayed on the left in editor), > but only does something if I run using debug. But running from debug puts > me in pdb. Not sure if trying to do everything under pdb is newb friendly. > Am I missing something? One thing about Spyder in the past (and in the very recent past in one case) is that it doesn't play well on all platforms, namely I think it's still pretty weak on the Mac. If you're not worried about that platform then that's not a problem, though. As for debugging, someone else suggested winpdb which I think is probably the best you're going to do. I also like pudb which is pretty featureful but still lightweight, despite being all ncurses. plain pdb should be easy enough for someone who's written C before and used gdb... Also, does %debug work from within Spyder's IPython shell? I find the tab completion makes debugging a lot less frustrating. David From almar.klein at gmail.com Thu Oct 7 03:55:58 2010 From: almar.klein at gmail.com (Almar Klein) Date: Thu, 7 Oct 2010 09:55:58 +0200 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: On 6 October 2010 13:18, Harald Schilly wrote: > On Wed, Oct 6, 2010 at 13:09, Neal Becker wrote: > > Is there anything I should look at that might offer an IDE for numpy? > > > There are various python IDEs that might fit for your purpose. One > that comes to my mind is Spyder, but there are at least two other ones > I can't remember yet - I'm sure others will post them ;) > There's also IEP (http://code.google.com/p/iep/) and Python toolkit ( http://pythontoolkit.sourceforge.net/). Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.ratcliff at gmail.com Thu Oct 7 09:48:03 2010 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 7 Oct 2010 09:48:03 -0400 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: It costs money, but I found Wing Professional to be rather useful. I can set breakpoints, get a python shell at the breakpoint and examine variables, etc. On Thu, Oct 7, 2010 at 3:55 AM, Almar Klein wrote: > > On 6 October 2010 13:18, Harald Schilly wrote: > >> On Wed, Oct 6, 2010 at 13:09, Neal Becker wrote: >> > Is there anything I should look at that might offer an IDE for numpy? >> >> >> There are various python IDEs that might fit for your purpose. 
One >> that comes to my mind is Spyder, but there are at least two other ones >> I can't remember yet - I'm sure others will post them ;) >> > > There's also IEP (http://code.google.com/p/iep/) and Python toolkit ( > http://pythontoolkit.sourceforge.net/). > > Almar > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kristofer.Wempa at sig.com Thu Oct 7 11:53:18 2010 From: Kristofer.Wempa at sig.com (Wempa, Kristofer) Date: Thu, 7 Oct 2010 11:53:18 -0400 Subject: [SciPy-User] Build Problems On SLES11SP1 64-Bit Message-ID: <55CC0C056191CB4F97EEB9AA0F83FA9709DEDE41E2@msgbal516.ds.susq.com> Scipy Users, I ran into a build problem and I didn't see this particular problem in the bug database or in the mailing list. I get a build failure when attempting to build scipy 0.8.0 on a SLES11SP1 64-bit server. Previously, I've built scipy the exact same way on 6 different SLES10 platforms (SP1, SP2 and SP3, both 32-bit and 64-bit). None of those builds had problems. However, on SLES11, I got the following errors: creating build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools compile options: '-I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/lib/python2.7/site-packages/numpy/core/include -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/include/python2.7 -c' g++: scipy/sparse/sparsetools/csr_wrap.cxx scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/lib/python2.7/site-packages/numpy/core/include -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/include/python2.7 -c scipy/sparse/sparsetools/csr_wrap.cxx -o build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools/csr_wrap.o" failed with exit status 1 The commands I ran to build were: export ATLAS=/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/lib export ATLAS_LPATH= /siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/lib export ATLAS_IPATH=` /siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/include export LDFLAGS=" -Wl,-rpath -Wl,${ATLAS_LPATH} -shared" python setup.py build Apparently, the PRIdPTR from inttypes.h isn't getting expanded. 
I dug a little deeper and found that I could force it to expand them by defining the __STDC_FORMAT_MACROS macro. The workaround I used was to set my CFLAGS as follows: export CFLAGS=" -D__STDC_FORMAT_MACROS" Even though this workaround resolved the issue, I am wondering if there is something else that I should have done. If not, then the build process should be defining this macro automatically to avoid this error. Thanks. Kris ________________________________ IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yosefm at gmail.com Thu Oct 7 11:46:24 2010 From: yosefm at gmail.com (Yosef Meller) Date: Thu, 7 Oct 2010 17:46:24 +0200 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 1:09 PM, Neal Becker wrote: > My research uses numpy, scipy, and pyublas for interface to c++ code. ?Now > the research phase is ending, and a real project is beginning. ?That means > delivering code to other engineers (in other groups). > > Convincing them to accept code in this form is going to be an uphill battle. > I'm sure they'd accept matlab code, only because many (recent) engineers > have some familiarity with it. ?I do all my development with emacs, but > that's not to everyone's liking. ?I think an attraction of matlab is a > decent development environment, things like: click on the name of a vector > in a gui to plot it. > > Is there anything I should look at that might offer an IDE for numpy? Not saying anything about Spyder, as I need to try it myself. About the delivery - is it only for people to run, or also to edit the code? If they're just users, It might be less uphill to create a quick PyQt GUI to you code's main features and let them use that. If you wrap it with PyInstaller, they'll never know. HTH, Yosef. From ptittmann at gmail.com Thu Oct 7 19:51:53 2010 From: ptittmann at gmail.com (Peter Tittmann) Date: Thu, 7 Oct 2010 16:51:53 -0700 Subject: [SciPy-User] scipy.KDTree.query ball tree Message-ID: I'm having some trouble understanding how this query_ball_tree method works. The documentation says its parameters are: other : KDTree The tree containing points to search against r : positive float The maximum distance This is immediately strange to me because it would seem that it needs a focal point for the search. I understand that this is a search that references an existing KDTree as opposed to the query_ball_tree. So i go forth and pass it an existing KDTree: In [53]: test=spatial.KDTree.query_ball_tree(lasKDTree, 10.) 
------------------------------------------------------------ Traceback (most recent call last): File "", line 1, in TypeError: query_ball_tree() takes at least 3 arguments (2 given) It seems that the documentation is missing a parameter. I found an obscure reference here: http://article.gmane.org/gmane.comp.python.scientific.user/25517/match=query_ball_tree that states the the tree should be the second argument. If someone could help clarify what the first argument should be I'd be grateful. Thanks! Peter From josef.pktd at gmail.com Thu Oct 7 20:04:07 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 7 Oct 2010 20:04:07 -0400 Subject: [SciPy-User] scipy.KDTree.query ball tree In-Reply-To: References: Message-ID: On Thu, Oct 7, 2010 at 7:51 PM, Peter Tittmann wrote: > I'm having some trouble understanding how this query_ball_tree method > works. The documentation says its parameters are: > > other : KDTree > ? ?The tree containing points to search against > r : positive float > ? ?The maximum distance > > This is immediately strange to me because it would seem that it needs > a focal point for the search. I understand that this is a search that > references an existing KDTree as opposed to the query_ball_tree. So i > go forth and pass it an existing KDTree: > > In [53]: test=spatial.KDTree.query_ball_tree(lasKDTree, 10.) > ------------------------------------------------------------ > Traceback (most recent call last): > ?File "", line 1, in > TypeError: query_ball_tree() takes at least 3 arguments (2 given) since it's a method the first argument should be self, i.e. it should be called with a KDTree instance and not with the class. my interpretation. Josef > > It seems that the documentation is missing a parameter. I found an > obscure reference here: > http://article.gmane.org/gmane.comp.python.scientific.user/25517/match=query_ball_tree > that states the the tree should be the second argument. If someone > could help clarify what the first argument should be I'd be grateful. > > Thanks! > > Peter > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Thu Oct 7 20:04:05 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Oct 2010 00:04:05 +0000 (UTC) Subject: [SciPy-User] scipy.KDTree.query ball tree References: Message-ID: Thu, 07 Oct 2010 16:51:53 -0700, Peter Tittmann wrote: [clip] > In [53]: test=spatial.KDTree.query_ball_tree(lasKDTree, 10.) It's an instance method. You should call it as >>> a = KDTree(points_a) >>> b = KDTree(points_b) >>> a.query_ball_tree(b, 10.) -- Pauli Virtanen From ralf.gommers at googlemail.com Thu Oct 7 20:15:44 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 8 Oct 2010 08:15:44 +0800 Subject: [SciPy-User] Build Problems On SLES11SP1 64-Bit In-Reply-To: <55CC0C056191CB4F97EEB9AA0F83FA9709DEDE41E2@msgbal516.ds.susq.com> References: <55CC0C056191CB4F97EEB9AA0F83FA9709DEDE41E2@msgbal516.ds.susq.com> Message-ID: On Thu, Oct 7, 2010 at 11:53 PM, Wempa, Kristofer wrote: > Scipy Users, > > > > I ran into a build problem and I didn?t see this particular > problem in the bug database or in the mailing list. I get a build failure > when attempting to build scipy 0.8.0 on a SLES11SP1 64-bit server. > Previously, I?ve built scipy the exact same way on 6 different SLES10 > platforms (SP1, SP2 and SP3, both 32-bit and 64-bit). None of those builds > had problems. 
However, on SLES11, I got the following errors: > > > > creating build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools > > compile options: > '-I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/lib/python2.7/site-packages/numpy/core/include > -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/include/python2.7 > -c' > > g++: scipy/sparse/sparsetools/csr_wrap.cxx > > scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int > require_size(PyArrayObject*, npy_intp*, int)?: > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before > ?PRIdPTR? > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? > in format > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for > format > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before > ?PRIdPTR? > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? > in format > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for > format > > scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int > require_size(PyArrayObject*, npy_intp*, int)?: > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before > ?PRIdPTR? > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? > in format > > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for > format > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before > ?PRIdPTR? > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? > in format > > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for > format > > error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 > -Wall -fPIC > -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/lib/python2.7/site-packages/numpy/core/include > -I/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/Python-2.7/include/python2.7 > -c scipy/sparse/sparsetools/csr_wrap.cxx -o > build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools/csr_wrap.o" failed with > exit status 1 > > > > The commands I ran to build were: > > > > export ATLAS=/siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/lib > > export ATLAS_LPATH= > /siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/lib > > export ATLAS_IPATH=` > /siglinux/tc/sles11sp1_gcc-4.3.4_x86-64/sig1/atlas-3.8.3/include > > export LDFLAGS=" -Wl,-rpath -Wl,${ATLAS_LPATH} -shared" > > python setup.py build > > > > Apparently, the PRIdPTR from inttypes.h isn?t getting expanded. I dug a > little deeper and found that I could force it to expand them by defining the > __STDC_FORMAT_MACROS macro. The workaround I used was to set my CFLAGS as > follows: > > > > export CFLAGS=? -D__STDC_FORMAT_MACROS? > > > > Even though this workaround resolved the issue, I am wondering if there is > something else that I should have done. If not, then the build process > should be defining this macro automatically to avoid this error. Thanks. > > > This was fixed in r6645. That was just after 0.8.0, which did not support Python 2.7 (but it should work with this fix). Cheers, Ralf > Kris > > > > ------------------------------ > IMPORTANT: The information contained in this email and/or its attachments > is confidential. If you are not the intended recipient, please notify the > sender immediately by reply and immediately delete this message and all its > attachments. 
Any review, use, reproduction, disclosure or dissemination of > this message or any attachment by an unintended recipient is strictly > prohibited. Neither this message nor any attachment is intended as or should > be construed as an offer, solicitation or recommendation to buy or sell any > security or other financial instrument. Neither the sender, his or her > employer nor any of their respective affiliates makes any warranties as to > the completeness or accuracy of any of the information contained herein or > that this message or any of its attachments is free of viruses. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aarchiba at physics.mcgill.ca Thu Oct 7 21:43:34 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Thu, 7 Oct 2010 21:43:34 -0400 Subject: [SciPy-User] scipy.KDTree.query ball tree In-Reply-To: References: Message-ID: Hi, In your example you are having two problems. One of them is that the python error message is confusing; this is a method intended to be called on an existing kdtree instance, and you are calling it directly. Some languages simply do not allow this, but python does, so instead you get the strange "missing argument" message; this occurs because if you don't call it on an instance, you need to supply an instance as the normally-invisible "self" argument. The basic problem you're having, though, is that this method is used for a "two-tree" query: you use it when you have two kd-trees and you want to find all the points of one that are near each point of the other. Think of it as if query_ball were being supplied many points, but instead of an array you're supplying them stored in a kd-tree. The reason this exists is that there are some algorithmic shortcuts you can take when both sets of points are stored in spatial data structures. But unless you know you want this, this is probably not the function you want to be using. In fact, this is probably not just the wrong function but the wrong kd-tree implementation; the compiled one, available as cKDTree, is faster in almost every way; the python implementation is there because it's more flexible (so that for example you could implement your own algorithms that walk the trees). Anne On 7 October 2010 19:51, Peter Tittmann wrote: > I'm having some trouble understanding how this query_ball_tree method > works. The documentation says its parameters are: > > other : KDTree > ? ?The tree containing points to search against > r : positive float > ? ?The maximum distance > > This is immediately strange to me because it would seem that it needs > a focal point for the search. I understand that this is a search that > references an existing KDTree as opposed to the query_ball_tree. So i > go forth and pass it an existing KDTree: > > In [53]: test=spatial.KDTree.query_ball_tree(lasKDTree, 10.) > ------------------------------------------------------------ > Traceback (most recent call last): > ?File "", line 1, in > TypeError: query_ball_tree() takes at least 3 arguments (2 given) > > It seems that the documentation is missing a parameter. I found an > obscure reference here: > http://article.gmane.org/gmane.comp.python.scientific.user/25517/match=query_ball_tree > that states the the tree should be the second argument. If someone > could help clarify what the first argument should be I'd be grateful. > > Thanks! 
> > Peter > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wesmckinn at gmail.com Fri Oct 8 01:37:07 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Fri, 8 Oct 2010 01:37:07 -0400 Subject: [SciPy-User] Any planned work on scipy.stats.distributions? In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 3:41 PM, wrote: > On Wed, Oct 6, 2010 at 1:05 PM, Charles R Harris > wrote: >> >> >> On Tue, Oct 5, 2010 at 11:28 PM, Wes McKinney wrote: >>> >>> On Mon, Oct 4, 2010 at 2:55 PM, Pauli Virtanen wrote: >>> > Mon, 04 Oct 2010 13:47:08 -0400, josef.pktd wrote: >>> >> On Mon, Oct 4, 2010 at 1:33 PM, Wes McKinney >>> >> wrote: >>> > [clip] >>> >>> Got it. Is SciPy development still taking place in SVN? I should get >>> >>> commit rights and start chipping away on these misc issues. >>> >> >>> >> Either svn, or if you are comfortable with git, through github like >>> >> many >>> >> of the developers work now. >>> >> But I don't know any details about that route. >>> > >>> > I'd recommend going for git already. Some instructions how to get set up >>> > using Scipy's SVN mirror can be found here: >>> > >>> > http://projects.scipy.org/numpy/wiki/GitMirror >>> > >>> > >>> > ? ? ? ?Pauli >>> > >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> >>> Thanks-- I assume the next step is to get SVN commit rights? >>> >> >> If we go to git soon it should be easy to post patches for review and put in >> a pull request. But since that is still in the nebulous future I think svn >> commit rights would be appropriate. Josef? > > I'm also in favor of svn commit rights. I just prefer if bigger > changes are reviewed and fully tested. > I don't think basic improvements and bugfixes need to go through a > real review, but I will keep my eyes open. > > Thanks, > > Josef > >> >> Chuck >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I'm all set to develop in git-- once you set up my svn credentials let me know so I can push changes via git-svn. Thanks for your help, Wes From ndbecker2 at gmail.com Fri Oct 8 07:55:03 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 08 Oct 2010 07:55:03 -0400 Subject: [SciPy-User] ide for numpy? References: Message-ID: Yosef Meller wrote: > On Wed, Oct 6, 2010 at 1:09 PM, Neal Becker wrote: >> My research uses numpy, scipy, and pyublas for interface to c++ code. >> Now the research phase is ending, and a real project is beginning. That >> means delivering code to other engineers (in other groups). >> >> Convincing them to accept code in this form is going to be an uphill >> battle. I'm sure they'd accept matlab code, only because many (recent) >> engineers have some familiarity with it. I do all my development with >> emacs, but that's not to everyone's liking. I think an attraction of >> matlab is a decent development environment, things like: click on the >> name of a vector in a gui to plot it. >> >> Is there anything I should look at that might offer an IDE for numpy? > > Not saying anything about Spyder, as I need to try it myself. 
About > the delivery - is it only for people to run, or also to edit the code? > If they're just users, It might be less uphill to create a quick PyQt > GUI to you code's main features and let them use that. If you wrap it > with PyInstaller, they'll never know. > > HTH, > Yosef. Not just users, I need development group to take ownership of the code. From ptittmann at gmail.com Fri Oct 8 11:16:43 2010 From: ptittmann at gmail.com (Peter Tittmann) Date: Fri, 8 Oct 2010 08:16:43 -0700 Subject: [SciPy-User] ide for numpy? In-Reply-To: References: Message-ID: I've just switched from emacs to spyder and am quite happy with it. Its a bit easier for a newbie to access documentation and explore methods with the live object browser. I've also experimented with PyDev for eclipse which is nice as well but eclipse is really made for java. The spyder option to run in Ipython or standard python interpreter is nice and the latest version can run a terminal shell as well as multiple python/ipython buffers. Its not completely bug free (tab completion stops working sort of randomly) but its not buggy in ways that matter. Good luck! Peter On 10/8/10, Neal Becker wrote: > Yosef Meller wrote: > >> On Wed, Oct 6, 2010 at 1:09 PM, Neal Becker wrote: >>> My research uses numpy, scipy, and pyublas for interface to c++ code. >>> Now the research phase is ending, and a real project is beginning. That >>> means delivering code to other engineers (in other groups). >>> >>> Convincing them to accept code in this form is going to be an uphill >>> battle. I'm sure they'd accept matlab code, only because many (recent) >>> engineers have some familiarity with it. I do all my development with >>> emacs, but that's not to everyone's liking. I think an attraction of >>> matlab is a decent development environment, things like: click on the >>> name of a vector in a gui to plot it. >>> >>> Is there anything I should look at that might offer an IDE for numpy? >> >> Not saying anything about Spyder, as I need to try it myself. About >> the delivery - is it only for people to run, or also to edit the code? >> If they're just users, It might be less uphill to create a quick PyQt >> GUI to you code's main features and let them use that. If you wrap it >> with PyInstaller, they'll never know. >> >> HTH, >> Yosef. > > Not just users, I need development group to take ownership of the code. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Sent from my mobile device From nico.schloemer at gmail.com Fri Oct 8 11:42:10 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Fri, 8 Oct 2010 17:42:10 +0200 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: Hello again! > def callback(xk): > residuals.append(M*xk - b) This approach works nicely from a code-beautification point of view, thanks again for the hint. There's a major drawback, though, which may not be easily fixed: To calculate the residual, one has to compute an extra matrix-vector multiplication *per step*, which effectively increases the runtime of the CG algorithm by a factor of two (!). Ideally -- and classically -- the residual is retrieved directly from the CG method itself. Now obviously, the callback method does not provide for residual vector input, but I'm not quite sure what's the reason for that. In the numerical computation world, this would be considered a major deficiency. 
Cheers, Nico On Fri, Oct 1, 2010 at 2:56 PM, Pauli Virtanen wrote: > Fri, 01 Oct 2010 12:06:30 +0200, Nico Schl?mer wrote: >>> Use the 'callback' argument. >> >> That works alright I guess. >> What I do right now is creating a *global array that's filled up as the >> callback function is called, after which I go ahead and plot it. Using a >> global variable here seems somewhat ugly to me -- might there be a more >> elegant solution at all? > > Use variables from the outer scope: > > def doit(M, b): > ? ?residuals = [] > > ? ?def callback(xk): > ? ? ? ?residuals.append(M*xk - b) > > ? ?sol, info = scipy.sparse.linalg.cg(M, b, callback=callback) > ? ?return residuals > > > http://docs.python.org/tutorial/classes.html#python-scopes-and-namespaces > > http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Fri Oct 8 12:30:10 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Oct 2010 16:30:10 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? References: Message-ID: Fri, 08 Oct 2010 17:42:10 +0200, Nico Schl?mer wrote: [clip] > There's a major drawback, though, which may not be easily fixed: To > calculate the residual, one has to compute an extra matrix-vector > multiplication *per step*, which effectively increases the runtime of > the CG algorithm by a factor of two (!). Ideally -- and classically -- > the residual is retrieved directly from the CG method itself. Now > obviously, the callback method does not provide for residual vector > input, but I'm not quite sure what's the reason for that. That can quite likely be easily fixed by making a small change to Scipy. An interesting point here is that the `gmres` routine on the other hand only passes the residual norm to the callback, so there would be also some other things to harmonize here. Is there some other information that would be useful in the callback? I can think of: residual vector, residual 2-norm, and current iterate -- what else? If you have time, please file a bug report, http://projects.scipy.org/scipy/newticket so that we won't forget this issue. -- Pauli Virtanen From david.kirkby at onetel.net Fri Oct 8 14:14:25 2010 From: david.kirkby at onetel.net (Dr. David Kirkby) Date: Fri, 08 Oct 2010 19:14:25 +0100 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues Message-ID: <4CAF5F81.6050800@onetel.net> Numpy and SciPy are used in the Sage project. A ticket has been opened to update Numpy to 1.5.0 and SciPy to 0.8. We hit a problem with Numpy and Solaris, though the number developers have now committed a fix for that issue. However, we have an issue with people being unable to build on OS X. I personally do not have an OS X machine, but there's a log of a failed build here: http://sage.math.washington.edu/home/palmieri/misc/scipy-0.8.log For reasons I think are rather stupid, we use a script sage_fortran which is the Fortran compiler, so references below to /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran is actually just a link to 'gfortran' It seems to resolve around the error message about two files "...have the same architectures (ppc64) and can't be in the same fat output file See below. 
creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/dfftpack compile options: '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include -c' sage_fortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the same architectures (ppc64) and can't be in the same fat output file lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the same architectures (ppc64) and can't be in the same fat output file error: Command "/Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran -Wall -ffixed-form -fno-second-underscore -arch ppc -arch x86_64 -arch ppc64 -fPIC -O3 -funroll-loops -I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/dfftpack/dcosqb.o" failed with exit status 1 Error building scipy. Googling has thrown up a few possible ideas for how to solve this. * http://projects.scipy.org/numpy/ticket/1399 * http://article.gmane.org/gmane.comp.python.scientific.devel/14564 * http://article.gmane.org/gmane.comp.python.scientific.user/26314 I now that http://projects.scipy.org/numpy/ticket/1399 has been open 8 months, is at the highest priority, but does not seem to have been resolved yet. It is unclear if our problem is the same or not. Any suggestions? Dave From robert.kern at gmail.com Fri Oct 8 14:23:55 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 8 Oct 2010 13:23:55 -0500 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues In-Reply-To: <4CAF5F81.6050800@onetel.net> References: <4CAF5F81.6050800@onetel.net> Message-ID: On Fri, Oct 8, 2010 at 13:14, Dr. David Kirkby wrote: > Numpy and SciPy are used in the Sage project. A ticket has been opened to update > Numpy to 1.5.0 and SciPy to 0.8. > > We hit a problem with Numpy and Solaris, though the number developers have now > committed a fix for that issue. > > However, we have an issue with people being unable to build on OS X. I > personally do not have an OS X machine, but there's a log of a failed build here: > > http://sage.math.washington.edu/home/palmieri/misc/scipy-0.8.log > > For reasons I think are rather stupid, we use a script sage_fortran which is the > Fortran compiler, so references below to > /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran > > is actually just a link to 'gfortran' > > It seems to resolve around the error message about two files "...have the same > architectures (ppc64) and can't be in the same fat output file > > See below. 
> > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/dfftpack > compile options: > '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c' > sage_fortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f > lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and > /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the same > architectures (ppc64) and can't be in the same fat output file > lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and > /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the same > architectures (ppc64) and can't be in the same fat output file > error: Command > "/Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran -Wall > -ffixed-form -fno-second-underscore -arch ppc -arch x86_64 -arch ppc64 -fPIC -O3 > -funroll-loops > -I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o Are the multiple "-c" options causing issues? From the build log, it looks like "-c" is being added explicitly somewhere. compile options: '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include -c' Exactly where did the gfortran compiler come from? What version is it? What architecture is the machine doing the build? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From nico.schloemer at gmail.com Fri Oct 8 15:14:42 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Fri, 8 Oct 2010 21:14:42 +0200 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: > If you have time, please file a bug report, Done: http://projects.scipy.org/scipy/ticket/1297. On Fri, Oct 8, 2010 at 6:30 PM, Pauli Virtanen wrote: > Fri, 08 Oct 2010 17:42:10 +0200, Nico Schl?mer wrote: > [clip] >> There's a major drawback, though, which may not be easily fixed: To >> calculate the residual, one has to compute an extra matrix-vector >> multiplication *per step*, which effectively increases the runtime of >> the CG algorithm by a factor of two (!). Ideally -- and classically -- >> the residual is retrieved directly from the CG method itself. Now >> obviously, the callback method does not provide for residual vector >> input, but I'm not quite sure what's the reason for that. > > That can quite likely be easily fixed by making a small change to Scipy. > An interesting point here is that the `gmres` routine on the other hand > only passes the residual norm to the callback, so there would be also > some other things to harmonize here. > > Is there some other information that would be useful in the callback? I > can think of: residual vector, residual 2-norm, and current iterate -- > what else? > > If you have time, please file a bug report, > > ? ?http://projects.scipy.org/scipy/newticket > > so that we won't forget this issue. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david.kirkby at onetel.net Fri Oct 8 16:14:15 2010 From: david.kirkby at onetel.net (Dr. 
David Kirkby) Date: Fri, 08 Oct 2010 21:14:15 +0100 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues In-Reply-To: References: <4CAF5F81.6050800@onetel.net> Message-ID: <4CAF7B97.4010808@onetel.net> On 10/ 8/10 07:23 PM, Robert Kern wrote: > Are the multiple "-c" options causing issues? From the build log, it > looks like "-c" is being added explicitly somewhere. > > compile options: > '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c' > > Exactly where did the gfortran compiler come from? What version is it? > What architecture is the machine doing the build? > I'll have to ask, as it's not my own machine where this problem is seen, but it is seen on two OS X systems. I run OpenSolaris myself. I'll get back. dave From fperez.net at gmail.com Fri Oct 8 17:04:37 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Oct 2010 14:04:37 -0700 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: On Fri, Oct 8, 2010 at 9:30 AM, Pauli Virtanen wrote: > > That can quite likely be easily fixed by making a small change to Scipy. > An interesting point here is that the `gmres` routine on the other hand > only passes the residual norm to the callback, so there would be also > some other things to harmonize here. Quick question, why is cg marked as deprecated? Class Docstring: scipy.linalg.cg is DEPRECATED!! -- use scipy.sparse.linalg.cg instead Use Conjugate Gradient iteration to solve A x = b While I perfectly understand that in production work, the typical use cases require sparse matrices, there's still plenty of value for a dense implementation of CG in scipy, I would think (especially given how sparse matrices are a little harder to use for a beginner than plain dense numpy arrays). Am I missing something obvious? Thanks, f From pav at iki.fi Fri Oct 8 17:42:47 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Oct 2010 21:42:47 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? References: Message-ID: Fri, 08 Oct 2010 14:04:37 -0700, Fernando Perez wrote: [clip] > While I perfectly understand that in production work, the typical use > cases require sparse matrices, there's still plenty of value for a dense > implementation of CG in scipy, I would think (especially given how > sparse matrices are a little harder to use for a beginner than plain > dense numpy arrays). > > Am I missing something obvious? There is no dense implementation. The one that was under scipy.linalg is the same as what is in scipy.sparse.linalg -- I guess the misleading location was a reason for moving it. Indeed, :) __all__ = ['bicg','bicgstab','cg','cgs','gmres','qmr'] # Deprecated on January 26, 2008 from scipy.sparse.linalg import isolve from numpy import deprecate for name in __all__: oldfn = getattr(isolve, name) oldname='scipy.linalg.' + name newname='scipy.sparse.linalg.' + name newfn = deprecate(oldfn, oldname=oldname, newname=newname) exec(name + ' = newfn') From fperez.net at gmail.com Fri Oct 8 17:56:35 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Oct 2010 14:56:35 -0700 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: On Fri, Oct 8, 2010 at 2:42 PM, Pauli Virtanen wrote: > There is no dense implementation. The one that was under scipy.linalg is > the same as what is in scipy.sparse.linalg -- I guess the misleading > location was a reason for moving it. 
> > Indeed, :) Ah, thanks :) In that case it might be worth changing the message and warning from: /usr/lib/python2.6/dist-packages/numpy/lib/utils.py:108: DeprecationWarning: scipy.linalg.cg is deprecated, use scipy.sparse.linalg.cg warnings.warn(str1, DeprecationWarning) to simply indicating that the *name* is deprecated, but the actual function lives in the sparse library. Since the function already accepts dense matrices just fine, indicating this should be enough. I'm happy to make the changes if you agree. Cheers, f From pav at iki.fi Fri Oct 8 18:36:35 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Oct 2010 22:36:35 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? References: Message-ID: Fri, 08 Oct 2010 14:56:35 -0700, Fernando Perez wrote: [clip] > /usr/lib/python2.6/dist-packages/numpy/lib/utils.py:108: > DeprecationWarning: scipy.linalg.cg is deprecated, use > scipy.sparse.linalg.cg > warnings.warn(str1, DeprecationWarning) > > to simply indicating that the *name* is deprecated, but the actual > function lives in the sparse library. Since the function already > accepts dense matrices just fine, indicating this should be enough. > > I'm happy to make the changes if you agree. Those functions are removed in 0.9, so I guess we're OK without changes. Pauli From fperez.net at gmail.com Fri Oct 8 18:54:51 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Oct 2010 15:54:51 -0700 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: On Fri, Oct 8, 2010 at 3:36 PM, Pauli Virtanen wrote: > > > Those functions are removed in 0.9, so I guess we're OK without changes. > Ah, even better, nothing to do :) Cheers, f From ralf.gommers at googlemail.com Fri Oct 8 19:21:40 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 9 Oct 2010 07:21:40 +0800 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues In-Reply-To: <4CAF5F81.6050800@onetel.net> References: <4CAF5F81.6050800@onetel.net> Message-ID: On Sat, Oct 9, 2010 at 2:14 AM, Dr. David Kirkby wrote: > Numpy and SciPy are used in the Sage project. A ticket has been opened to > update > Numpy to 1.5.0 and SciPy to 0.8. > > We hit a problem with Numpy and Solaris, though the number developers have > now > committed a fix for that issue. > > However, we have an issue with people being unable to build on OS X. I > personally do not have an OS X machine, but there's a log of a failed build > here: > > http://sage.math.washington.edu/home/palmieri/misc/scipy-0.8.log > > For reasons I think are rather stupid, we use a script sage_fortran which > is the > Fortran compiler, so references below to > /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran > > is actually just a link to 'gfortran' > > It seems to resolve around the error message about two files "...have the > same > architectures (ppc64) and can't be in the same fat output file > Some googling turns up this which seems related to your issue: http://omgili.com/mailinglist/boost-users/lists/boost/org/2E4E2E72-3D6F-41B3-BB5B-0D81145DEA59orchidseedorg.html You are using the 10.6 SDK and gcc 4.2. In the 10.6 SDK the ppc64 architecture is not supported anymore, you want to use 10.4 or 10.5 SDK. Since Python is built with gcc 4.0 you want to do the same if you want C++ support for ppc64 (which you'll need for scipy.sparsetools). The above may be irrelevant for you though, since Sage is distributing binaries for each OS X version separately, right? 
10.6 doesn't install on ppc64 machines, so no need to build that arch at all. > See below. > > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src > creating build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/dfftpack > compile options: > > '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c' > sage_fortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f > lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and > /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the > same > architectures (ppc64) and can't be in the same fat output file > lipo: /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccHZoMkv.out and > /var/folders/JV/JVYCpshdHd4FFoThuUgD8k+++TI/-Tmp-//ccpHWmlC.out have the > same > architectures (ppc64) and can't be in the same fat output file > error: Command > "/Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran > -Wall > -ffixed-form -fno-second-underscore -arch ppc -arch x86_64 -arch ppc64 > -fPIC -O3 > -funroll-loops > > -I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/dfftpack/dcosqb.o" failed > with > exit status 1 > Error building scipy. > > Googling has thrown up a few possible ideas for how to solve this. > > > * http://projects.scipy.org/numpy/ticket/1399 > * http://article.gmane.org/gmane.comp.python.scientific.devel/14564 > * http://article.gmane.org/gmane.comp.python.scientific.user/26314 > > I now that > > http://projects.scipy.org/numpy/ticket/1399 > This is a different issue, but that patch could have caused your problem. But if so, the workaround in the email threads above you link to should have resolved it. Looking at these lines of the build log: Fortran f77 compiler: /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran -Wall -ffixed-form -fno-second-underscore -arch ppc -arch x86_64 -arch ppc64 -fPIC -O3 -funroll-loops Fortran f90 compiler: /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran -Wall -fno-second-underscore -arch ppc -arch x86_64 -arch ppc64 -fPIC -O3 -funroll-loops shows that the i386 (or i686) target is missing, so it's more likely your build script is doing something strange here. > > has been open 8 months, is at the highest priority, but does not seem to > have > been resolved yet. It is unclear if our problem is the same or not. > It was reopened last week, so not exactly 8 months, and should be fixed before 1.5.1rc1 next week. Most likely by just reverting r8453. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.kirkby at onetel.net Fri Oct 8 22:17:09 2010 From: david.kirkby at onetel.net (Dr. David Kirkby) Date: Sat, 09 Oct 2010 03:17:09 +0100 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues In-Reply-To: References: <4CAF5F81.6050800@onetel.net> Message-ID: <4CAFD0A5.6080509@onetel.net> On 10/ 9/10 12:21 AM, Ralf Gommers wrote: > On Sat, Oct 9, 2010 at 2:14 AM, Dr. David Kirkbywrote: > >> Numpy and SciPy are used in the Sage project. A ticket has been opened to >> update >> Numpy to 1.5.0 and SciPy to 0.8. >> >> We hit a problem with Numpy and Solaris, though the number developers have >> now >> committed a fix for that issue. >> >> However, we have an issue with people being unable to build on OS X. 
I >> personally do not have an OS X machine, but there's a log of a failed build >> here: >> >> http://sage.math.washington.edu/home/palmieri/misc/scipy-0.8.log >> >> For reasons I think are rather stupid, we use a script sage_fortran which >> is the >> Fortran compiler, so references below to >> /Applications/sage_builds/numpy/sage-4.6.alpha2/local/bin/sage_fortran >> >> is actually just a link to 'gfortran' >> >> It seems to resolve around the error message about two files "...have the >> same >> architectures (ppc64) and can't be in the same fat output file >> > > Some googling turns up this which seems related to your issue: > http://omgili.com/mailinglist/boost-users/lists/boost/org/2E4E2E72-3D6F-41B3-BB5B-0D81145DEA59orchidseedorg.html Thank you for that. I need to get the people with the OS X problems to look at that. As I don't run OS X myself (I prefere Solaris), it's hard to say. I know Sage ships a Fortran compiler for OS X, but I'm not sure if there is more than one of them. On thing Sage tries to do is build binaries which will run on as wider range as platforms as possible. It seems that getting a binary to run on PPC and well as the latest OS X is just impossible. What is odd, is that the code in Sage was working until we upgraded Numpy and SciPy. Dave From lorenzo.isella at gmail.com Sat Oct 9 09:21:15 2010 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sat, 09 Oct 2010 15:21:15 +0200 Subject: [SciPy-User] String Matching in SciPy Message-ID: <4CB06C4B.6030202@gmail.com> Dear All, Please consider a text file like the one you can download from http://dl.dropbox.com/u/5685598/time_series25_.dat where every element is just the result of the application of an hash function. That file stands for a time series whose length I will call n. Consider position i along the file; the "past" of i is given by entries [0:i] along the file, whereas the "future" of i (which includes i itself) is given by the [i:n] positions. My goal is to be able to find the length of the shortest substring in the future of i which has not been already seen in its past. Consider this example series a b c c i 0 1 2 3 which would lead to i 0 1 2 3 L 1 1 1 2 where L is the length I am looking for calculated for various choices of i. In the end of the day, I need a sort of built-in grep function for Python, but the first step is to understand if there is an efficient way to detect whether a certain substring (in the future of i) is a subset of the string giving the past of i. Any suggestion is welcome. Cheers Lorenzo From pav at iki.fi Sat Oct 9 11:03:52 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 9 Oct 2010 15:03:52 +0000 (UTC) Subject: [SciPy-User] String Matching in SciPy References: <4CB06C4B.6030202@gmail.com> Message-ID: Sat, 09 Oct 2010 15:21:15 +0200, Lorenzo Isella wrote: [clip] > where L is the length I am looking for calculated for various choices of > i. In the end of the day, I need a sort of built-in grep function for > Python, but the first step is to understand if there is an efficient way > to detect whether a certain substring (in the future of i) is a subset > of the string giving the past of i. > Any suggestion is welcome. As far as I know, there's no builtin function in Numpy for doing this. 
There are probably several choices how to proceed, among them: (i) Python's regexp module works also with buffers, so you can directly use it on character arrays: >>> import numpy as np >>> import re >>> x = np.array(list('asdasdasds'), dtype='S1') >>> x array(['a', 's', 'd', 'a', 's', 'd', 'a', 's', 'd', 's'], dtype='|S1') >>> re.search('sda', x[:4]).start() 1 This does not copy the data to a string, so it should be efficient. If you need to find all occurrences, you can do >>> matches = re.finditer('sda', x) >>> offsets = [m.start() for m in matches] >>> offsets [1, 4] If you have a large number of matches, this approach may become less efficient, as it needs to form a Python match object for each match. (ii) Write a simple function in Cython that does the string matching, and returns an integer array of offsets. -- Pauli Virtanen From josef.pktd at gmail.com Sat Oct 9 23:05:58 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 9 Oct 2010 23:05:58 -0400 Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? In-Reply-To: References: Message-ID: On Fri, Oct 8, 2010 at 12:30 PM, Pauli Virtanen wrote: > Fri, 08 Oct 2010 17:42:10 +0200, Nico Schl?mer wrote: > [clip] >> There's a major drawback, though, which may not be easily fixed: To >> calculate the residual, one has to compute an extra matrix-vector >> multiplication *per step*, which effectively increases the runtime of >> the CG algorithm by a factor of two (!). Ideally -- and classically -- >> the residual is retrieved directly from the CG method itself. Now >> obviously, the callback method does not provide for residual vector >> input, but I'm not quite sure what's the reason for that. > > That can quite likely be easily fixed by making a small change to Scipy. > An interesting point here is that the `gmres` routine on the other hand > only passes the residual norm to the callback, so there would be also > some other things to harmonize here. > > Is there some other information that would be useful in the callback? I > can think of: residual vector, residual 2-norm, and current iterate -- > what else? > > If you have time, please file a bug report, > > ? ?http://projects.scipy.org/scipy/newticket > > so that we won't forget this issue. A related idea: Is it possible or is there a trick to tell an optimizer from a callback function to stop? I never used callback function, so I don't know what's possible. I have two usecases in mind: Stata for example switches optimizers during estimation of a statistical model. It's possible to limit the number of iterations, and restart with a different optimizer but it might be useful to switch depending on the state of the optimization. fmin_bfgs doesn't look very robust, quite often it goes of into neverland and I can wait quite some time until I can restart with a different optimizer. It would be useful in this case if we can inspect what's going on and kill it automatically and switch to something more robust. Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Sun Oct 10 02:21:42 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 10 Oct 2010 15:21:42 +0900 Subject: [SciPy-User] [ANN] Bento 0.0.4 release Message-ID: Hi, I am pleased to announce the release 0.0.4 for Bento. This release is a significant milestone, as it is able to build both numpy and scipy on some platforms (Linux and mac os x). 
The main features are: - New hooks to modify, override compilation options on a per extension basis. - Recursive package description support - Option to generate a python module containing all install paths, to avoid using __file__ without depending on complex tools ala pkg_resources. - Improved documentation - Some work toward python 3 support (does not work yet for complex packages) Concerning numpy/scipy bento-based builds, a few interesting facts: - amount of code for both numpy/scipy is smaller than scons-based or distutils-based builds. - by playing a bit with options and using clang, you can build a full debug build of numpy in 5-6 seconds, and ~ 40 seconds for scipy on a recent computer. - _bento_build branches for numpy and scipy are found on my account http://github.com/cournape You can download bento on github: http://github.com/cournape/Bento, and look at the documentation on http://cournape.github.com/Bento/html/index.html. cheers, David _______________________________________________ From pav at iki.fi Sun Oct 10 07:34:23 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 10 Oct 2010 11:34:23 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.cg statistics? References: Message-ID: Sat, 09 Oct 2010 23:05:58 -0400, josef.pktd wrote: [clip] > A related idea: > > Is it possible or is there a trick to tell an optimizer from a callback > function to stop? > > I never used callback function, so I don't know what's possible. Raising an exception will at least bail out from the optimizer. -- Pauli Virtanen From ognjen.i at gmail.com Sun Oct 10 23:22:45 2010 From: ognjen.i at gmail.com (omnyx) Date: Sun, 10 Oct 2010 20:22:45 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] gettind scipy.quad to report values where integral is divergent Message-ID: <29930881.post@talk.nabble.com> I have a function f(x,y) and I want to do integration in x for different values of y. I have a list of values of y and just run through every one. For some values of y the integral might be diverging and right now, quad only outputs the standard message "Warning: The integral is probably divergent, or slowly convergent." Is there a way to see the specific value of y for which the integral was divergent, along with the output warning message? thanks -- View this message in context: http://old.nabble.com/gettind-scipy.quad-to-report-values-where-integral-is-divergent-tp29930881p29930881.html Sent from the Scipy-User mailing list archive at Nabble.com. From pav at iki.fi Mon Oct 11 04:36:03 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 11 Oct 2010 08:36:03 +0000 (UTC) Subject: [SciPy-User] [SciPy-user] gettind scipy.quad to report values where integral is divergent References: <29930881.post@talk.nabble.com> Message-ID: Sun, 10 Oct 2010 20:22:45 -0700, omnyx wrote: > I have a function f(x,y) and I want to do integration in x for different > values of y. I have a list of values of y and just run through every > one. For some values of y the integral might be diverging and right now, > quad only outputs the standard message > "Warning: The integral is probably divergent, or slowly convergent." > > Is there a way to see the specific value of y for which the integral was > divergent, along with the output warning message? Set the full_output=True option, and look what goes into the `infodict` output argument. 
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad From Dharhas.Pothina at twdb.state.tx.us Mon Oct 11 08:54:56 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 11 Oct 2010 07:54:56 -0500 Subject: [SciPy-User] scipy.in website down? Message-ID: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> Hi, I'm trying to submit an abstract for Scipy India 2010 but it looks like the scipy.in website is down. Can anyone confirm this? - dharhas From vincent at vincentdavis.net Mon Oct 11 10:04:53 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Mon, 11 Oct 2010 08:04:53 -0600 Subject: [SciPy-User] scipy.in website down? In-Reply-To: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> References: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> Message-ID: Yes it seems down, I didn't know there was a scipy Inda. Vincent On Mon, Oct 11, 2010 at 6:54 AM, Dharhas Pothina wrote: > Hi, > > I'm trying to submit an abstract for Scipy India 2010 but it looks like the scipy.in website is down. Can anyone confirm this? > > - dharhas > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Thanks Vincent Davis 720-301-3003 From punchagan at gmail.com Mon Oct 11 11:11:12 2010 From: punchagan at gmail.com (Puneeth) Date: Mon, 11 Oct 2010 20:41:12 +0530 Subject: [SciPy-User] scipy.in website down? In-Reply-To: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> References: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> Message-ID: On Mon, Oct 11, 2010 at 6:24 PM, Dharhas Pothina wrote: > Hi, > > I'm trying to submit an abstract for Scipy India 2010 but it looks like the scipy.in website is down. Can anyone confirm this? Apparently, there was some server side work being done. It is back up, now. -- Puneeth From punchagan at gmail.com Mon Oct 11 11:11:12 2010 From: punchagan at gmail.com (Puneeth) Date: Mon, 11 Oct 2010 20:41:12 +0530 Subject: [SciPy-User] scipy.in website down? In-Reply-To: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> References: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> Message-ID: On Mon, Oct 11, 2010 at 6:24 PM, Dharhas Pothina wrote: > Hi, > > I'm trying to submit an abstract for Scipy India 2010 but it looks like the scipy.in website is down. Can anyone confirm this? Apparently, there was some server side work being done. It is back up, now. -- Puneeth From punchagan at gmail.com Mon Oct 11 11:11:12 2010 From: punchagan at gmail.com (Puneeth) Date: Mon, 11 Oct 2010 20:41:12 +0530 Subject: [SciPy-User] scipy.in website down? In-Reply-To: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> References: <4CB2C2D0.63BA.009B.1@twdb.state.tx.us> Message-ID: On Mon, Oct 11, 2010 at 6:24 PM, Dharhas Pothina wrote: > Hi, > > I'm trying to submit an abstract for Scipy India 2010 but it looks like the scipy.in website is down. Can anyone confirm this? Apparently, there was some server side work being done. It is back up, now. -- Puneeth From ptittmann at gmail.com Mon Oct 11 14:08:04 2010 From: ptittmann at gmail.com (Peter Tittmann) Date: Mon, 11 Oct 2010 11:08:04 -0700 Subject: [SciPy-User] scipy.KDTree.query ball tree In-Reply-To: References: Message-ID: On Thu, Oct 7, 2010 at 6:43 PM, Anne Archibald wrote: > Hi, > > In your example you are having two problems. One of them is that the > python error message is confusing; this is a method intended to be > called on an existing kdtree instance, and you are calling it > directly. 
Some languages simply do not allow this, but python does, so > instead you get the strange "missing argument" message; this occurs > because if you don't call it on an instance, you need to supply an > instance as the normally-invisible "self" argument. > > The basic problem you're having, though, is that this method is used > for a "two-tree" query: you use it when you have two kd-trees and you > want to find all the points of one that are near each point of the > other. Think of it as if query_ball were being supplied many points, > but instead of an array you're supplying them stored in a kd-tree. The > reason this exists is that there are some algorithmic shortcuts you > can take when both sets of points are stored in spatial data > structures. But unless you know you want this, this is probably not > the function you want to be using. > > In fact, this is probably not just the wrong function but the wrong > kd-tree implementation; the compiled one, available as cKDTree, is > faster in almost every way; the python implementation is there because > it's more flexible (so that for example you could implement your own > algorithms that walk the trees). > > Anne > > On 7 October 2010 19:51, Peter Tittmann wrote: >> I'm having some trouble understanding how this query_ball_tree method >> works. The documentation says its parameters are: >> >> other : KDTree >> ? ?The tree containing points to search against >> r : positive float >> ? ?The maximum distance >> >> This is immediately strange to me because it would seem that it needs >> a focal point for the search. I understand that this is a search that >> references an existing KDTree as opposed to the query_ball_tree. So i >> go forth and pass it an existing KDTree: >> >> In [53]: test=spatial.KDTree.query_ball_tree(lasKDTree, 10.) >> ------------------------------------------------------------ >> Traceback (most recent call last): >> ?File "", line 1, in >> TypeError: query_ball_tree() takes at least 3 arguments (2 given) >> >> It seems that the documentation is missing a parameter. I found an >> obscure reference here: >> http://article.gmane.org/gmane.comp.python.scientific.user/25517/match=query_ball_tree >> that states the the tree should be the second argument. If someone >> could help clarify what the first argument should be I'd be grateful. >> >> Thanks! >> >> Peter >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > Thanks all. Would it be possible to search the cKDTree with the python query_ball_tree? My application could use the efficiency of the cKDTree but I'm not clear how I query distance instead of # of points using the existing methods in cKDTree. gratefully, Peter From ognjen.i at gmail.com Mon Oct 11 14:28:59 2010 From: ognjen.i at gmail.com (omnyx) Date: Mon, 11 Oct 2010 11:28:59 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] gettind scipy.quad to report values where integral is divergent In-Reply-To: References: <29930881.post@talk.nabble.com> Message-ID: <29936688.post@talk.nabble.com> thanks! Pauli Virtanen-3 wrote: > > Sun, 10 Oct 2010 20:22:45 -0700, omnyx wrote: >> I have a function f(x,y) and I want to do integration in x for different >> values of y. I have a list of values of y and just run through every >> one. For some values of y the integral might be diverging and right now, >> quad only outputs the standard message >> "Warning: The integral is probably divergent, or slowly convergent." 
>> >> Is there a way to see the specific value of y for which the integral was >> divergent, along with the output warning message? > > Set the full_output=True option, and look what goes into the `infodict` > output argument. > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/gettind-scipy.quad-to-report-values-where-integral-is-divergent-tp29930881p29936688.html Sent from the Scipy-User mailing list archive at Nabble.com. From ognjen.i at gmail.com Mon Oct 11 14:35:21 2010 From: ognjen.i at gmail.com (omnyx) Date: Mon, 11 Oct 2010 11:35:21 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Convention and preferred practice when importing scipy Message-ID: <29936740.post@talk.nabble.com> Upon reading http://www.scipy.org/Getting_Started#head-eefd39e27392d16c48eb5350bc05b4b7045ffcd9 is it then true that using import scipy as sc is as fast as import numpy as np ? (sc,np just shortcut names) I only need a couple of scipy built-in functions but too many to actually list them all, and not all are part of numpy. I know I should be avoiding from scipy import * thanks -- View this message in context: http://old.nabble.com/Convention-and-preferred-practice-when-importing-scipy-tp29936740p29936740.html Sent from the Scipy-User mailing list archive at Nabble.com. From robert.kern at gmail.com Mon Oct 11 14:44:33 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Oct 2010 13:44:33 -0500 Subject: [SciPy-User] [SciPy-user] Convention and preferred practice when importing scipy In-Reply-To: <29936740.post@talk.nabble.com> References: <29936740.post@talk.nabble.com> Message-ID: On Mon, Oct 11, 2010 at 13:35, omnyx wrote: > > Upon reading > http://www.scipy.org/Getting_Started#head-eefd39e27392d16c48eb5350bc05b4b7045ffcd9 > > is it then true that using > import scipy as sc > is as fast as > import numpy as np ? > (sc,np just shortcut names) > > I only need a couple of scipy built-in functions but too many to actually > list them all, and not all are part of numpy. I know I should be avoiding > from scipy import * Which functions? A few from the scipy.special subpackage? "import scipy" really imports nothing other than numpy under a different namespace and with some different defaults exposed (for various historical reasons). You almost never want to do that. If you need things from one of the subpackages, you will need to import those subpackages explicitly, e.g. "from scipy import special". That will take a little more time than importing just numpy, but not as much time as it would to import from all of the subpackages one by one. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From ognjen.i at gmail.com Mon Oct 11 14:54:36 2010 From: ognjen.i at gmail.com (omnyx) Date: Mon, 11 Oct 2010 11:54:36 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Convention and preferred practice when importing scipy In-Reply-To: References: <29936740.post@talk.nabble.com> Message-ID: <29936879.post@talk.nabble.com> You're right - I see that some of the functions are actually numpy functions. 
So, is there a difference between just importing scipy (import scipy) or just numpy (import numpy), except for the scipy.special subpackage package that also come with scipy (that you mentioned). And which is preferred? thanks for the explanation! Robert Kern-2 wrote: > > On Mon, Oct 11, 2010 at 13:35, omnyx wrote: >> >> Upon reading >> http://www.scipy.org/Getting_Started#head-eefd39e27392d16c48eb5350bc05b4b7045ffcd9 >> >> is it then true that using >> import scipy as sc >> is as fast as >> import numpy as np ? >> (sc,np just shortcut names) >> >> I only need a couple of scipy built-in functions but too many to actually >> list them all, and not all are part of numpy. I know I should be avoiding >> from scipy import * > > Which functions? A few from the scipy.special subpackage? "import > scipy" really imports nothing other than numpy under a different > namespace and with some different defaults exposed (for various > historical reasons). You almost never want to do that. If you need > things from one of the subpackages, you will need to import those > subpackages explicitly, e.g. "from scipy import special". That will > take a little more time than importing just numpy, but not as much > time as it would to import from all of the subpackages one by one. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Convention-and-preferred-practice-when-importing-scipy-tp29936740p29936879.html Sent from the Scipy-User mailing list archive at Nabble.com. From robert.kern at gmail.com Mon Oct 11 15:02:19 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Oct 2010 14:02:19 -0500 Subject: [SciPy-User] [SciPy-user] Convention and preferred practice when importing scipy In-Reply-To: <29936879.post@talk.nabble.com> References: <29936740.post@talk.nabble.com> <29936879.post@talk.nabble.com> Message-ID: On Mon, Oct 11, 2010 at 13:54, omnyx wrote: > > You're right - I see that some of the functions are actually numpy functions. > So, is there a difference between just importing scipy (import scipy) or > just numpy (import numpy), except for the scipy.special subpackage package > that also come with scipy (that you mentioned). > And which is preferred? The scipy/__init__.py file exposes almost everything that numpy/__init__.py does, but it replaces some of the math functions with some alternate versions from numpy.lib.scimath. The reasons for this are historical and shouldn't concern you. If you want those versions, import them from numpy.lib.scimath yourself. If you need stuff from numpy, import it from numpy. If you need stuff from a scipy subpackage (e.g. from scipy.special), import it from that subpackage. Never just do "import scipy". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From ptittmann at gmail.com Mon Oct 11 21:28:49 2010 From: ptittmann at gmail.com (Peter Tittmann) Date: Mon, 11 Oct 2010 18:28:49 -0700 Subject: [SciPy-User] scipy.ODR example Message-ID: Hi, My intention is to generate a surface normal for point (x,y,z) on a 3 dimensional surface. I would like to use the orthogonal distance regression package to create a surface from the focal point and 2 (or more) nearby coordinate points. I am certain his is a simple task in ODR but I dont exactly understand how to formulate the Model. Any suggestions/examples would be most appreciated. Best regards Peter Tittmann From aarchiba at physics.mcgill.ca Tue Oct 12 01:20:13 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 12 Oct 2010 07:20:13 +0200 Subject: [SciPy-User] scipy.KDTree.query ball tree In-Reply-To: References: Message-ID: > Would it be possible to search the cKDTree with the python > query_ball_tree? My application could use the efficiency of the > cKDTree ?but I'm not clear how I query distance instead of # of points > using the existing methods in cKDTree. Unfortunately, the cKDTree does not support "give me all neighbours within distance d"; the basic reason is that this must return python lists, which for some reason I didn't want to include in compiled code. This should be fixed, but unfortunately I don't have time to add it right now. (It wouldn't be too hard.) I should check, though: are you looking to query all neighbours of a single point (or unstructured list of points), or of a tree? If the former, what you want is query_ball, not query_ball_tree. It's still possible to use cKDTree to find all neighbours, though it's not as efficient as it might be. The first thing to point out is that the .query method supports an upper limit on the distance. So you can ask for "up to m neighbours within distance d". An array is still returned, so that the missing neighbours must be indicated with infinite distances and invalid indices. But if you were determined, you could run a query with up to (say) ten neighbours and a fixed distance, then re-query any points that actually had the full ten neighbours. Not terribly efficient, but if you have many points still possibly faster than the python KDTree. Or not. Try the python one before going to heroic lengths. Anne From matthew.brett at gmail.com Tue Oct 12 01:40:46 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 11 Oct 2010 22:40:46 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CAC3518.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, On Wed, Oct 6, 2010 at 6:36 AM, Dharhas Pothina wrote: > > fun fun fun. Oh yes. > I can't see the second 'System' variable in Matlab R14. No, I guess matlab does the same as we do, and just overwrites with the last occurrence of the variable. > In the meanwhile is there a work around to load the rest of the data file into scipy. Yes - I can now read both your files into scipy. I just did 'fieldname', '_1_fieldname', '_2_fieldname' ... for the duplicates, so it's a bit of a hidden feature, but it does seem to work. You need to clone my github copy: git clone git://github.com/matthew-brett/scipy-work.git git checkout -b mio-read-fix origin/mio-read-fix and install scipy from there. Does that work for you? 
Incidentally, if you see the scipy/io/matlab/mio5.py file you'll see a function 'varmats_from_mat' that takes a given open mat file and returns a list of individual variables as (name, mat_stream) tuples, which you can then read out with loadmat. On the lines of: from scipy.io.matlab.mio5 import varmats_from_mat varmats = varmats_from_mat(open('strange_file.mat', 'rb')) and then first_vars = loadmat(varmats[0][1]) That last will give you the contents of the first variable. > Is it possible from a technical point of view to mimic the matlab load command that lets you specify which variables to load: Yes, almost anything is possible, but, oh, a little bit annoying to implement unless it's really a killer feature for someone. I'm hoping that now we can read your files, maybe it's a bit less killer for you? You can already specify a variable name (not a regexp), so for example: from scipy.io.matlab.mio5 import MatFile5Reader rdr = MatFile5Reader(open('20100819125947.mat', 'rb')) var = rdr.get_variables(['System']) will already return you the first 'System' variable, that would otherwise be overwritten when you read the full set of variables. You are right, that should be in the loadmat API somewhere. See you, Matthew From matthew.brett at gmail.com Tue Oct 12 02:51:16 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 11 Oct 2010 23:51:16 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi again, > You can already specify a variable name (not a regexp), so for example: > > from scipy.io.matlab.mio5 import MatFile5Reader > rdr = MatFile5Reader(open('20100819125947.mat', 'rb')) > var = rdr.get_variables(['System']) > > will already return you the first 'System' variable, that would > otherwise be overwritten when you read the full set of variables. ?You > are right, that should be in the loadmat API somewhere. Actually, I just did add that - in the same branch ('mio-read-fix') In [5]: import scipy.io as sio In [6]: vars = sio.loadmat('20100819125947.mat', variable_names=['System']) will return just the first occurrence of a variable named 'System' in the file. Best, Matthew From georges.schutz at internet.lu Tue Oct 12 02:59:04 2010 From: georges.schutz at internet.lu (Georges Schutz) Date: Tue, 12 Oct 2010 08:59:04 +0200 Subject: [SciPy-User] scikits.timeseries: How to define frequency of 15minutes In-Reply-To: <7A83B122-BB03-4E2F-AAD8-9F2F44666FB4@gmail.com> References: <4BF13906.9030707@internet.lu> <3CC2BB33-A226-48E5-80AE-2AE08E77551A@gmail.com> <7A83B122-BB03-4E2F-AAD8-9F2F44666FB4@gmail.com> Message-ID: I found some time to have a look at the virtual environments and this is a very interesting extension that I did not know yet. Thanks for that orientation. Concerning the sandbox I have a problem as scikits.timeseries is linked against numpy that is installed as binary on my winxp python environment and compiled using MSVC++. I only use MinGW and do not have MSVC++. Is there a possibility to overcome this situation? Do I need to rebuild numpy using MinGW (is this even possible)? What other dependencies will I encounter? I apologise for all this problems. Perhaps I should wait until the new official version of scikits.timeseries comes out. 
Georges Schutz On 28/09/2010 10:38, Pierre GM wrote: > > On Sep 28, 2010, at 9:40 AM, Georges Schutz wrote: >> >> Thank you for this development. I would like to analyse the sandbox >> branch but how can I do this without impacting my stable >> scikits.timeseries installation. >> I can clone the git repository into an local directory but it seems to >> me that I need to install the the package using >> python setup.py install >> And this will probably impact my stable scikits.timeseries no? > > Yes. > >> As I use Win-XP as OS it is recommended to use the pre-build installer >> (README.txt). What do you suggest me to do for testing and analysing >> your sandbox branch? > > Use a virtual environment. > http://pypi.python.org/pypi/virtualenv > > I advise you to use the following wrappers to simplify handling multiple virtualenvs > http://www.doughellmann.com/projects/virtualenvwrapper/ > > It's quite straightforward, but don't hesitate to contact me if you have any question. From fperez.net at gmail.com Tue Oct 12 03:10:40 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 12 Oct 2010 00:10:40 -0700 Subject: [SciPy-User] [ANN] IPython 0.10.1 is out. Message-ID: Hi all, we've just released IPython 0.10.1, full release notes are below. Downloads in source and windows binary form are available in the usual location: http://ipython.scipy.org/dist/ But since our switch to github, we also get automatic distribution of archives there: http://github.com/ipython/ipython/archives/rel-0.10.1 and we've also started uploading archives to the Python Package Index (which easy_install will use by default): http://pypi.python.org/pypi/ipython so at any time you should find a location with good download speeds. You can find the full documentation at: http://ipython.scipy.org/doc/rel-0.10.1/html/index.html Enjoy! Fernando (on behalf of the whole IPython team) Release 0.10.1 ============== IPython 0.10.1 was released October 11, 2010, over a year after version 0.10. This is mostly a bugfix release, since after version 0.10 was released, the development team's energy has been focused on the 0.11 series. We have nonetheless tried to backport what fixes we could into 0.10.1, as it remains the stable series that many users have in production systems they rely on. Since the 0.11 series changes many APIs in backwards-incompatible ways, we are willing to continue maintaining the 0.10.x series. We don't really have time to actively write new code for 0.10.x, but we are happy to accept patches and pull requests on the IPython `github site`_. If sufficient contributions are made that improve 0.10.1, we will roll them into future releases. For this purpose, we will have a branch called 0.10.2 on github, on which you can base your contributions. .. _github site: http://github.com/ipython For this release, we applied approximately 60 commits totaling a diff of over 7000 lines:: (0.10.1)amirbar[dist]> git diff --oneline rel-0.10.. | wc -l 7296 Highlights of this release: - The only significant new feature is that IPython's parallel computing machinery now supports natively the Sun Grid Engine and LSF schedulers. This work was a joint contribution from Justin Riley, Satra Ghosh and Matthieu Brucher, who put a lot of work into it. We also improved traceback handling in remote tasks, as well as providing better control for remote task IDs. - New IPython Sphinx directive. 
You can use this directive to mark blocks in reSructuredText documents as containig IPython syntax (including figures) and the will be executed during the build:: .. ipython:: In [2]: plt.figure() # ensure a fresh figure @savefig psimple.png width=4in In [3]: plt.plot([1,2,3]) Out[3]: [] - Various fixes to the standalone ipython-wx application. - We now ship internally the excellent argparse library, graciously licensed under BSD terms by Steven Bethard. Now (2010) that argparse has become part of Python 2.7 this will be less of an issue, but Steven's relicensing allowed us to start updating IPython to using argparse well before Python 2.7. Many thanks! - Robustness improvements so that IPython doesn't crash if the readline library is absent (though obviously a lot of functionality that requires readline will not be available). - Improvements to tab completion in Emacs with Python 2.6. - Logging now supports timestamps (see ``%logstart?`` for full details). - A long-standing and quite annoying bug where parentheses would be added to ``print`` statements, under Python 2.5 and 2.6, was finally fixed. - Improved handling of libreadline on Apple OSX. - Fix ``reload`` method of IPython demos, which was broken. - Fixes for the ipipe/ibrowse system on OSX. - Fixes for Zope profile. - Fix %timeit reporting when the time is longer than 1000s. - Avoid lockups with ? or ?? in SunOS, due to a bug in termios. - The usual assortment of miscellaneous bug fixes and small improvements. The following people contributed to this release (please let us know if we omitted your name and we'll gladly fix this in the notes for the future): * Beni Cherniavsky * Boyd Waters. * David Warde-Farley * Fernando Perez * G?khan Sever * Justin Riley * Kiorky * Laurent Dufrechou * Mark E. Smith * Matthieu Brucher * Satrajit Ghosh * Sebastian Busch * V?clav ?milauer From nico.schloemer at gmail.com Tue Oct 12 04:18:24 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Tue, 12 Oct 2010 10:18:24 +0200 Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values Message-ID: Hi all, I do computations with a sparse matrix the values of which need to change, but never the sparsity structure. Until now, every time the values change, I tossed away the old matrix, create a new lil_matrix, and then use .tocsr() to get what I want. This seems really wasteful, though, as this way, all the sparsity information is computed anew. I've tried setting the values in the csr_matrix -- one created -- directly, but even though they go into spots where there's a nonzero already, this operation seems slow. Also, I don't know of a way to zero-out a csr_matrix while keeping the sparsity information. Any hints on this? Cheers, Nico From cimrman3 at ntc.zcu.cz Tue Oct 12 04:22:32 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Oct 2010 10:22:32 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: Hi Nico, On Tue, 12 Oct 2010, Nico Schl?mer wrote: > Hi all, > > I do computations with a sparse matrix the values of which need to > change, but never the sparsity structure. > Until now, every time the values change, I tossed away the old matrix, > create a new lil_matrix, and then use .tocsr() to get what I want. > This seems really wasteful, though, as this way, all the sparsity > information is computed anew. 
> I've tried setting the values in the csr_matrix -- one created -- > directly, but even though they go into spots where there's a nonzero > already, this operation seems slow. Also, I don't know of a way to > zero-out a csr_matrix while keeping the sparsity information. > > Any hints on this? The csr_matrix has a data attribute, so zeroing is easy: mtx.data[:] = 0.0 But then do not perform operations that would create a new matrix, as the zero entries would be thrown away. As for the speed issues, can you be more specific? r. From nico.schloemer at gmail.com Tue Oct 12 04:39:37 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Tue, 12 Oct 2010 10:39:37 +0200 Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: Hi, > mtx.data[:] = 0.0 Works, thanks. > But then do not perform operations that would create a new matrix, as the > zero entries would be thrown away. What are those? I do something along the lines of ================================ *snip* ================================ A.data[:] = 0.0 for element in self.mesh.elements: for edge in element.edges: A[ edge[0], edge[0] ] += value A[ edge[0], edge[1] ] += value2 .... ================================ *snap* ================================ but this generates the warning ================================ *snip* ================================ /opt/scipy/0.8.0/lib/python/scipy/sparse/compressed.py:500: SparseEfficiencyWarning: changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. SparseEfficiencyWarning) ================================ *snap* ================================ and matrix filling takes about five times as long as compared to a lil_matrix with subsequent .tocsr(). --Nico On Tue, Oct 12, 2010 at 10:22 AM, Robert Cimrman wrote: > Hi Nico, > > On Tue, 12 Oct 2010, Nico Schl?mer wrote: > >> Hi all, >> >> I do computations with a sparse matrix the values of which need to >> change, but never the sparsity structure. >> Until now, every time the values change, I tossed away the old matrix, >> create a new lil_matrix, and then use .tocsr() to get what I want. >> This seems really wasteful, though, as this way, all the sparsity >> information is computed anew. >> I've tried setting the values in the csr_matrix -- one created -- >> directly, but even though they go into spots where there's a nonzero >> already, this operation seems slow. Also, I don't know of a way to >> zero-out a csr_matrix while keeping the sparsity information. >> >> Any hints on this? > > The csr_matrix has a data attribute, so zeroing is easy: > > mtx.data[:] = 0.0 > > But then do not perform operations that would create a new matrix, as the > zero entries would be thrown away. > > As for the speed issues, can you be more specific? > > r. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From cimrman3 at ntc.zcu.cz Tue Oct 12 04:49:00 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Oct 2010 10:49:00 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: On Tue, 12 Oct 2010, Nico Schl?mer wrote: > Hi, > >> mtx.data[:] = 0.0 > Works, thanks. > >> But then do not perform operations that would create a new matrix, as the >> zero entries would be thrown away. > > What are those? 
I do something along the lines of > > ================================ *snip* ================================ > A.data[:] = 0.0 > for element in self.mesh.elements: > for edge in element.edges: > A[ edge[0], edge[0] ] += value > A[ edge[0], edge[1] ] += value2 > .... > ================================ *snap* ================================ Finite element assembling, right? > but this generates the warning > > ================================ *snip* ================================ > /opt/scipy/0.8.0/lib/python/scipy/sparse/compressed.py:500: > SparseEfficiencyWarning: changing the sparsity structure of a > csr_matrix is expensive. lil_matrix is more efficient. > SparseEfficiencyWarning) > ================================ *snap* ================================ > > and matrix filling takes about five times as long as compared to a > lil_matrix with subsequent .tocsr(). It seems that some of the entries you expect to exist do not exist. The following works for me (scipy 0.9.0.dev6812): In [1]: import scipy.sparse as sps In [2]: a = sps.csr_matrix(sps.eye(5, 5)) In [3]: a Out[3]: <5x5 sparse matrix of type '' with 5 stored elements in Compressed Sparse Row format> In [4]: a.data[:] = 0 In [5]: a Out[5]: <5x5 sparse matrix of type '' with 5 stored elements in Compressed Sparse Row format> In [6]: print a ------> print(a) (0, 0) 0.0 (1, 1) 0.0 (2, 2) 0.0 (3, 3) 0.0 (4, 4) 0.0 In [7]: a[2, 2] += 4 In [8]: a Out[8]: <5x5 sparse matrix of type '' with 5 stored elements in Compressed Sparse Row format> In [9]: print a ------> print(a) (0, 0) 0.0 (1, 1) 0.0 (2, 2) 4.0 (3, 3) 0.0 (4, 4) 0.0 r. > On Tue, Oct 12, 2010 at 10:22 AM, Robert Cimrman wrote: >> Hi Nico, >> >> On Tue, 12 Oct 2010, Nico Schl?mer wrote: >> >>> Hi all, >>> >>> I do computations with a sparse matrix the values of which need to >>> change, but never the sparsity structure. >>> Until now, every time the values change, I tossed away the old matrix, >>> create a new lil_matrix, and then use .tocsr() to get what I want. >>> This seems really wasteful, though, as this way, all the sparsity >>> information is computed anew. >>> I've tried setting the values in the csr_matrix -- one created -- >>> directly, but even though they go into spots where there's a nonzero >>> already, this operation seems slow. Also, I don't know of a way to >>> zero-out a csr_matrix while keeping the sparsity information. >>> >>> Any hints on this? >> >> The csr_matrix has a data attribute, so zeroing is easy: >> >> mtx.data[:] = 0.0 >> >> But then do not perform operations that would create a new matrix, as the >> zero entries would be thrown away. >> >> As for the speed issues, can you be more specific? >> >> r. >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nico.schloemer at gmail.com Tue Oct 12 05:42:04 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Tue, 12 Oct 2010 11:42:04 +0200 Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: > Finite element assembling, right? Finite volumes, but yeah, the structure is really similar. > It seems that some of the entries you expect to exist do not exist. The > following works for me (scipy 0.9.0.dev6812): Correct. 
At the creating of the lil_matrix, some of the values were in fact 0 and got stripped upon .tocsr(). I can exclude this case now and run the code without the performance warning. Still, it runs a lot slower when the csr_matrix is filled directly. The following snippet may help making clear what I'm talking about: The second loop takes about twenty (!) times as long when running this on my computer. =================================== *snip* =================================== # -*- coding: utf-8 -*- from scipy import sparse import time n = 10000 # ------------------------------------------------------------------------------ starttime = time.clock() A = sparse.lil_matrix( ( n, n ) ) for k in range( 1, n-1 ): A[k,k-1] -= 0.5 A[k,k] += 1.0 A[k,k+1] -= 0.5 A = A.tocsr() endtime = time.clock() print endtime - starttime # ------------------------------------------------------------------------------ A.data[:] = 0.0 starttime = time.clock() for k in range( 1, n-1 ): A[k,k-1] -= 0.5 A[k,k] += 1.0 A[k,k+1] -= 0.5 endtime = time.clock() print endtime - starttime # ------------------------------------------------------------------------------ =================================== *snap* =================================== --Nico From cimrman3 at ntc.zcu.cz Tue Oct 12 06:36:59 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 12 Oct 2010 12:36:59 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: On Tue, 12 Oct 2010, Nico Schl?mer wrote: >> Finite element assembling, right? > > Finite volumes, but yeah, the structure is really similar. Oh yes. >> It seems that some of the entries you expect to exist do not exist. The >> following works for me (scipy 0.9.0.dev6812): > > Correct. At the creating of the lil_matrix, some of the values were in > fact 0 and got stripped upon .tocsr(). I can exclude this case now and > run the code without the performance warning. > Still, it runs a lot slower when the csr_matrix is filled directly. > The following snippet may help making clear what I'm talking about: > The second loop takes about twenty (!) times as long when running this > on my computer. Same on mine. IMHO it's because element access in CS* matrices is rather complex to allow some fancy indexing and slices, so accessing a single element has a significant overhead. LIL, on the other hand is designed to behave well with single element access. r. > =================================== *snip* =================================== > # -*- coding: utf-8 -*- > > from scipy import sparse > import time > > n = 10000 > > # ------------------------------------------------------------------------------ > starttime = time.clock() > A = sparse.lil_matrix( ( n, n ) ) > for k in range( 1, n-1 ): > A[k,k-1] -= 0.5 > A[k,k] += 1.0 > A[k,k+1] -= 0.5 > A = A.tocsr() > endtime = time.clock() > print endtime - starttime > # ------------------------------------------------------------------------------ > A.data[:] = 0.0 > starttime = time.clock() > for k in range( 1, n-1 ): > A[k,k-1] -= 0.5 > A[k,k] += 1.0 > A[k,k+1] -= 0.5 > endtime = time.clock() > print endtime - starttime > # ------------------------------------------------------------------------------ > =================================== *snap* =================================== From bblais at bryant.edu Tue Oct 12 08:12:41 2010 From: bblais at bryant.edu (Brian Blais) Date: Tue, 12 Oct 2010 08:12:41 -0400 Subject: [SciPy-User] odeint with saturation? 
Message-ID: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> Hello, I am using odeint to solve some diff eqs, and it works great, but some of my cases have saturating values. In many cases a value can't go negative (if it does, it should just be set equal to zero). It doesn't seem as if odeint can do this, but is there an easy or preferred way of solving that kind of system? thanks! Brian Blais -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais http://bblais.blogspot.com/ From Dharhas.Pothina at twdb.state.tx.us Tue Oct 12 09:12:23 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 12 Oct 2010 08:12:23 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CB41867.63BA.009B.1@twdb.state.tx.us> Mathew, Many thanks. I'll test this out as soon as I can. I think for now I will go with the MatFile5Reader approach and just load the variables I need, since from what I understand this approach will work with the current version of scipy. I need to deploy this script across several users and I prefer to install release versions of Scipy on their machines. I'll transition to loadmat once that makes it into the stable release. Will these fixes make it into the next release of scipy? - dharhas >>> Matthew Brett 10/12/2010 1:51 AM >>> Hi again, > You can already specify a variable name (not a regexp), so for example: > > from scipy.io.matlab.mio5 import MatFile5Reader > rdr = MatFile5Reader(open('20100819125947.mat', 'rb')) > var = rdr.get_variables(['System']) > > will already return you the first 'System' variable, that would > otherwise be overwritten when you read the full set of variables. You > are right, that should be in the loadmat API somewhere. Actually, I just did add that - in the same branch ('mio-read-fix') In [5]: import scipy.io as sio In [6]: vars = sio.loadmat('20100819125947.mat', variable_names=['System']) will return just the first occurrence of a variable named 'System' in the file. Best, Matthew _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From Torkild.Tveraa at nina.no Tue Oct 12 09:10:04 2010 From: Torkild.Tveraa at nina.no (Tveraa, Torkild) Date: Tue, 12 Oct 2010 13:10:04 +0000 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> Message-ID: <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Dear All, I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. 
Some helps & hints would be most appreciated :-) Cheers, Torkild ------------------------------------------------------- import numpy import pylab from scipy import * from scipy import optimize ## This is y-data: y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) ## This is x-data: t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) ## This is the equation: fitfunc = lambda p, x: p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) ## errfunc = lambda p, x, y: fitfunc(p,x) -y guess = [0, max(y_data), 0.1, 140, -0.1, 270] bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] ## This seems to work ok: p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) print 'Estimates from leastsq \n', p2,success ## But this does not: best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) From matthew.brett at gmail.com Tue Oct 12 15:34:52 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 12 Oct 2010 12:34:52 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CB41867.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> <4CB41867.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > Many thanks. I'll test this out as soon as I can. I think for now I will go with the MatFile5Reader approach and just load the variables I need, since from what I understand this approach will work with the current version of scipy. I need to deploy this script across several users and I prefer to install release versions of Scipy on their machines. I'll transition to loadmat once that makes it into the stable release. Ah - no - so there are three separate problems in your example file: a) Duplicate variable names b) Strings that have declared length > 0, and allocated storage of 0. c) Structs with duplicate fieldnames. For a) you can do something about this in current scipy if you want the first (rather than the last) version of a particular variable, by doing the MatFileReader trick. But that's your smallest problem because b) and c) crash the scipy reader. So, you can probably pull out any variables not suffering from b) or c), but that's all - with current scipy. > Will these fixes make it into the next release of scipy? Yes, they should be. I will push them into the current svn version for others to try and we'll see what damage I have done ;) Cheers, Matthew From Dharhas.Pothina at twdb.state.tx.us Tue Oct 12 16:00:22 2010 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 12 Oct 2010 15:00:22 -0500 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> <4CB41867.63BA.009B.1@twdb.state.tx.us> Message-ID: <4CB47806.63BA.009B.1@twdb.state.tx.us> > Ah - no - so there are three separate problems in your example file: > > a) Duplicate variable names > b) Strings that have declared length > 0, and allocated storage of 0. > c) Structs with duplicate fieldnames. 
> > For a) you can do something about this in current scipy if you want > the first (rather than the last) version of a particular variable, by > doing the MatFileReader trick. But that's your smallest problem > because b) and c) crash the scipy reader. So, you can probably pull > out any variables not suffering from b) or c), but that's all - with > current scipy. >From my initial examination of the data files all of these issues occur with variables I am not planning to use in my script, so I *should* be ok. Again thanks for the sleuthing and the fixes. I think the responses I get from this community are an order of magnitude more helpful than those I get for products we have commercial support contracts for. - dharhas From matthew.brett at gmail.com Tue Oct 12 16:11:35 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 12 Oct 2010 13:11:35 -0700 Subject: [SciPy-User] scipy.io.loadmat error when trying to read a .mat file In-Reply-To: <4CB47806.63BA.009B.1@twdb.state.tx.us> References: <4CA99095.63BA.009B.1@twdb.state.tx.us> <4CA9AB60.63BA.009B.1@twdb.state.tx.us> <4CAAE819.63BA.009B.1@twdb.state.tx.us> <4CAC3518.63BA.009B.1@twdb.state.tx.us> <4CB41867.63BA.009B.1@twdb.state.tx.us> <4CB47806.63BA.009B.1@twdb.state.tx.us> Message-ID: Hi, > Again thanks for the sleuthing and the fixes. I think the responses I get from this community are an order of magnitude more helpful than those I get for products we have commercial support contracts for. Well - I hope we can live up to that recommendation ;). Best, Matthew From dagss at student.matnat.uio.no Wed Oct 13 02:01:07 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 13 Oct 2010 08:01:07 +0200 Subject: [SciPy-User] scipy.sparse.csr_matrix: Refills with same graph, different values In-Reply-To: References: Message-ID: <4CB54B23.7080708@student.matnat.uio.no> On 10/12/2010 12:36 PM, Robert Cimrman wrote: > On Tue, 12 Oct 2010, Nico Schl?mer wrote: > >>> Finite element assembling, right? >> >> Finite volumes, but yeah, the structure is really similar. > > Oh yes. > >>> It seems that some of the entries you expect to exist do not exist. The >>> following works for me (scipy 0.9.0.dev6812): >> >> Correct. At the creating of the lil_matrix, some of the values were in >> fact 0 and got stripped upon .tocsr(). I can exclude this case now and >> run the code without the performance warning. >> Still, it runs a lot slower when the csr_matrix is filled directly. >> The following snippet may help making clear what I'm talking about: >> The second loop takes about twenty (!) times as long when running this >> on my computer. > > Same on mine. IMHO it's because element access in CS* matrices is > rather complex to allow some fancy indexing and slices, so accessing a > single element has a significant overhead. LIL, on the other hand is > designed to behave well with single element access. This may be because of Python overhead in this case. For instance, there's this line leading to allocating a new temporary boolean array (__setitem__ in compressed.py): indxs = np.where(minor_index == self.indices[start:end])[0] Cythonizing the item assignment code (and throwing in the special cases for CSC/CSR in another way) may give back that factor of twenty...this could be done in an external routine as well, no need to modify SciPy, just operate on the data, indices, indptr arrays. Dag Sverre > > r. 
> >> =================================== *snip* >> =================================== >> # -*- coding: utf-8 -*- >> >> from scipy import sparse >> import time >> >> n = 10000 >> >> # >> ------------------------------------------------------------------------------ >> >> starttime = time.clock() >> A = sparse.lil_matrix( ( n, n ) ) >> for k in range( 1, n-1 ): >> A[k,k-1] -= 0.5 >> A[k,k] += 1.0 >> A[k,k+1] -= 0.5 >> A = A.tocsr() >> endtime = time.clock() >> print endtime - starttime >> # >> ------------------------------------------------------------------------------ >> >> A.data[:] = 0.0 >> starttime = time.clock() >> for k in range( 1, n-1 ): >> A[k,k-1] -= 0.5 >> A[k,k] += 1.0 >> A[k,k+1] -= 0.5 >> endtime = time.clock() >> print endtime - starttime >> # >> ------------------------------------------------------------------------------ >> >> =================================== *snap* >> =================================== > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Wed Oct 13 09:09:46 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 13 Oct 2010 09:09:46 -0400 Subject: [SciPy-User] any idea what this message means? Message-ID: /usr/lib/python2.6/site-packages/scikits.cuda-0.02- py2.6.egg/scikits/__init__.py:1: UserWarning: Module daemon was already imported from /home/nbecker/scma-ldpc-lbd/test/daemon.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path __import__('pkg_resources').declare_namespace(__name__) From nico.schloemer at gmail.com Wed Oct 13 09:55:42 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 15:55:42 +0200 Subject: [SciPy-User] scipy.sparse.linalg.eigen error code? Message-ID: Hi, I'm using scipy.sparse.linalg.eigen to compute the lowest magnitude eigenvalue of a matrix, and I noticed that sometimes the code would return 0+0j where I really didn't expect it. Turns out that tweaking the number of iterations results in some more meaningful value here, suggesting that the underlying Arnoldi (ARPACK?) iteration failed with the maxiter value previously given. As far as I can see, there's no way to tell that the iteration actually failed. Is that correct? Cheers, Nico From nico.schloemer at gmail.com Wed Oct 13 10:21:21 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 16:21:21 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? 
Message-ID: Hi, I thought I give lobpcg a shot, and tried ====================== *snip* ====================== from scipy.sparse.linalg import lobpcg from scipy.sparse import identity import numpy as np n = 10 X = np.zeros( (n,1) ) A = identity( n ) lobpcg( A, X ) ====================== *snap* ====================== On my machine, this yields ====================== *snip* ====================== Traceback (most recent call last): File "logpcg_test.py", line 8, in lobpcg( A, X ) File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", line 304, in lobpcg blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", line 130, in b_orthonormalize gramVBV = sla.cholesky( gramVBV ) File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", line 66, in cholesky c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", line 24, in _cholesky raise LinAlgError("%d-th leading minor not positive definite" % info) numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite ====================== *snap* ====================== Fail! Am I missing a library, or is that routine broken? Cheers, Nico From josef.pktd at gmail.com Wed Oct 13 10:29:27 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Oct 2010 10:29:27 -0400 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer wrote: > Hi, > > I thought I give lobpcg a shot, and tried > > ====================== *snip* ====================== > from scipy.sparse.linalg import lobpcg > from scipy.sparse import identity > import numpy as np > > n = 10 > X = np.zeros( (n,1) ) > A = identity( n ) > lobpcg( A, X ) > ====================== *snap* ====================== > > On my machine, this yields > > ====================== *snip* ====================== > Traceback (most recent call last): > ?File "logpcg_test.py", line 8, in > ? ?lobpcg( A, X ) > ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", > line 304, in lobpcg > ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) > ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", > line 130, in b_orthonormalize > ? ?gramVBV = sla.cholesky( gramVBV ) > ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", > line 66, in cholesky > ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) > ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", > line 24, in _cholesky > ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) > numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite > ====================== *snap* ====================== > > Fail! > > Am I missing a library, or is that routine broken? It looks like a bug if X is all zeros. If at least 1 element of X is non-zero, it seems to work. Josef > > Cheers, > Nico > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nico.schloemer at gmail.com Wed Oct 13 10:50:49 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 16:50:49 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? 
In-Reply-To: References: Message-ID: Really? I tried ones() instead, and got (with verbosity=10) ======================= *snip* ======================= Solving generalized eigenvalue problem with preconditioning matrix size 10 block size 1 No constraints iteration 0 [ True] current block size: 1 eigenvalue: [ 100.] residual norms: [ 990.] iteration 1 [ True] current block size: 1 eigenvalue: [ 0.] residual norms: [ 9.60596010e+12] iteration 2 [ True] current block size: 1 eigenvalue: [ 0.] residual norms: [ 1.63581388e+65] iteration 3 Warning: invalid value encountered in multiply [False] Warning: invalid value encountered in multiply final eigenvalue: [ 0.] final residual norms: [ nan] ======================= *snap* ======================= We're still talking about the identity matrix, so I don't expect this breakdown to be inherent in the method. Cheers, Nico On Wed, Oct 13, 2010 at 4:29 PM, wrote: > On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer > wrote: >> Hi, >> >> I thought I give lobpcg a shot, and tried >> >> ====================== *snip* ====================== >> from scipy.sparse.linalg import lobpcg >> from scipy.sparse import identity >> import numpy as np >> >> n = 10 >> X = np.zeros( (n,1) ) >> A = identity( n ) >> lobpcg( A, X ) >> ====================== *snap* ====================== >> >> On my machine, this yields >> >> ====================== *snip* ====================== >> Traceback (most recent call last): >> ?File "logpcg_test.py", line 8, in >> ? ?lobpcg( A, X ) >> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >> line 304, in lobpcg >> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >> line 130, in b_orthonormalize >> ? ?gramVBV = sla.cholesky( gramVBV ) >> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >> line 66, in cholesky >> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) >> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >> line 24, in _cholesky >> ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) >> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite >> ====================== *snap* ====================== >> >> Fail! >> >> Am I missing a library, or is that routine broken? > > It looks like a bug if X is all zeros. If at least 1 element of X is > non-zero, it seems to work. > > Josef > >> >> Cheers, >> Nico >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nico.schloemer at gmail.com Wed Oct 13 10:58:03 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 16:58:03 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: In fact, for general X = np.random.rand( n,1 ) this thing always blows up -- look at the residual norms. Bug report? Nico On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer wrote: > Really? 
I tried ones() instead, and got (with verbosity=10) > > ======================= *snip* ======================= > Solving generalized eigenvalue problem with preconditioning > > matrix size 10 > block size 1 > > No constraints > > > iteration 0 > [ True] > current block size: 1 > eigenvalue: [ 100.] > residual norms: [ 990.] > iteration 1 > [ True] > current block size: 1 > eigenvalue: [ 0.] > residual norms: [ ?9.60596010e+12] > iteration 2 > [ True] > current block size: 1 > eigenvalue: [ 0.] > residual norms: [ ?1.63581388e+65] > iteration 3 > Warning: invalid value encountered in multiply > [False] > Warning: invalid value encountered in multiply > final eigenvalue: [ 0.] > final residual norms: [ nan] > ======================= *snap* ======================= > > We're still talking about the identity matrix, so I don't expect this > breakdown to be inherent in the method. > > Cheers, > Nico > > > > On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >> wrote: >>> Hi, >>> >>> I thought I give lobpcg a shot, and tried >>> >>> ====================== *snip* ====================== >>> from scipy.sparse.linalg import lobpcg >>> from scipy.sparse import identity >>> import numpy as np >>> >>> n = 10 >>> X = np.zeros( (n,1) ) >>> A = identity( n ) >>> lobpcg( A, X ) >>> ====================== *snap* ====================== >>> >>> On my machine, this yields >>> >>> ====================== *snip* ====================== >>> Traceback (most recent call last): >>> ?File "logpcg_test.py", line 8, in >>> ? ?lobpcg( A, X ) >>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>> line 304, in lobpcg >>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>> line 130, in b_orthonormalize >>> ? ?gramVBV = sla.cholesky( gramVBV ) >>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>> line 66, in cholesky >>> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) >>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>> line 24, in _cholesky >>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) >>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite >>> ====================== *snap* ====================== >>> >>> Fail! >>> >>> Am I missing a library, or is that routine broken? >> >> It looks like a bug if X is all zeros. If at least 1 element of X is >> non-zero, it seems to work. >> >> Josef >> >>> >>> Cheers, >>> Nico >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From kcrisman at gmail.com Wed Oct 13 11:16:04 2010 From: kcrisman at gmail.com (Karl-Dieter Crisman) Date: Wed, 13 Oct 2010 11:16:04 -0400 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues Message-ID: Hi, sorry for the delay. This is on Intel OS X 10.6 (more than one machine). The previous poster knows a lot about build stuff, but doesn't happen to have access to this type of system. 
We have tried the latest fixes to Numpy which just remove ppc64 (or do something smarter), and although that still might end up being an issue, it isn't this problem. I think that Robert's is the correct analysis. We are using gfortran-4.2 on Mac OS X 10.6 as an included binary in Sage to do fortran compiling (which works fine for R and Numpy, and until recently was fine for Scipy as well). Also, I should point out that g95 works great for Scipy 0.8 on our OS X 10.4 machines (we are working on eventually migrating to gfortran, but unfortunately this is difficult because of Sage's "batteries included" philosophy), so different fortran compilers shouldn't be the issue per se, but the extra option appears to be - compiling object code twice, is that correct? I know very little about how compiler options are set, so I don't know where this would be coming from. Where does this "compile options" line usually come from in a typical Scipy build? Is it somewhere in a makefile or something (again, excuse my ignorance if this makes no sense, I'm only guessing blindly). Karl-Dieter Crisman > -I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o Are the multiple "-c" options causing issues? From the build log, it looks like "-c" is being added explicitly somewhere. compile options: '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include -c' Exactly where did the gfortran compiler come from? What version is it? What architecture is the machine doing the build? -- Robert Kern From josef.pktd at gmail.com Wed Oct 13 11:32:31 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Oct 2010 11:32:31 -0400 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: On Wed, Oct 13, 2010 at 10:58 AM, Nico Schl?mer wrote: > In fact, for general > ?X = np.random.rand( n,1 ) > this thing always blows up -- look at the residual norms. > > Bug report? > Nico > > > > On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer wrote: >> Really? I tried ones() instead, and got (with verbosity=10) >> >> ======================= *snip* ======================= >> Solving generalized eigenvalue problem with preconditioning >> >> matrix size 10 >> block size 1 >> >> No constraints >> >> >> iteration 0 >> [ True] >> current block size: 1 >> eigenvalue: [ 100.] >> residual norms: [ 990.] >> iteration 1 >> [ True] >> current block size: 1 >> eigenvalue: [ 0.] >> residual norms: [ ?9.60596010e+12] >> iteration 2 >> [ True] >> current block size: 1 >> eigenvalue: [ 0.] >> residual norms: [ ?1.63581388e+65] >> iteration 3 >> Warning: invalid value encountered in multiply >> [False] >> Warning: invalid value encountered in multiply >> final eigenvalue: [ 0.] >> final residual norms: [ nan] >> ======================= *snap* ======================= >> >> We're still talking about the identity matrix, so I don't expect this >> breakdown to be inherent in the method. 
>> >> Cheers, >> Nico noisy identity matrix seems to work lobpcg( np.diag(1+1e-6*np.random.randn(10)), np.random.randn(10,1), verbosityLevel=10) lobpcg( sparse.csr_matrix(np.diag(1+1e-4*np.random.randn(10))), np.random.randn(10,1), verbosityLevel=10) I'm not sure what this means (since I'm no expert on this) >>> X = np.zeros( (n,1) ) >>> X[-3:]=.1 >>> lobpcg( A, X, verbosityLevel=10) Solving generalized eigenvalue problem with preconditioning matrix size 10 block size 1 No constraints iteration 0 [False] final eigenvalue: [ 1.] final residual norms: [ 1.92296269e-16] (array([ 1.]), array([[ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. ], [ 0.57735027], [ 0.57735027], [ 0.57735027]])) final residual norm = 0. >>> X = np.ones( (n,1) ) >>> lobpcg( A, X, verbosityLevel=10) Solving generalized eigenvalue problem with preconditioning matrix size 10 block size 1 No constraints iteration 0 [False] final eigenvalue: [ 1.] final residual norms: [ 0.] (array([ 1.]), array([[ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777]])) I have no idea if there are some inherent problems with starting values and whether lobpcg is supposed to converge from any starting values. Josef >> >> >> >> On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >>> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >>> wrote: >>>> Hi, >>>> >>>> I thought I give lobpcg a shot, and tried >>>> >>>> ====================== *snip* ====================== >>>> from scipy.sparse.linalg import lobpcg >>>> from scipy.sparse import identity >>>> import numpy as np >>>> >>>> n = 10 >>>> X = np.zeros( (n,1) ) >>>> A = identity( n ) >>>> lobpcg( A, X ) >>>> ====================== *snap* ====================== >>>> >>>> On my machine, this yields >>>> >>>> ====================== *snip* ====================== >>>> Traceback (most recent call last): >>>> ?File "logpcg_test.py", line 8, in >>>> ? ?lobpcg( A, X ) >>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>> line 304, in lobpcg >>>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>> line 130, in b_orthonormalize >>>> ? ?gramVBV = sla.cholesky( gramVBV ) >>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>> line 66, in cholesky >>>> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) >>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>> line 24, in _cholesky >>>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) >>>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite >>>> ====================== *snap* ====================== >>>> >>>> Fail! >>>> >>>> Am I missing a library, or is that routine broken? >>> >>> It looks like a bug if X is all zeros. If at least 1 element of X is >>> non-zero, it seems to work. 
>>> >>> Josef >>> >>>> >>>> Cheers, >>>> Nico >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From waleriantunes at gmail.com Wed Oct 13 11:47:12 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Wed, 13 Oct 2010 12:47:12 -0300 Subject: [SciPy-User] Integral Equation Message-ID: Hi all, Can anyone help me? I need to pass this integral equation for scipy.integrate pack for python. Well, i need to compile this equation in Python, but I do not know how to assemble it in python. Can anyone help me? Thanks, Waleria -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: integral_equation.jpg Type: image/jpeg Size: 11105 bytes Desc: not available URL: From nico.schloemer at gmail.com Wed Oct 13 12:57:57 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 18:57:57 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: Possibly the devs know -- how to get back to them? --Nico On Wed, Oct 13, 2010 at 5:32 PM, wrote: > On Wed, Oct 13, 2010 at 10:58 AM, Nico Schl?mer > wrote: >> In fact, for general >> ?X = np.random.rand( n,1 ) >> this thing always blows up -- look at the residual norms. >> >> Bug report? >> Nico >> >> >> >> On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer wrote: >>> Really? I tried ones() instead, and got (with verbosity=10) >>> >>> ======================= *snip* ======================= >>> Solving generalized eigenvalue problem with preconditioning >>> >>> matrix size 10 >>> block size 1 >>> >>> No constraints >>> >>> >>> iteration 0 >>> [ True] >>> current block size: 1 >>> eigenvalue: [ 100.] >>> residual norms: [ 990.] >>> iteration 1 >>> [ True] >>> current block size: 1 >>> eigenvalue: [ 0.] >>> residual norms: [ ?9.60596010e+12] >>> iteration 2 >>> [ True] >>> current block size: 1 >>> eigenvalue: [ 0.] >>> residual norms: [ ?1.63581388e+65] >>> iteration 3 >>> Warning: invalid value encountered in multiply >>> [False] >>> Warning: invalid value encountered in multiply >>> final eigenvalue: [ 0.] >>> final residual norms: [ nan] >>> ======================= *snap* ======================= >>> >>> We're still talking about the identity matrix, so I don't expect this >>> breakdown to be inherent in the method. >>> >>> Cheers, >>> Nico > > noisy identity matrix seems to work > lobpcg( np.diag(1+1e-6*np.random.randn(10)), np.random.randn(10,1), > verbosityLevel=10) > lobpcg( sparse.csr_matrix(np.diag(1+1e-4*np.random.randn(10))), > np.random.randn(10,1), verbosityLevel=10) > > I'm not sure what this means (since I'm no expert on this) > >>>> X = np.zeros( (n,1) ) >>>> X[-3:]=.1 >>>> lobpcg( A, X, verbosityLevel=10) > Solving generalized eigenvalue problem with preconditioning > > matrix size 10 > block size 1 > > No constraints > > > iteration 0 > [False] > final eigenvalue: [ 1.] > final residual norms: [ ?1.92296269e-16] > (array([ 1.]), array([[ 0. ? ? ? ?], > ? ? ? [ 0. ? ? ? ?], > ? ? ? [ 0. ? ? ? ?], > ? ? 
? [ 0. ? ? ? ?], > ? ? ? [ 0. ? ? ? ?], > ? ? ? [ 0. ? ? ? ?], > ? ? ? [ 0. ? ? ? ?], > ? ? ? [ 0.57735027], > ? ? ? [ 0.57735027], > ? ? ? [ 0.57735027]])) > > > final residual norm = 0. > >>>> X = np.ones( (n,1) ) >>>> lobpcg( A, X, verbosityLevel=10) > Solving generalized eigenvalue problem with preconditioning > > matrix size 10 > block size 1 > > No constraints > > > iteration 0 > [False] > final eigenvalue: [ 1.] > final residual norms: [ 0.] > (array([ 1.]), array([[ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777]])) > > I have no idea if there are some inherent problems with starting > values and whether lobpcg is supposed to converge from any starting > values. > > Josef > > >>> >>> >>> >>> On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >>>> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >>>> wrote: >>>>> Hi, >>>>> >>>>> I thought I give lobpcg a shot, and tried >>>>> >>>>> ====================== *snip* ====================== >>>>> from scipy.sparse.linalg import lobpcg >>>>> from scipy.sparse import identity >>>>> import numpy as np >>>>> >>>>> n = 10 >>>>> X = np.zeros( (n,1) ) >>>>> A = identity( n ) >>>>> lobpcg( A, X ) >>>>> ====================== *snap* ====================== >>>>> >>>>> On my machine, this yields >>>>> >>>>> ====================== *snip* ====================== >>>>> Traceback (most recent call last): >>>>> ?File "logpcg_test.py", line 8, in >>>>> ? ?lobpcg( A, X ) >>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>> line 304, in lobpcg >>>>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>> line 130, in b_orthonormalize >>>>> ? ?gramVBV = sla.cholesky( gramVBV ) >>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>> line 66, in cholesky >>>>> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) >>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>> line 24, in _cholesky >>>>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) >>>>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite >>>>> ====================== *snap* ====================== >>>>> >>>>> Fail! >>>>> >>>>> Am I missing a library, or is that routine broken? >>>> >>>> It looks like a bug if X is all zeros. If at least 1 element of X is >>>> non-zero, it seems to work. 
>>>> >>>> Josef >>>> >>>>> >>>>> Cheers, >>>>> Nico >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cimrman3 at ntc.zcu.cz Wed Oct 13 13:09:35 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 13 Oct 2010 19:09:35 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: On Wed, 13 Oct 2010, Nico Schl?mer wrote: > Possibly the devs know -- how to get back to them? If I recall correctly, any non-zero X should be ok. It actually works for me: In [20]: X = np.random.rand( n,1 ) In [21]: lobpcg( A, X ) Out[21]: (array([ 1.]), array([[ 0.24881513], [ 0.35018185], [ 0.47060887], [ 0.0544507 ], [ 0.2243659 ], [ 0.22597527], [ 0.55431323], [ 0.40576857], [ 0.05081581], [ 0.12299473]])) In [23]: lobpcg( A, X ) Out[23]: (array([ 1.]), array([[ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777], [ 0.31622777]])) The eigenvalue is correctly computed (lambda = 1). Since A is a unit matrix and we are solving A x = lambda x, any nonzero x is an eigenvector. r. > --Nico > > > On Wed, Oct 13, 2010 at 5:32 PM, wrote: >> On Wed, Oct 13, 2010 at 10:58 AM, Nico Schl?mer >> wrote: >>> In fact, for general >>> ?X = np.random.rand( n,1 ) >>> this thing always blows up -- look at the residual norms. >>> >>> Bug report? >>> Nico >>> >>> >>> >>> On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer wrote: >>>> Really? I tried ones() instead, and got (with verbosity=10) >>>> >>>> ======================= *snip* ======================= >>>> Solving generalized eigenvalue problem with preconditioning >>>> >>>> matrix size 10 >>>> block size 1 >>>> >>>> No constraints >>>> >>>> >>>> iteration 0 >>>> [ True] >>>> current block size: 1 >>>> eigenvalue: [ 100.] >>>> residual norms: [ 990.] >>>> iteration 1 >>>> [ True] >>>> current block size: 1 >>>> eigenvalue: [ 0.] >>>> residual norms: [ ?9.60596010e+12] >>>> iteration 2 >>>> [ True] >>>> current block size: 1 >>>> eigenvalue: [ 0.] >>>> residual norms: [ ?1.63581388e+65] >>>> iteration 3 >>>> Warning: invalid value encountered in multiply >>>> [False] >>>> Warning: invalid value encountered in multiply >>>> final eigenvalue: [ 0.] >>>> final residual norms: [ nan] >>>> ======================= *snap* ======================= >>>> >>>> We're still talking about the identity matrix, so I don't expect this >>>> breakdown to be inherent in the method. 
>>>> >>>> Cheers, >>>> Nico >> >> noisy identity matrix seems to work >> lobpcg( np.diag(1+1e-6*np.random.randn(10)), np.random.randn(10,1), >> verbosityLevel=10) >> lobpcg( sparse.csr_matrix(np.diag(1+1e-4*np.random.randn(10))), >> np.random.randn(10,1), verbosityLevel=10) >> >> I'm not sure what this means (since I'm no expert on this) >> >>>>> X = np.zeros( (n,1) ) >>>>> X[-3:]=.1 >>>>> lobpcg( A, X, verbosityLevel=10) >> Solving generalized eigenvalue problem with preconditioning >> >> matrix size 10 >> block size 1 >> >> No constraints >> >> >> iteration 0 >> [False] >> final eigenvalue: [ 1.] >> final residual norms: [ ?1.92296269e-16] >> (array([ 1.]), array([[ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0. ? ? ? ?], >> ? ? ? [ 0.57735027], >> ? ? ? [ 0.57735027], >> ? ? ? [ 0.57735027]])) >> >> >> final residual norm = 0. >> >>>>> X = np.ones( (n,1) ) >>>>> lobpcg( A, X, verbosityLevel=10) >> Solving generalized eigenvalue problem with preconditioning >> >> matrix size 10 >> block size 1 >> >> No constraints >> >> >> iteration 0 >> [False] >> final eigenvalue: [ 1.] >> final residual norms: [ 0.] >> (array([ 1.]), array([[ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777]])) >> >> I have no idea if there are some inherent problems with starting >> values and whether lobpcg is supposed to converge from any starting >> values. >> >> Josef >> >> >>>> >>>> >>>> >>>> On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >>>>> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >>>>> wrote: >>>>>> Hi, >>>>>> >>>>>> I thought I give lobpcg a shot, and tried >>>>>> >>>>>> ====================== *snip* ====================== >>>>>> from scipy.sparse.linalg import lobpcg >>>>>> from scipy.sparse import identity >>>>>> import numpy as np >>>>>> >>>>>> n = 10 >>>>>> X = np.zeros( (n,1) ) >>>>>> A = identity( n ) >>>>>> lobpcg( A, X ) >>>>>> ====================== *snap* ====================== >>>>>> >>>>>> On my machine, this yields >>>>>> >>>>>> ====================== *snip* ====================== >>>>>> Traceback (most recent call last): >>>>>> ?File "logpcg_test.py", line 8, in >>>>>> ? ?lobpcg( A, X ) >>>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>> line 304, in lobpcg >>>>>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>> line 130, in b_orthonormalize >>>>>> ? ?gramVBV = sla.cholesky( gramVBV ) >>>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>> line 66, in cholesky >>>>>> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) >>>>>> ?File "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>> line 24, in _cholesky >>>>>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % info) >>>>>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive definite >>>>>> ====================== *snap* ====================== >>>>>> >>>>>> Fail! >>>>>> >>>>>> Am I missing a library, or is that routine broken? >>>>> >>>>> It looks like a bug if X is all zeros. If at least 1 element of X is >>>>> non-zero, it seems to work. 
>>>>> >>>>> Josef >>>>> >>>>>> >>>>>> Cheers, >>>>>> Nico >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cimrman3 at ntc.zcu.cz Wed Oct 13 13:14:57 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 13 Oct 2010 19:14:57 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.linalg.eigen error code? In-Reply-To: References: Message-ID: On Wed, 13 Oct 2010, Nico Schl?mer wrote: > Hi, > > I'm using scipy.sparse.linalg.eigen to compute the lowest magnitude > eigenvalue of a matrix, and I noticed that sometimes the code would > return 0+0j where I really didn't expect it. Turns out that tweaking > the number of iterations results in some more meaningful value here, > suggesting that the underlying Arnoldi (ARPACK?) iteration failed with > the maxiter value previously given. As far as I can see, there's no > way to tell that the iteration actually failed. > > Is that correct? I have stumbled upon this problem just yesterday, so I can confirm that. Sometimes it helped to request more than one eigenvalue, or tweaking the ncv parameter. r. From massimodisasha at gmail.com Wed Oct 13 13:52:44 2010 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Wed, 13 Oct 2010 19:52:44 +0200 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 Message-ID: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Hello All i recently installed OSX from scratch on a new mac i'm using the system python that come preinstalled on osx 10.6.4 tring to build numpy form git repository i have no errors, the test gived me : the full build log is : http://www.geofemengineering.it/epilog/numpy_log.txt the nose test : http://www.geofemengineering.it/epilog/numpy_test.txt but tring to build scipy i'm having this error : http://www.geofemengineering.it/epilog/scipy_log.txt have you any clu on what's wrong ? few weeks ago i succesfull build both numnpy and scipy without errors, thanks for any help! Massimo. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Wed Oct 13 14:05:33 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 13 Oct 2010 11:05:33 -0700 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: Hi, On Wed, Oct 13, 2010 at 10:52 AM, Massimo Di Stefano wrote: > Hello All > i recently installed OSX from scratch on a new mac > i'm using the system python that come preinstalled on osx 10.6.4 > tring to build numpy form git repository i have no errors, the test gived me Would you mind trying to build numpy and scipy again but using my slightly patched numpy version: git clone git://github.com/matthew-brett/numpy.git git checkout farchs-from-c cd numpy python setup.py install etc I am hoping that my patch will fix your problem, Best, Matthew From massimodisasha at gmail.com Wed Oct 13 14:28:02 2010 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Wed, 13 Oct 2010 20:28:02 +0200 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: Hi, i checked out the repository, the command : " eckout farchs-from-c " needs to be run inside the numpy directory ? there is the full log (chackout, build, test) : http://http.geofemengineering.it/epilog/numpy_log_test seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) thanks ! Massimo. Il giorno 13/ott/2010, alle ore 20.05, Matthew Brett ha scritto: > Hi, > > On Wed, Oct 13, 2010 at 10:52 AM, Massimo Di Stefano > wrote: >> Hello All >> i recently installed OSX from scratch on a new mac >> i'm using the system python that come preinstalled on osx 10.6.4 >> tring to build numpy form git repository i have no errors, the test gived me > > Would you mind trying to build numpy and scipy again but using my > slightly patched numpy version: > > git clone git://github.com/matthew-brett/numpy.git > git checkout farchs-from-c > cd numpy > python setup.py install > etc > > I am hoping that my patch will fix your problem, > > Best, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Wed Oct 13 14:37:09 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 13 Oct 2010 18:37:09 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.eigen error code? References: Message-ID: Wed, 13 Oct 2010 15:55:42 +0200, Nico Schl?mer wrote: > I'm using scipy.sparse.linalg.eigen to compute the lowest magnitude > eigenvalue of a matrix, and I noticed that sometimes the code would > return 0+0j where I really didn't expect it. Turns out that tweaking the > number of iterations results in some more meaningful value here, > suggesting that the underlying Arnoldi (ARPACK?) iteration failed with > the maxiter value previously given. As far as I can see, there's no way > to tell that the iteration actually failed. > > Is that correct? Yep. It seems to try to raise a warning, though, elif self.info == -1: warnings.warn("Maximum number of iterations taken: %s" % self.iparam[2]) But this is bad practice. You'll only see the warning *once*, and there is indeed no way to catch it. The Arpack interface seems to need some fixing, too. Exceeding the number of iterations is really an error condition. 
The interface also apparently doesn't check that ncv is sensible, so in some cases (small matrices) it can give "Error -3". As a work-around, you can probably use def eigen_fixed(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, return_eigenvectors=True): import numpy as np from scipy.sparse.linalg.interface import aslinearoperator from scipy.sparse.linalg.eigen.arpack.arpack import _UnsymmetricArpackParams A = aslinearoperator(A) if A.shape[0] != A.shape[1]: raise ValueError('expected square matrix (shape=%s)' % (A.shape,)) n = A.shape[0] matvec = lambda x : A.matvec(x) params = _UnsymmetricArpackParams(n, k, A.dtype.char, matvec, sigma, ncv, v0, maxiter, which, tol) if M is not None: raise NotImplementedError("generalized eigenproblem not supported yet") while not params.converged: params.iterate() if params.info == -1: raise RuntimeError("Did not converge") return params.extract(return_eigenvectors) -- Pauli Virtanen From nico.schloemer at gmail.com Wed Oct 13 15:23:19 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 21:23:19 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: > It actually works for > me: Okay, nice. I tried to run the script on another machine too, where it appears to work alright. This exact same code fails on my own machine locally, meaning I'm probably missing some crucial dependencies (of Scipy?). Using Scipy 0.8.0 on Gentoo. Any idea what that could possibly be? Cheers, Nico On Wed, Oct 13, 2010 at 7:09 PM, Robert Cimrman wrote: > On Wed, 13 Oct 2010, Nico Schl?mer wrote: >> >> Possibly the devs know -- how to get back to them? > > If I recall correctly, any non-zero X should be ok. It actually works for > me: > > In [20]: X = np.random.rand( n,1 ) > > In [21]: lobpcg( A, X ) > Out[21]: > (array([ 1.]), > ?array([[ 0.24881513], > ? ? ? [ 0.35018185], > ? ? ? [ 0.47060887], > ? ? ? [ 0.0544507 ], > ? ? ? [ 0.2243659 ], > ? ? ? [ 0.22597527], > ? ? ? [ 0.55431323], > ? ? ? [ 0.40576857], > ? ? ? [ 0.05081581], > ? ? ? [ 0.12299473]])) > > In [23]: lobpcg( A, X ) > Out[23]: > (array([ 1.]), > ?array([[ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777], > ? ? ? [ 0.31622777]])) > > The eigenvalue is correctly computed (lambda = 1). Since A is a unit matrix > and we are solving A x = lambda x, any nonzero x is an eigenvector. > > r. > >> --Nico >> >> >> On Wed, Oct 13, 2010 at 5:32 PM, ? wrote: >>> >>> On Wed, Oct 13, 2010 at 10:58 AM, Nico Schl?mer >>> wrote: >>>> >>>> In fact, for general >>>> ?X = np.random.rand( n,1 ) >>>> this thing always blows up -- look at the residual norms. >>>> >>>> Bug report? >>>> Nico >>>> >>>> >>>> >>>> On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer >>>> wrote: >>>>> >>>>> Really? I tried ones() instead, and got (with verbosity=10) >>>>> >>>>> ======================= *snip* ======================= >>>>> Solving generalized eigenvalue problem with preconditioning >>>>> >>>>> matrix size 10 >>>>> block size 1 >>>>> >>>>> No constraints >>>>> >>>>> >>>>> iteration 0 >>>>> [ True] >>>>> current block size: 1 >>>>> eigenvalue: [ 100.] >>>>> residual norms: [ 990.] >>>>> iteration 1 >>>>> [ True] >>>>> current block size: 1 >>>>> eigenvalue: [ 0.] >>>>> residual norms: [ ?9.60596010e+12] >>>>> iteration 2 >>>>> [ True] >>>>> current block size: 1 >>>>> eigenvalue: [ 0.] 
>>>>> residual norms: [ ?1.63581388e+65] >>>>> iteration 3 >>>>> Warning: invalid value encountered in multiply >>>>> [False] >>>>> Warning: invalid value encountered in multiply >>>>> final eigenvalue: [ 0.] >>>>> final residual norms: [ nan] >>>>> ======================= *snap* ======================= >>>>> >>>>> We're still talking about the identity matrix, so I don't expect this >>>>> breakdown to be inherent in the method. >>>>> >>>>> Cheers, >>>>> Nico >>> >>> noisy identity matrix seems to work >>> lobpcg( np.diag(1+1e-6*np.random.randn(10)), np.random.randn(10,1), >>> verbosityLevel=10) >>> lobpcg( sparse.csr_matrix(np.diag(1+1e-4*np.random.randn(10))), >>> np.random.randn(10,1), verbosityLevel=10) >>> >>> I'm not sure what this means (since I'm no expert on this) >>> >>>>>> X = np.zeros( (n,1) ) >>>>>> X[-3:]=.1 >>>>>> lobpcg( A, X, verbosityLevel=10) >>> >>> Solving generalized eigenvalue problem with preconditioning >>> >>> matrix size 10 >>> block size 1 >>> >>> No constraints >>> >>> >>> iteration 0 >>> [False] >>> final eigenvalue: [ 1.] >>> final residual norms: [ ?1.92296269e-16] >>> (array([ 1.]), array([[ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0. ? ? ? ?], >>> ? ? ? [ 0.57735027], >>> ? ? ? [ 0.57735027], >>> ? ? ? [ 0.57735027]])) >>> >>> >>> final residual norm = 0. >>> >>>>>> X = np.ones( (n,1) ) >>>>>> lobpcg( A, X, verbosityLevel=10) >>> >>> Solving generalized eigenvalue problem with preconditioning >>> >>> matrix size 10 >>> block size 1 >>> >>> No constraints >>> >>> >>> iteration 0 >>> [False] >>> final eigenvalue: [ 1.] >>> final residual norms: [ 0.] >>> (array([ 1.]), array([[ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777], >>> ? ? ? [ 0.31622777]])) >>> >>> I have no idea if there are some inherent problems with starting >>> values and whether lobpcg is supposed to converge from any starting >>> values. >>> >>> Josef >>> >>> >>>>> >>>>> >>>>> >>>>> On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >>>>>> >>>>>> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >>>>>> wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I thought I give lobpcg a shot, and tried >>>>>>> >>>>>>> ====================== *snip* ====================== >>>>>>> from scipy.sparse.linalg import lobpcg >>>>>>> from scipy.sparse import identity >>>>>>> import numpy as np >>>>>>> >>>>>>> n = 10 >>>>>>> X = np.zeros( (n,1) ) >>>>>>> A = identity( n ) >>>>>>> lobpcg( A, X ) >>>>>>> ====================== *snap* ====================== >>>>>>> >>>>>>> On my machine, this yields >>>>>>> >>>>>>> ====================== *snip* ====================== >>>>>>> Traceback (most recent call last): >>>>>>> ?File "logpcg_test.py", line 8, in >>>>>>> ? ?lobpcg( A, X ) >>>>>>> ?File >>>>>>> "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>>> line 304, in lobpcg >>>>>>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>>>>>> ?File >>>>>>> "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>>> line 130, in b_orthonormalize >>>>>>> ? ?gramVBV = sla.cholesky( gramVBV ) >>>>>>> ?File >>>>>>> "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>>> line 66, in cholesky >>>>>>> ? 
?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, >>>>>>> clean=True) >>>>>>> ?File >>>>>>> "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>>> line 24, in _cholesky >>>>>>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % >>>>>>> info) >>>>>>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive >>>>>>> definite >>>>>>> ====================== *snap* ====================== >>>>>>> >>>>>>> Fail! >>>>>>> >>>>>>> Am I missing a library, or is that routine broken? >>>>>> >>>>>> It looks like a bug if X is all zeros. If at least 1 element of X is >>>>>> non-zero, it seems to work. >>>>>> >>>>>> Josef >>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> Nico >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From matthew.brett at gmail.com Wed Oct 13 15:30:09 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 13 Oct 2010 12:30:09 -0700 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: Hi, > the command : > > "git checkout farchs-from-c ?" > > needs to be run inside the numpy directory ? Right - thanks for the correction: > there is the full log (chackout, build, test) : > > http://http.geofemengineering.it/epilog/numpy_log_test I think you meant http://www.geofemengineering.it/epilog/numpy_log_test ? 
You're getting errors like these: ====================================================================== ERROR: test_return_character.TestF90ReturnCharacter.test_all ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/case.py", line 367, in setUp try_run(self.inst, ('setup', 'setUp')) File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/util.py", line 491, in try_run return func() File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", line 341, in setUp module_name=self.module_name) File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper memo[key] = func(*a, **kw) File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", line 156, in build_code module_name=module_name) File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper memo[key] = func(*a, **kw) File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", line 137, in build_module __import__(module_name) ImportError: dlopen(/var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so, 2): no suitable image found. Did find: /var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so: mach-o, but wrong architecture > seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) I hope that someone who knows what they are doing can help you. If no-one offers, then see if you can extract the compilation flags from the f2py build process somehow? Lacking experience of f2py, I ended up hacking the _installed_ version of numpy/f2py/tests/utils.py around line 125 to add print 'build stdout: ', out to see what flags it was using. Best, Matthew From nico.schloemer at gmail.com Wed Oct 13 15:42:16 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 21:42:16 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: Aaaand another thing: The matrix I'm looking at is in fact complex-valued, but Hermitian, for which LOBPCG should work. However: ========================== *snip* ========================== /opt/scipy/0.8.0/lib/python/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py:457: ComplexWarning: Casting complex values to real discards the imaginary part eigBlockVector = np.asarray( eigBlockVector[:,ii].astype( np.float64 ) ) ========================== *snap* ========================== And the computed result is not correct. :/ --Nico On Wed, Oct 13, 2010 at 9:23 PM, Nico Schl?mer wrote: >> It actually works for >> me: > > Okay, nice. I tried to run the script on another machine too, where it > appears to work alright. > This exact same code fails on my own machine locally, meaning I'm > probably missing some crucial dependencies (of Scipy?). Using Scipy > 0.8.0 on Gentoo. > Any idea what that could possibly be? > > Cheers, > Nico > > > > > On Wed, Oct 13, 2010 at 7:09 PM, Robert Cimrman wrote: >> On Wed, 13 Oct 2010, Nico Schl?mer wrote: >>> >>> Possibly the devs know -- how to get back to them? >> >> If I recall correctly, any non-zero X should be ok. It actually works for >> me: >> >> In [20]: X = np.random.rand( n,1 ) >> >> In [21]: lobpcg( A, X ) >> Out[21]: >> (array([ 1.]), >> ?array([[ 0.24881513], >> ? ? ? [ 0.35018185], >> ? ? ? [ 0.47060887], >> ? ? ? [ 0.0544507 ], >> ? ? ? [ 0.2243659 ], >> ? ? ? [ 0.22597527], >> ? ? ? 
[ 0.55431323], >> ? ? ? [ 0.40576857], >> ? ? ? [ 0.05081581], >> ? ? ? [ 0.12299473]])) >> >> In [23]: lobpcg( A, X ) >> Out[23]: >> (array([ 1.]), >> ?array([[ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777], >> ? ? ? [ 0.31622777]])) >> >> The eigenvalue is correctly computed (lambda = 1). Since A is a unit matrix >> and we are solving A x = lambda x, any nonzero x is an eigenvector. >> >> r. >> >>> --Nico >>> >>> >>> On Wed, Oct 13, 2010 at 5:32 PM, ? wrote: >>>> >>>> On Wed, Oct 13, 2010 at 10:58 AM, Nico Schl?mer >>>> wrote: >>>>> >>>>> In fact, for general >>>>> ?X = np.random.rand( n,1 ) >>>>> this thing always blows up -- look at the residual norms. >>>>> >>>>> Bug report? >>>>> Nico >>>>> >>>>> >>>>> >>>>> On Wed, Oct 13, 2010 at 4:50 PM, Nico Schl?mer >>>>> wrote: >>>>>> >>>>>> Really? I tried ones() instead, and got (with verbosity=10) >>>>>> >>>>>> ======================= *snip* ======================= >>>>>> Solving generalized eigenvalue problem with preconditioning >>>>>> >>>>>> matrix size 10 >>>>>> block size 1 >>>>>> >>>>>> No constraints >>>>>> >>>>>> >>>>>> iteration 0 >>>>>> [ True] >>>>>> current block size: 1 >>>>>> eigenvalue: [ 100.] >>>>>> residual norms: [ 990.] >>>>>> iteration 1 >>>>>> [ True] >>>>>> current block size: 1 >>>>>> eigenvalue: [ 0.] >>>>>> residual norms: [ ?9.60596010e+12] >>>>>> iteration 2 >>>>>> [ True] >>>>>> current block size: 1 >>>>>> eigenvalue: [ 0.] >>>>>> residual norms: [ ?1.63581388e+65] >>>>>> iteration 3 >>>>>> Warning: invalid value encountered in multiply >>>>>> [False] >>>>>> Warning: invalid value encountered in multiply >>>>>> final eigenvalue: [ 0.] >>>>>> final residual norms: [ nan] >>>>>> ======================= *snap* ======================= >>>>>> >>>>>> We're still talking about the identity matrix, so I don't expect this >>>>>> breakdown to be inherent in the method. >>>>>> >>>>>> Cheers, >>>>>> Nico >>>> >>>> noisy identity matrix seems to work >>>> lobpcg( np.diag(1+1e-6*np.random.randn(10)), np.random.randn(10,1), >>>> verbosityLevel=10) >>>> lobpcg( sparse.csr_matrix(np.diag(1+1e-4*np.random.randn(10))), >>>> np.random.randn(10,1), verbosityLevel=10) >>>> >>>> I'm not sure what this means (since I'm no expert on this) >>>> >>>>>>> X = np.zeros( (n,1) ) >>>>>>> X[-3:]=.1 >>>>>>> lobpcg( A, X, verbosityLevel=10) >>>> >>>> Solving generalized eigenvalue problem with preconditioning >>>> >>>> matrix size 10 >>>> block size 1 >>>> >>>> No constraints >>>> >>>> >>>> iteration 0 >>>> [False] >>>> final eigenvalue: [ 1.] >>>> final residual norms: [ ?1.92296269e-16] >>>> (array([ 1.]), array([[ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0. ? ? ? ?], >>>> ? ? ? [ 0.57735027], >>>> ? ? ? [ 0.57735027], >>>> ? ? ? [ 0.57735027]])) >>>> >>>> >>>> final residual norm = 0. >>>> >>>>>>> X = np.ones( (n,1) ) >>>>>>> lobpcg( A, X, verbosityLevel=10) >>>> >>>> Solving generalized eigenvalue problem with preconditioning >>>> >>>> matrix size 10 >>>> block size 1 >>>> >>>> No constraints >>>> >>>> >>>> iteration 0 >>>> [False] >>>> final eigenvalue: [ 1.] >>>> final residual norms: [ 0.] >>>> (array([ 1.]), array([[ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? 
[ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777], >>>> ? ? ? [ 0.31622777]])) >>>> >>>> I have no idea if there are some inherent problems with starting >>>> values and whether lobpcg is supposed to converge from any starting >>>> values. >>>> >>>> Josef >>>> >>>> >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Oct 13, 2010 at 4:29 PM, ? wrote: >>>>>>> >>>>>>> On Wed, Oct 13, 2010 at 10:21 AM, Nico Schl?mer >>>>>>> wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I thought I give lobpcg a shot, and tried >>>>>>>> >>>>>>>> ====================== *snip* ====================== >>>>>>>> from scipy.sparse.linalg import lobpcg >>>>>>>> from scipy.sparse import identity >>>>>>>> import numpy as np >>>>>>>> >>>>>>>> n = 10 >>>>>>>> X = np.zeros( (n,1) ) >>>>>>>> A = identity( n ) >>>>>>>> lobpcg( A, X ) >>>>>>>> ====================== *snap* ====================== >>>>>>>> >>>>>>>> On my machine, this yields >>>>>>>> >>>>>>>> ====================== *snip* ====================== >>>>>>>> Traceback (most recent call last): >>>>>>>> ?File "logpcg_test.py", line 8, in >>>>>>>> ? ?lobpcg( A, X ) >>>>>>>> ?File >>>>>>>> "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>>>> line 304, in lobpcg >>>>>>>> ? ?blockVectorX, blockVectorBX = b_orthonormalize( B, blockVectorX ) >>>>>>>> ?File >>>>>>>> "/usr/lib64/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py", >>>>>>>> line 130, in b_orthonormalize >>>>>>>> ? ?gramVBV = sla.cholesky( gramVBV ) >>>>>>>> ?File >>>>>>>> "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>>>> line 66, in cholesky >>>>>>>> ? ?c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, >>>>>>>> clean=True) >>>>>>>> ?File >>>>>>>> "/usr/lib64/python2.6/site-packages/scipy/linalg/decomp_cholesky.py", >>>>>>>> line 24, in _cholesky >>>>>>>> ? ?raise LinAlgError("%d-th leading minor not positive definite" % >>>>>>>> info) >>>>>>>> numpy.linalg.linalg.LinAlgError: 1-th leading minor not positive >>>>>>>> definite >>>>>>>> ====================== *snap* ====================== >>>>>>>> >>>>>>>> Fail! >>>>>>>> >>>>>>>> Am I missing a library, or is that routine broken? >>>>>>> >>>>>>> It looks like a bug if X is all zeros. If at least 1 element of X is >>>>>>> non-zero, it seems to work. 
>>>>>>> >>>>>>> Josef >>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Nico >>>>>>>> _______________________________________________ >>>>>>>> SciPy-User mailing list >>>>>>>> SciPy-User at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>> >>>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > From pav at iki.fi Wed Oct 13 15:46:50 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 13 Oct 2010 19:46:50 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? References: Message-ID: Wed, 13 Oct 2010 21:23:19 +0200, Nico Schl?mer wrote: >> It actually works for me: > > Okay, nice. I tried to run the script on another machine too, where it > appears to work alright. Did you use a random matrix as input? If yes, did you set the seed first, so that it is testing the same matrix? > This exact same code fails on my own machine locally, meaning I'm > probably missing some crucial dependencies (of Scipy?). Using Scipy > 0.8.0 on Gentoo. > Any idea what that could possibly be? The LOBPCG code in Scipy is pretty self-contained. So I can't guess much else than issues with BLAS or LAPACK? I'd maybe make a copy of lobpcg.py and put some checks in the symeig routine. But this is just guessing, I don't know in detail how LOBPCG works. -- Pauli Virtanen From pav at iki.fi Wed Oct 13 15:50:29 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 13 Oct 2010 19:50:29 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? References: Message-ID: Wed, 13 Oct 2010 21:23:19 +0200, Nico Schl?mer wrote: > Okay, nice. I tried to run the script on another machine too, where it > appears to work alright. > This exact same code fails on my own machine locally, meaning I'm > probably missing some crucial dependencies (of Scipy?). Using Scipy > 0.8.0 on Gentoo. > Any idea what that could possibly be? Could you maybe write and post a self-contained test case here? And ideally file a bug ticket with it. -- Pauli Virtanen From nico.schloemer at gmail.com Wed Oct 13 16:30:52 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 22:30:52 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: > Did you use a random matrix as input? If yes, did you set the seed first, > so that it is testing the same matrix? I'm using the identity matrix, as in =========================== *snip* =========================== from scipy.sparse.linalg import lobpcg from scipy.sparse import identity import numpy as np n = 5 X = np.ones( (n, 1) ) A = identity( n ) lobpcg( A, X, verbosityLevel = 10 ) =========================== *snap* =========================== Bug report at . 
Cheers, Nico On Wed, Oct 13, 2010 at 9:46 PM, Pauli Virtanen wrote: > Wed, 13 Oct 2010 21:23:19 +0200, Nico Schl?mer wrote: >>> It actually works for me: >> >> Okay, nice. I tried to run the script on another machine too, where it >> appears to work alright. > > Did you use a random matrix as input? If yes, did you set the seed first, > so that it is testing the same matrix? > >> This exact same code fails on my own machine locally, meaning I'm >> probably missing some crucial dependencies (of Scipy?). Using Scipy >> 0.8.0 on Gentoo. >> Any idea what that could possibly be? > > The LOBPCG code in Scipy is pretty self-contained. So I can't guess much > else than issues with BLAS or LAPACK? > > I'd maybe make a copy of lobpcg.py and put some checks in the symeig > routine. But this is just guessing, I don't know in detail how LOBPCG > works. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From bsouthey at gmail.com Wed Oct 13 16:48:28 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 13 Oct 2010 15:48:28 -0500 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: Message-ID: <4CB61B1C.1010600@gmail.com> On 10/13/2010 03:30 PM, Nico Schl?mer wrote: >> Did you use a random matrix as input? If yes, did you set the seed first, >> so that it is testing the same matrix? > I'm using the identity matrix, as in > > =========================== *snip* =========================== > from scipy.sparse.linalg import lobpcg > from scipy.sparse import identity > import numpy as np > > n = 5 > X = np.ones( (n, 1) ) > A = identity( n ) > lobpcg( A, X, verbosityLevel = 10 ) > =========================== *snap* =========================== > > Bug report at. > > Cheers, > Nico > > > > On Wed, Oct 13, 2010 at 9:46 PM, Pauli Virtanen wrote: >> Wed, 13 Oct 2010 21:23:19 +0200, Nico Schl?mer wrote: >>>> It actually works for me: >>> Okay, nice. I tried to run the script on another machine too, where it >>> appears to work alright. >> Did you use a random matrix as input? If yes, did you set the seed first, >> so that it is testing the same matrix? >> >>> This exact same code fails on my own machine locally, meaning I'm >>> probably missing some crucial dependencies (of Scipy?). Using Scipy >>> 0.8.0 on Gentoo. >>> Any idea what that could possibly be? >> The LOBPCG code in Scipy is pretty self-contained. So I can't guess much >> else than issues with BLAS or LAPACK? >> >> I'd maybe make a copy of lobpcg.py and put some checks in the symeig >> routine. But this is just guessing, I don't know in detail how LOBPCG >> works. >> >> -- >> Pauli Virtanen >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Due to past history, I see eigenvalue related problems I suspect lapack/blas/altas so please check your installation these packages. Can you please provide details on your system? Like OS (gentoo is insufficient), 32 vs 64 bit, python version, numpy version, lapack/blas/altas versions etc., such the output from np.show_config() Also, have you tried the dev versions of scipy? Really you need to find someone else to confirm this. 
Bruce From nico.schloemer at gmail.com Wed Oct 13 16:57:49 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 22:57:49 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: <4CB61B1C.1010600@gmail.com> References: <4CB61B1C.1010600@gmail.com> Message-ID: Bisection action yields: Line 131ff: print 'gramVBV', gramVBV sla.inv( gramVBV, overwrite_a = True ) print 'gramVBV', gramVBV return gramVBV [[ 2.23606798]] gramVBV [[ 2.23606798]] in the first iteration where it should be gramVBV [[ 2.23606798]] gramVBV [[ 0.4472136]] Right? It looks like "overwrite_a = True" is of no effect on my machine. A quick check independent of lobpcg confirms this. --Nico On Wed, Oct 13, 2010 at 10:48 PM, Bruce Southey wrote: > ?On 10/13/2010 03:30 PM, Nico Schl?mer wrote: >>> Did you use a random matrix as input? If yes, did you set the seed first, >>> so that it is testing the same matrix? >> I'm using the identity matrix, as in >> >> =========================== *snip* =========================== >> from scipy.sparse.linalg import lobpcg >> from scipy.sparse import identity >> import numpy as np >> >> n = 5 >> X = np.ones( (n, 1) ) >> A = identity( n ) >> lobpcg( A, X, verbosityLevel = 10 ) >> =========================== *snap* =========================== >> >> Bug report at. >> >> Cheers, >> Nico >> >> >> >> On Wed, Oct 13, 2010 at 9:46 PM, Pauli Virtanen ?wrote: >>> Wed, 13 Oct 2010 21:23:19 +0200, Nico Schl?mer wrote: >>>>> It actually works for me: >>>> Okay, nice. I tried to run the script on another machine too, where it >>>> appears to work alright. >>> Did you use a random matrix as input? If yes, did you set the seed first, >>> so that it is testing the same matrix? >>> >>>> This exact same code fails on my own machine locally, meaning I'm >>>> probably missing some crucial dependencies (of Scipy?). Using Scipy >>>> 0.8.0 on Gentoo. >>>> Any idea what that could possibly be? >>> The LOBPCG code in Scipy is pretty self-contained. So I can't guess much >>> else than issues with BLAS or LAPACK? >>> >>> I'd maybe make a copy of lobpcg.py and put some checks in the symeig >>> routine. But this is just guessing, I don't know in detail how LOBPCG >>> works. >>> >>> -- >>> Pauli Virtanen >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > Due to past history, I see eigenvalue related problems I suspect > lapack/blas/altas so please check your installation these packages. > > Can you please provide details on your system? > Like OS (gentoo is insufficient), 32 vs 64 bit, python version, numpy > version, lapack/blas/altas versions etc., > such the output from np.show_config() > > Also, have you tried the dev versions of scipy? > > Really you need to find someone else to confirm this. > > Bruce > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Wed Oct 13 17:34:26 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 13 Oct 2010 21:34:26 +0000 (UTC) Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? 
References: <4CB61B1C.1010600@gmail.com> Message-ID: Wed, 13 Oct 2010 22:57:49 +0200, Nico Schl?mer wrote: > Bisection action yields: > > Line 131ff: > > print 'gramVBV', gramVBV > sla.inv( gramVBV, overwrite_a = True ) > print 'gramVBV', gramVBV Well spotted, that piece of code is wrong. overwrite_a=True in general does not guarantee that the output is overwritten, it just gives the routine the possibility to do so if possible. For example, if the input matrix is in C-order but your system has only Fortran-lapack available (or vice versa), a copy is made. I think this may be the difference in your case between the different machines. (Of course, for the inverse you could just invert the transpose, and transpose after inverse, but the routine here is not smart enough to check for that possibility.) -- Pauli Virtanen From nico.schloemer at gmail.com Wed Oct 13 17:41:52 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Wed, 13 Oct 2010 23:41:52 +0200 Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: <4CB61B1C.1010600@gmail.com> Message-ID: Indeed. gramVBV = sla.inv( gramVBV, overwrite_a = True ) has the code run fine on all machines now (for the identity matrix that is). Will do more tests and report back. Cheers, Nico On Wed, Oct 13, 2010 at 11:34 PM, Pauli Virtanen wrote: > Wed, 13 Oct 2010 22:57:49 +0200, Nico Schl?mer wrote: >> Bisection action yields: >> >> Line 131ff: >> >> ? ? print 'gramVBV', gramVBV >> ? ? sla.inv( gramVBV, overwrite_a = True ) >> ? ? print 'gramVBV', gramVBV > > Well spotted, that piece of code is wrong. > > overwrite_a=True in general does not guarantee that the output is > overwritten, it just gives the routine the possibility to do so if > possible. > > For example, if the input matrix is in C-order but your system has only > Fortran-lapack available (or vice versa), a copy is made. I think this > may be the difference in your case between the different machines. > > (Of course, for the inverse you could just invert the transpose, and > transpose after inverse, but the routine here is not smart enough to > check for that possibility.) > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From massimodisasha at gmail.com Wed Oct 13 17:43:26 2010 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Wed, 13 Oct 2010 23:43:26 +0200 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: <0485804B-44BA-4127-9A44-FE2DF0EC6F5F@gmail.com> Hi, things ar going better now, i used an up to date python 2.7 (i386 + x86_64) available here [ http://pub.jbhannah.net/python/2.7/ ] file /usr/local/bin/python /usr/local/bin/python: Mach-O universal binary with 2 architectures /usr/local/bin/python (for architecture i386): Mach-O executable i386 /usr/local/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64 thanks to the author :-) to build numpy i forced the flags : export LDFLAGS="-arch x86_64 -arch i386 -Wall -undefined dynamic_lookup -bundle" export FFLAGS="-arch x86_64 -arch i386" sudo python setup.py install tring to build numpy (your git version) and testing it i have : Ran 3079 tests in 43.420s OK (KNOWNFAIL=4, SKIP=1) cool! 
:-) the full log are : build log : http://www.geofemengineering.it/epilog/numpy_build.txt test log : http://www.geofemengineering.it/epilog/numpy_test.txt Now building scipy, it build fine without errors, but running test i gote few errors : Ran 1731 tests in 866.721s FAILED (KNOWNFAIL=2, SKIP=6, errors=14) full log are : build log : http://www.geofemengineering.it/epilog/scipy_build.txt test log : http://www.geofemengineering.it/epilog/scipy_test.txt there are some test that fails :-/ maybe this can give you any clue on what's going wrong in scipy.linalg ? ld /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so ld: warning: -arch not specified ld: warning: in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so, file was built for i386 which is not the architecture being linked (x86_64) ld: could not find entry point "start" (perhaps missing crt1.o) for inferred architecture x86_64 epy:~ epy$ thanks! Il giorno 13/ott/2010, alle ore 21.30, Matthew Brett ha scritto: > Hi, > >> the command : >> >> "git checkout farchs-from-c " >> >> needs to be run inside the numpy directory ? > > Right - thanks for the correction: > >> there is the full log (chackout, build, test) : >> >> http://http.geofemengineering.it/epilog/numpy_log_test > > I think you meant http://www.geofemengineering.it/epilog/numpy_log_test ? > > You're getting errors like these: > > ====================================================================== > ERROR: test_return_character.TestF90ReturnCharacter.test_all > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/case.py", > line 367, in setUp > try_run(self.inst, ('setup', 'setUp')) > File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/util.py", > line 491, in try_run > return func() > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 341, in setUp > module_name=self.module_name) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 73, in wrapper > memo[key] = func(*a, **kw) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 156, in build_code > module_name=module_name) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 73, in wrapper > memo[key] = func(*a, **kw) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 137, in build_module > __import__(module_name) > ImportError: dlopen(/var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so, > 2): no suitable image found. Did find: > /var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so: > mach-o, but wrong architecture > >> seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) > > I hope that someone who knows what they are doing can help you. > > If no-one offers, then see if you can extract the compilation flags > from the f2py build process somehow? > > Lacking experience of f2py, I ended up hacking the _installed_ version > of numpy/f2py/tests/utils.py around line 125 to add > > print 'build stdout: ', out > > to see what flags it was using. 
> > Best, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From mihailo at ku.edu Wed Oct 13 19:57:39 2010 From: mihailo at ku.edu (mihailo) Date: Wed, 13 Oct 2010 16:57:39 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Extracting particular modules out of scipy Message-ID: <29955564.post@talk.nabble.com> Hello, I am new to python and scipy and I recently started working on a project which requires much numerical calculations, including ode solving, interpolating and integrating. My project is intended to be available to the general public. Here is my trouble: Scipy is a big package and I don't want to tell people they need to install entire scipy before they can run my code (if I don't have to). What is the best way to extract only modules I actually need from scipy (interpolate, odeint and integrate)? Thank you! Mihailo -- View this message in context: http://old.nabble.com/Extracting-particular-modules-out-of-scipy-tp29955564p29955564.html Sent from the Scipy-User mailing list archive at Nabble.com. From robert.kern at gmail.com Wed Oct 13 19:59:36 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Oct 2010 18:59:36 -0500 Subject: [SciPy-User] [SciPy-user] Extracting particular modules out of scipy In-Reply-To: <29955564.post@talk.nabble.com> References: <29955564.post@talk.nabble.com> Message-ID: On Wed, Oct 13, 2010 at 18:57, mihailo wrote: > > Hello, > > I am new to python and scipy and I recently started working on a project > which requires much numerical calculations, including ode solving, > interpolating and integrating. My project is intended to be available to the > general public. Here is my trouble: Scipy is a big package and I don't want > to tell people they need to install entire scipy before they can run my code > (if I don't have to). What is the best way to extract only modules I > actually need from scipy (interpolate, odeint and integrate)? Telling people to install a subset of scipy is even harder than telling them to install all of scipy. And if you are going to build them binaries yourself, you might as well build all of scipy, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From cimrman3 at ntc.zcu.cz Thu Oct 14 02:53:02 2010 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 14 Oct 2010 08:53:02 +0200 (CEST) Subject: [SciPy-User] scipy.sparse.linalg.lobpcg broken? In-Reply-To: References: <4CB61B1C.1010600@gmail.com> Message-ID: On Wed, 13 Oct 2010, Pauli Virtanen wrote: > Wed, 13 Oct 2010 22:57:49 +0200, Nico Schl?mer wrote: >> Bisection action yields: >> >> Line 131ff: >> >> print 'gramVBV', gramVBV >> sla.inv( gramVBV, overwrite_a = True ) >> print 'gramVBV', gramVBV > > Well spotted, that piece of code is wrong. > > overwrite_a=True in general does not guarantee that the output is > overwritten, it just gives the routine the possibility to do so if > possible. Ah, ok, I was not aware of that. It is not mentioned in the docstring. > For example, if the input matrix is in C-order but your system has only > Fortran-lapack available (or vice versa), a copy is made. I think this > may be the difference in your case between the different machines. Yes, this makes sense. Anyway, a note in the inv() doc would be nice. r. 
> (Of course, for the inverse you could just invert the transpose, and > transpose after inverse, but the routine here is not smart enough to > check for that possibility.) From nico.schloemer at gmail.com Thu Oct 14 03:40:55 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Thu, 14 Oct 2010 09:40:55 +0200 Subject: [SciPy-User] vdot vs. dot Message-ID: Hi, I played around with .dot() and .vdot() a bit, and noticed that -- as opposed to what I expected -- they behave quite differently. Consider the output of ===================== *snip* ===================== import numpy as np n = 10 k = 3 X = np.random.rand( k, 10 ) + 1j * np.random.rand( k, 10 ) Y = np.random.rand( 10, k ) + 1j * np.random.rand( 10, k ) print np.dot( X.conjugate(), Y ) print print np.vdot( X, Y ) ===================== *snap* ===================== vdot() returns a *scalar* here, and I've got no idea how it is computed. Why isn't vdot( x, y ) == dot( x.conjugate(), y )? Cheers, Nico From pav at iki.fi Thu Oct 14 05:00:01 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 14 Oct 2010 09:00:01 +0000 (UTC) Subject: [SciPy-User] vdot vs. dot References: Message-ID: Thu, 14 Oct 2010 09:40:55 +0200, Nico Schl?mer wrote: [clip] > vdot() returns a *scalar* here, and I've got no idea how it is computed. It's >>> print np.dot( X.ravel().conjugate(), Y.ravel() ) (16.8356363983+3.7240082208j) And the documentation is wrong, although the examples do show that it flattens the inputs. > Why isn't vdot( x, y ) == dot( x.conjugate(), y )? No idea, but it's probably always been like this. Pauli From devicerandom at gmail.com Thu Oct 14 05:11:00 2010 From: devicerandom at gmail.com (ms) Date: Thu, 14 Oct 2010 10:11:00 +0100 Subject: [SciPy-User] Integral Equation In-Reply-To: References: Message-ID: <4CB6C924.3030306@gmail.com> On 13/10/10 16:47, Wal?ria Antunes David wrote: > Hi all, > > Can anyone help me? I need to pass this integral equation for > scipy.integrate pack for python. > Well, i need to compile this equation in Python, but I do not know how to > assemble it in python. Can anyone help me? What problems do you have with the equation? Can you show us the code you wrote so far and the error messages? Without that, it's unlikely you can receive help -we do not do people's homework. thanks, m. > Thanks, > Waleria > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From devicerandom at gmail.com Thu Oct 14 05:12:42 2010 From: devicerandom at gmail.com (ms) Date: Thu, 14 Oct 2010 10:12:42 +0100 Subject: [SciPy-User] Integral Equation In-Reply-To: References: Message-ID: <4CB6C98A.5020503@gmail.com> On 13/10/10 16:47, Wal?ria Antunes David wrote: > Hi all, > > Can anyone help me? I need to pass this integral equation for > scipy.integrate pack for python. > Well, i need to compile this equation in Python, but I do not know how to > assemble it in python. Can anyone help me? Anyway, most information is here: http://www.scipy.org/SciPyPackages/Integrate It is a bit terse, but should get you started. 
> Thanks, > Waleria > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From anders_kk at hotmail.com Thu Oct 14 06:13:46 2010 From: anders_kk at hotmail.com (Anders Kabell Kristensen) Date: Thu, 14 Oct 2010 12:13:46 +0200 Subject: [SciPy-User] Which BLAS implementation is my SciPy library compiled against? Message-ID: Hi there, I'm using the machines at my local university department system and wonder how I can check which implementation of the BLAS interface SciPy was compiled against? I know it uses some BLAS implementation, since: id(numpy.core.multiarray.dot) and id(numpy.dot) give different results. That should be proof, right? So, how do I look up which impl. of BLAS is used - on Linux and Mac OS. Thanks a lot, Anders K -------------- next part -------------- An HTML attachment was scrubbed... URL: From nico.schloemer at gmail.com Thu Oct 14 06:18:09 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Thu, 14 Oct 2010 12:18:09 +0200 Subject: [SciPy-User] vdot vs. dot In-Reply-To: References: Message-ID: > No idea, but it's probably always been like this. Oh well, I'd consider this a bug in the design of vdot(), but probably not something we could fix that easily. Maybe a new function (cdot()?) could take care of this. In any case, the documentation needs updating here. Cheers, Nico On Thu, Oct 14, 2010 at 11:00 AM, Pauli Virtanen wrote: > Thu, 14 Oct 2010 09:40:55 +0200, Nico Schl?mer wrote: > [clip] >> vdot() returns a *scalar* here, and I've got no idea how it is computed. > > It's > > ? ? ? ?>>> print np.dot( X.ravel().conjugate(), Y.ravel() ) > ? ? ? ?(16.8356363983+3.7240082208j) > > And the documentation is wrong, although the examples do show that it > flattens the inputs. > >> Why isn't ?vdot( x, y ) == dot( x.conjugate(), y )? > > No idea, but it's probably always been like this. > > ? ? ? ?Pauli > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Oct 14 06:41:15 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Oct 2010 06:41:15 -0400 Subject: [SciPy-User] Which BLAS implementation is my SciPy library compiled against? In-Reply-To: References: Message-ID: On Thu, Oct 14, 2010 at 6:13 AM, Anders Kabell Kristensen wrote: > Hi there, > I'm using the machines at my local university department system and wonder > how I can check which implementation of the BLAS interface SciPy was > compiled against? I know it uses some BLAS implementation, since: > id(numpy.core.multiarray.dot) and id(numpy.dot) give different results. That > should be proof, right? > > So, how do I look up which impl. of BLAS is used - on Linux and Mac OS. >>> import scipy >>> scipy.show_config() Josef > > Thanks a lot, > Anders K > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From massimodisasha at gmail.com Thu Oct 14 08:03:19 2010 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Thu, 14 Oct 2010 14:03:19 +0200 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: i'm tring to figure out my problems .. 
but no clue yet :-/ actually this is the error i'm having with scipy : >>> from scipy.fftpack import fft Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/__init__.py", line 10, in from basic import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/basic.py", line 11, in import _fftpack ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture >>> have you any clu on how to fix it ? it seems to berelated to fftw libraries, i alsio tried to build libfftw from source using both archs flag : http://lists.apple.com/archives/xcode-users/2007/Apr/msg00127.html i used --arch i386 and --arch x86_64 (the same i used to build numpy) epy:scipy epy$ file /usr/local/lib/libfftw3f.a /usr/local/lib/libfftw3f.a: Mach-O universal binary with 2 architectures /usr/local/lib/libfftw3f.a (for architecture i386): current ar archive random library /usr/local/lib/libfftw3f.a (for architecture x86_64): current ar archive random library epy:scipy epy$ then i rebuild scipy .. but the error persists (seems scipy uses some internal fftw .. or maybe the problem is not related to fftw ?) tring an updatedb and locate libfftw i can see that osx don't has any libfftw (locate found only libfftw i manually installed tring to fix the scipy issue) please i'm without any clue on how to go ahead, i'm available for any test or provide log to you thanks a lot ! Massimo. Il giorno 13/ott/2010, alle ore 21.30, Matthew Brett ha scritto: > Hi, > >> the command : >> >> "git checkout farchs-from-c " >> >> needs to be run inside the numpy directory ? > > Right - thanks for the correction: > >> there is the full log (chackout, build, test) : >> >> http://http.geofemengineering.it/epilog/numpy_log_test > > I think you meant http://www.geofemengineering.it/epilog/numpy_log_test ? > > You're getting errors like these: > > ====================================================================== > ERROR: test_return_character.TestF90ReturnCharacter.test_all > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/case.py", > line 367, in setUp > try_run(self.inst, ('setup', 'setUp')) > File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/util.py", > line 491, in try_run > return func() > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 341, in setUp > module_name=self.module_name) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 73, in wrapper > memo[key] = func(*a, **kw) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 156, in build_code > module_name=module_name) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 73, in wrapper > memo[key] = func(*a, **kw) > File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", > line 137, in build_module > __import__(module_name) > ImportError: dlopen(/var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so, > 2): no suitable image found. 
Did find: > /var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so: > mach-o, but wrong architecture > >> seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) > > I hope that someone who knows what they are doing can help you. > > If no-one offers, then see if you can extract the compilation flags > from the f2py build process somehow? > > Lacking experience of f2py, I ended up hacking the _installed_ version > of numpy/f2py/tests/utils.py around line 125 to add > > print 'build stdout: ', out > > to see what flags it was using. > > Best, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From vincent at vincentdavis.net Thu Oct 14 08:50:46 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Thu, 14 Oct 2010 06:50:46 -0600 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: On Thu, Oct 14, 2010 at 6:03 AM, Massimo Di Stefano wrote: > i'm tring to figure out my problems .. but no clue yet :-/ > > > actually this is the error i'm having with scipy : > > >>>> from scipy.fftpack import fft > Traceback (most recent call last): > ?File "", line 1, in > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/__init__.py", line 10, in > ? ?from basic import * > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/basic.py", line 11, in > ? ?import _fftpack > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. ?Did find: > ? ? ? ?/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture >>>> > > > > have you any clu on how to fix it ? Well I very recently installed on 10.6.4 py27 numpy and scipy from source. I did not do the py27 batteries included (I should have) The build script looked good to me for the py27 you installed I was going to try it today. This is what I have been doing to install numpy as scipy. (don't copy and paste, the some of the quote and "-" characters are not the right ascii) I am using the att gfortran linked to on the numpy site. I am going to run through it again today and will let you know numscon: Needed to build numpy and scipy 1. git clone http://github.com/cournape/numscons.git 2. py27 setup.py build 3. py27 setup.py install nose: Needed to run numpy.test() for py3 http://python-nose.googlecode.com/svn/branches/ 1. hg clone https://python-nose.googlecode.com/hg/ python-nose 2. py27 setup.py build 3. py27 setup.py install Numpy 1..5.x source 1. LDFLAGS="-arch x86_64" FFLAGS=?-arch x86_64" py27 setupscons.py scons 2. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? py27 setupscons.py scons install Scipy: ?0.8.0b1? 1. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? py27 setupscons.py scons 2. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? 
py27 setupscons.py scons install Vincent > > it seems to berelated to fftw libraries, > i alsio tried to build libfftw from source using both archs flag : > > http://lists.apple.com/archives/xcode-users/2007/Apr/msg00127.html > > i used --arch i386 and --arch x86_64 (the same i used to build numpy) > > > epy:scipy epy$ file /usr/local/lib/libfftw3f.a > /usr/local/lib/libfftw3f.a: Mach-O universal binary with 2 architectures > /usr/local/lib/libfftw3f.a (for architecture i386): ? ? current ar archive random library > /usr/local/lib/libfftw3f.a (for architecture x86_64): ? current ar archive random library > epy:scipy epy$ > > > > then i rebuild scipy .. but the error persists > > (seems scipy uses some internal fftw .. or maybe the problem is not related to fftw ?) > > tring an updatedb and locate libfftw > i can see that osx don't has any libfftw > (locate found only libfftw i manually installed tring to fix the scipy issue) > > > please i'm without any clue on how to go ahead, > > i'm available for any test or provide log to you > > thanks a lot ! > > Massimo. > > > Il giorno 13/ott/2010, alle ore 21.30, Matthew Brett ha scritto: > >> Hi, >> >>> the command : >>> >>> "git checkout farchs-from-c ?" >>> >>> needs to be run inside the numpy directory ? >> >> Right - thanks for the correction: >> >>> there is the full log (chackout, build, test) : >>> >>> http://http.geofemengineering.it/epilog/numpy_log_test >> >> I think you meant http://www.geofemengineering.it/epilog/numpy_log_test ? >> >> You're getting errors like these: >> >> ====================================================================== >> ERROR: test_return_character.TestF90ReturnCharacter.test_all >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> ?File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/case.py", >> line 367, in setUp >> ? ?try_run(self.inst, ('setup', 'setUp')) >> ?File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/util.py", >> line 491, in try_run >> ? ?return func() >> ?File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >> line 341, in setUp >> ? ?module_name=self.module_name) >> ?File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >> line 73, in wrapper >> ? ?memo[key] = func(*a, **kw) >> ?File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >> line 156, in build_code >> ? ?module_name=module_name) >> ?File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >> line 73, in wrapper >> ? ?memo[key] = func(*a, **kw) >> ?File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >> line 137, in build_module >> ? ?__import__(module_name) >> ImportError: dlopen(/var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so, >> 2): no suitable image found. ?Did find: >> ? ? ? /var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so: >> mach-o, but wrong architecture >> >>> seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) >> >> I hope that someone who knows what they are doing can help you. >> >> If no-one offers, then see if you can extract the compilation flags >> from the f2py build process somehow? >> >> Lacking experience of f2py, I ended up hacking the _installed_ version >> of numpy/f2py/tests/utils.py around line 125 to add >> >> print 'build stdout: ', out >> >> to see what flags it was using. 
>> >> Best, >> >> Matthew >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Thanks Vincent Davis 720-301-3003 From waleriantunes at gmail.com Thu Oct 14 09:00:41 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Thu, 14 Oct 2010 10:00:41 -0300 Subject: [SciPy-User] Integral Equation In-Reply-To: <4CB6C98A.5020503@gmail.com> References: <4CB6C98A.5020503@gmail.com> Message-ID: I can't send my code, because i don't know how to do. Thanks, On Thu, Oct 14, 2010 at 6:12 AM, ms wrote: > On 13/10/10 16:47, Wal?ria Antunes David wrote: > > Hi all, > > > > Can anyone help me? I need to pass this integral equation for > > scipy.integrate pack for python. > > Well, i need to compile this equation in Python, but I do not know how to > > assemble it in python. Can anyone help me? > > Anyway, most information is here: > http://www.scipy.org/SciPyPackages/Integrate > > It is a bit terse, but should get you started. > > > Thanks, > > Waleria > > > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 14 09:53:56 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 14 Oct 2010 13:53:56 +0000 (UTC) Subject: [SciPy-User] Integral Equation References: <4CB6C98A.5020503@gmail.com> Message-ID: Thu, 14 Oct 2010 10:12:42 +0100, ms wrote: > On 13/10/10 16:47, Wal?ria Antunes David wrote: >> Hi all, >> >> Can anyone help me? I need to pass this integral equation for >> scipy.integrate pack for python. >> Well, i need to compile this equation in Python, but I do not know how >> to assemble it in python. Can anyone help me? > > Anyway, most information is here: > http://www.scipy.org/SciPyPackages/Integrate This is maybe more up-to-date: http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html http://docs.scipy.org/doc/scipy/reference/integrate.html From silva at lma.cnrs-mrs.fr Thu Oct 14 10:18:29 2010 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Thu, 14 Oct 2010 11:18:29 -0300 Subject: [SciPy-User] Integral Equation In-Reply-To: References: <4CB6C98A.5020503@gmail.com> Message-ID: <1287065910.3110.3.camel@Portable-s2m.cnrs-mrs.fr> Le jeudi 14 octobre 2010 ? 10:00 -0300, Wal?ria Antunes David a ?crit : > I can't send my code, because i don't know how to do. The expression you gave in your first post wasn't an equation! You have to find a formulation equivalent to your equation that is suitable to tools available in scipy.integrate or scipy.optimize (root finding functions). From nico.schloemer at gmail.com Thu Oct 14 10:30:15 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Thu, 14 Oct 2010 16:30:15 +0200 Subject: [SciPy-User] Q: Provide .H for numpy arrays? Message-ID: > In any case, I think that you need to raise this issue on the list for discussion. Raise! 
Now here's what for discussion: I noticed that one difference between numpy arrays and matrices is that ".H" (transpose + conjugation) is only implemented for matrices. ".T", however, being structurally completely equivalent, is implemented for both. While an actual use case for ".H" would be mass dot-products for multivectors. Right now, I guess what most people go with is ".T.conjugate()" where it's needed. Something that may play a role here is the fact that .vdot() does -- as opposed to .dot() -- not allow for dot-products with multivectors. Opinions? Cheers, Nico From mihailo at ku.edu Wed Oct 13 14:18:53 2010 From: mihailo at ku.edu (mihailo) Date: Wed, 13 Oct 2010 11:18:53 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Extracting particular modules out of scipy Message-ID: <29955564.post@talk.nabble.com> Hello, I am new to python and scipy and I recently started working on a project which requires much numerical calculations, including ode solving, interpolating and integrating. My project is intended to be available to the general public. Here is my trouble: Scipy is a big package and I don't want to tell people they need to install entire scipy before they can run my code (if I don't have to). What is the best way to extract only modules I actually need from scipy (interpolate, odeint and integrate)? Thank you! Mihailo -- View this message in context: http://old.nabble.com/Extracting-particular-modules-out-of-scipy-tp29955564p29955564.html Sent from the Scipy-User mailing list archive at Nabble.com. From Chris.Barker at noaa.gov Thu Oct 14 12:07:31 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 14 Oct 2010 09:07:31 -0700 Subject: [SciPy-User] [SciPy-user] Extracting particular modules out of scipy In-Reply-To: <29955564.post@talk.nabble.com> References: <29955564.post@talk.nabble.com> Message-ID: <4CB72AC3.5080508@noaa.gov> On 10/13/10 11:18 AM, mihailo wrote: > I am new to python and scipy and I recently started working on a project > which requires much numerical calculations, including ode solving, > interpolating and integrating. My project is intended to be available to the > general public. Here is my trouble: Scipy is a big package and I don't want > to tell people they need to install entire scipy before they can run my code > (if I don't have to). What is the best way to extract only modules I > actually need from scipy (interpolate, odeint and integrate)? Just pull them out by hand and make your own package. If you need a lot, and some of what you need is fortran compiled, then there is little point, but I've done it when I needed a few functions, didn't need anything fortran compiled, and fortran compiling was causing me pain. How tight is your users bandwidth/disk space? Note that you'll also probably have to clean up some imports that are no longer valid. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From dagss at student.matnat.uio.no Thu Oct 14 13:05:28 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 14 Oct 2010 19:05:28 +0200 Subject: [SciPy-User] Q: Provide .H for numpy arrays? In-Reply-To: References: Message-ID: <4CB73858.20308@student.matnat.uio.no> On 10/14/2010 04:30 PM, Nico Schl?mer wrote: >> In any case, I think that you need to raise this issue on the list for discussion. >> > Raise! 
> > Now here's what for discussion: > > I noticed that one difference between numpy arrays and matrices is > that ".H" (transpose + conjugation) is only implemented for matrices. > ".T", however, being structurally completely equivalent, is > implemented for both. > While an actual use case for ".H" would be mass dot-products for > multivectors. Right now, I guess what most people go with is > ".T.conjugate()" where it's needed. > > Something that may play a role here is the fact that .vdot() does -- > as opposed to .dot() -- not allow for dot-products with multivectors. > Does this belong on the numpy-discuss list? I think the proposal needs further details. ".T" is NOT completely equivalent because no copying takes place. Modifying "arr.T" modifies "arr" as well, while the ".H" of the matrix class makes a copy of the data. There is an alternative. Each array view could have a flag saying whether it is conjugated or not, and then "arr.H" would return a "conjugated view". This would be much more useful. Any routines actually accessing the data (item assignment, storing to disk, ufuncs...) would have special cases added to do the conjugation in the operations instead of having to copy the data. This would play very nice with constructs such as np.dot(arr.H, arr), because the underlying BLAS can take flag to conjugate the data (which to my knowledge is not available from NumPy currently). Of course, it is likely a lot of work. But the existance of the possibility of this path in the long run makes me negative towards the proposal of just implementing "arr.H" the easy way (making a copy) in the short run because it would make it impossible to introduce something much more useful later on. A naive implementation of "arr.H" would not work well with gigabyte-sized arrays on most computers, and is always available as "arr.T.conjugate()" anyway, which is more explicit about making a copy. Just my cents, Dag Sverre From massimodisasha at gmail.com Thu Oct 14 13:15:40 2010 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Thu, 14 Oct 2010 19:15:40 +0200 Subject: [SciPy-User] numpy and scipy on mac osx 10.6.4 In-Reply-To: References: <1B3C0B66-7F0A-46DE-A801-63012EAEC42B@gmail.com> Message-ID: <60FC8048-C134-4AB9-90C1-157CE540978C@gmail.com> Thanks infinite Vincent ! using the stable version fro both numpy and scipy and using scons as build system, all the test are ok and the fft error is fixed ! i tried to buld with scons the git and svn verions but both gived me the problem i posted so i suppose something goes wrong in the developer version. if you need to use me as guinea pig to test the latest version i will be happy to provide build time and log results thanks a lot for your preciouse help ! Ciao, Massimo. Il giorno 14/ott/2010, alle ore 14.50, Vincent Davis ha scritto: > On Thu, Oct 14, 2010 at 6:03 AM, Massimo Di Stefano > wrote: >> i'm tring to figure out my problems .. 
but no clue yet :-/ >> >> >> actually this is the error i'm having with scipy : >> >> >>>>> from scipy.fftpack import fft >> Traceback (most recent call last): >> File "", line 1, in >> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/__init__.py", line 10, in >> from basic import * >> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/basic.py", line 11, in >> import _fftpack >> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find: >> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/fftpack/_fftpack.so: mach-o, but wrong architecture >>>>> >> >> >> >> have you any clu on how to fix it ? > > Well I very recently installed on 10.6.4 py27 numpy and scipy from > source. I did not do the py27 batteries included (I should have) The > build script looked good to me for the py27 you installed I was going > to try it today. > > This is what I have been doing to install numpy as scipy. > (don't copy and paste, the some of the quote and "-" characters are > not the right ascii) > I am using the att gfortran linked to on the numpy site. > > I am going to run through it again today and will let you know > > numscon: Needed to build numpy and scipy > > 1. git clone http://github.com/cournape/numscons.git > 2. py27 setup.py build > 3. py27 setup.py install > > nose: Needed to run numpy.test() > for py3 http://python-nose.googlecode.com/svn/branches/ > > 1. hg clone https://python-nose.googlecode.com/hg/ python-nose > 2. py27 setup.py build > 3. py27 setup.py install > > Numpy 1..5.x source > > 1. LDFLAGS="-arch x86_64" FFLAGS=?-arch x86_64" py27 setupscons.py scons > 2. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? py27 setupscons.py > scons install > > Scipy: ?0.8.0b1? > > 1. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? py27 setupscons.py scons > 2. LDFLAGS=?-arch x86_64? FFLAGS=?-arch x86_64? py27 setupscons.py > scons install > > Vincent >> >> it seems to berelated to fftw libraries, >> i alsio tried to build libfftw from source using both archs flag : >> >> http://lists.apple.com/archives/xcode-users/2007/Apr/msg00127.html >> >> i used --arch i386 and --arch x86_64 (the same i used to build numpy) >> >> >> epy:scipy epy$ file /usr/local/lib/libfftw3f.a >> /usr/local/lib/libfftw3f.a: Mach-O universal binary with 2 architectures >> /usr/local/lib/libfftw3f.a (for architecture i386): current ar archive random library >> /usr/local/lib/libfftw3f.a (for architecture x86_64): current ar archive random library >> epy:scipy epy$ >> >> >> >> then i rebuild scipy .. but the error persists >> >> (seems scipy uses some internal fftw .. or maybe the problem is not related to fftw ?) >> >> tring an updatedb and locate libfftw >> i can see that osx don't has any libfftw >> (locate found only libfftw i manually installed tring to fix the scipy issue) >> >> >> please i'm without any clue on how to go ahead, >> >> i'm available for any test or provide log to you >> >> thanks a lot ! >> >> Massimo. >> >> >> Il giorno 13/ott/2010, alle ore 21.30, Matthew Brett ha scritto: >> >>> Hi, >>> >>>> the command : >>>> >>>> "git checkout farchs-from-c " >>>> >>>> needs to be run inside the numpy directory ? 
>>> >>> Right - thanks for the correction: >>> >>>> there is the full log (chackout, build, test) : >>>> >>>> http://http.geofemengineering.it/epilog/numpy_log_test >>> >>> I think you meant http://www.geofemengineering.it/epilog/numpy_log_test ? >>> >>> You're getting errors like these: >>> >>> ====================================================================== >>> ERROR: test_return_character.TestF90ReturnCharacter.test_all >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/case.py", >>> line 367, in setUp >>> try_run(self.inst, ('setup', 'setUp')) >>> File "/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg/nose/util.py", >>> line 491, in try_run >>> return func() >>> File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >>> line 341, in setUp >>> module_name=self.module_name) >>> File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >>> line 73, in wrapper >>> memo[key] = func(*a, **kw) >>> File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >>> line 156, in build_code >>> module_name=module_name) >>> File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >>> line 73, in wrapper >>> memo[key] = func(*a, **kw) >>> File "/Library/Python/2.6/site-packages/numpy/f2py/tests/util.py", >>> line 137, in build_module >>> __import__(module_name) >>> ImportError: dlopen(/var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so, >>> 2): no suitable image found. Did find: >>> /var/folders/1H/1HF3mbiVFOOWnbfLVKDmc++++TI/-Tmp-/tmp87fgzl/_test_ext_module_5403.so: >>> mach-o, but wrong architecture >>> >>>> seems fortran is broken again, i'm using the fortran that comes form the R research site (as suggested on the scipy site) >>> >>> I hope that someone who knows what they are doing can help you. >>> >>> If no-one offers, then see if you can extract the compilation flags >>> from the f2py build process somehow? >>> >>> Lacking experience of f2py, I ended up hacking the _installed_ version >>> of numpy/f2py/tests/utils.py around line 125 to add >>> >>> print 'build stdout: ', out >>> >>> to see what flags it was using. >>> >>> Best, >>> >>> Matthew >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Thanks > Vincent Davis > 720-301-3003 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From dagss at student.matnat.uio.no Thu Oct 14 14:27:58 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 14 Oct 2010 20:27:58 +0200 Subject: [SciPy-User] Q: Provide .H for numpy arrays? In-Reply-To: <4CB73858.20308@student.matnat.uio.no> References: <4CB73858.20308@student.matnat.uio.no> Message-ID: <4CB74BAE.6060303@student.matnat.uio.no> On 10/14/2010 07:05 PM, Dag Sverre Seljebotn wrote: > On 10/14/2010 04:30 PM, Nico Schl?mer wrote: > >>> In any case, I think that you need to raise this issue on the list for discussion. >>> >>> >> Raise! 
>>
>> Now here's what for discussion:
>>
>> I noticed that one difference between numpy arrays and matrices is
>> that ".H" (transpose + conjugation) is only implemented for matrices.
>> ".T", however, being structurally completely equivalent, is
>> implemented for both.
>> While an actual use case for ".H" would be mass dot-products for
>> multivectors. Right now, I guess what most people go with is
>> ".T.conjugate()" where it's needed.
>>
>> Something that may play a role here is the fact that .vdot() does --
>> as opposed to .dot() -- not allow for dot-products with multivectors.
>>
>
> Does this belong on the numpy-discuss list?
>
> I think the proposal needs further details. ".T" is NOT completely
> equivalent because no copying takes place. Modifying "arr.T" modifies
> "arr" as well, while the ".H" of the matrix class makes a copy of the data.
>
> There is an alternative. Each array view could have a flag saying
> whether it is conjugated or not, and then "arr.H" would return a
> "conjugated view". This would be much more useful. Any routines actually
> accessing the data (item assignment, storing to disk, ufuncs...) would
> have special cases added to do the conjugation in the operations instead
> of having to copy the data.
>
> This would play very nice with constructs such as np.dot(arr.H, arr),
> because the underlying BLAS can take flag to conjugate the data (which
> to my knowledge is not available from NumPy currently).
>

Correction: It's available in vdot.

Dag Sverre

> Of course, it is likely a lot of work. But the existance of the
> possibility of this path in the long run makes me negative towards the
> proposal of just implementing "arr.H" the easy way (making a copy) in
> the short run because it would make it impossible to introduce something
> much more useful later on. A naive implementation of "arr.H" would not
> work well with gigabyte-sized arrays on most computers, and is always
> available as "arr.T.conjugate()" anyway, which is more explicit about
> making a copy.
>
> Just my cents,
>
> Dag Sverre
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From fperez.net at gmail.com  Thu Oct 14 14:51:28 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 14 Oct 2010 11:51:28 -0700
Subject: [SciPy-User] Q: Provide .H for numpy arrays?
In-Reply-To: <4CB73858.20308@student.matnat.uio.no>
References: <4CB73858.20308@student.matnat.uio.no>
Message-ID: 

On Thu, Oct 14, 2010 at 10:05 AM, Dag Sverre Seljebotn wrote:
> Of course, it is likely a lot of work. But the existance of the
> possibility of this path in the long run makes me negative towards the
> proposal of just implementing "arr.H" the easy way (making a copy) in
> the short run because it would make it impossible to introduce something
> much more useful later on. A naive implementation of "arr.H" would not
> work well with gigabyte-sized arrays on most computers, and is always
> available as "arr.T.conjugate()" anyway, which is more explicit about
> making a copy.

+1 to your view: anything making an explicit copy of something
potentially large better have a very visible '()' in there.

Regards,

f

From nico.schloemer at gmail.com  Thu Oct 14 15:18:56 2010
From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=)
Date: Thu, 14 Oct 2010 21:18:56 +0200
Subject: [SciPy-User] Q: Provide .H for numpy arrays?
In-Reply-To: <4CB73858.20308@student.matnat.uio.no>
References: <4CB73858.20308@student.matnat.uio.no>
Message-ID: 

> I think the proposal needs further details. ".T" is NOT completely
> equivalent because no copying takes place.

That's right, I was more thinking about semantic, mathematical
equivalence. Looking at the code, this is certainly a different matter
about which I, as a Scipy user, don't know much.

As mentioned in the previous mail, at this moment there appears to be no
possibility to efficiently block-dot-product two complex-valued
blockvectors as vdot() doesn't work for blocks, and

np.dot( z.T.conjugate(), z )

unnecessarily creates and intermediate vector. Correct? The suggested
view option would certainly help out here.

Cheers,
Nico

On Thu, Oct 14, 2010 at 7:05 PM, Dag Sverre Seljebotn wrote:
> On 10/14/2010 04:30 PM, Nico Schl?mer wrote:
>>> In any case, I think that you need to raise this issue on the list for discussion.
>>>
>> Raise!
>>
>> Now here's what for discussion:
>>
>> I noticed that one difference between numpy arrays and matrices is
>> that ".H" (transpose + conjugation) is only implemented for matrices.
>> ".T", however, being structurally completely equivalent, is
>> implemented for both.
>> While an actual use case for ".H" would be mass dot-products for
>> multivectors. Right now, I guess what most people go with is
>> ".T.conjugate()" where it's needed.
>>
>> Something that may play a role here is the fact that .vdot() does --
>> as opposed to .dot() -- not allow for dot-products with multivectors.
>>
>
> Does this belong on the numpy-discuss list?
>
> I think the proposal needs further details. ".T" is NOT completely
> equivalent because no copying takes place. Modifying "arr.T" modifies
> "arr" as well, while the ".H" of the matrix class makes a copy of the data.
>
> There is an alternative. Each array view could have a flag saying
> whether it is conjugated or not, and then "arr.H" would return a
> "conjugated view". This would be much more useful. Any routines actually
> accessing the data (item assignment, storing to disk, ufuncs...) would
> have special cases added to do the conjugation in the operations instead
> of having to copy the data.
>
> This would play very nice with constructs such as np.dot(arr.H, arr),
> because the underlying BLAS can take flag to conjugate the data (which
> to my knowledge is not available from NumPy currently).
>
> Of course, it is likely a lot of work. 
But the existance of the > possibility of this path in the long run makes me negative towards the > proposal of just implementing "arr.H" the easy way (making a copy) in > the short run because it would make it impossible to introduce something > much more useful later on. A naive implementation of "arr.H" would not > work well with gigabyte-sized arrays on most computers, and is always > available as "arr.T.conjugate()" anyway, which is more explicit about > making a copy. > > Just my cents, > > Dag Sverre > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Thu Oct 14 15:50:37 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 14 Oct 2010 19:50:37 +0000 (UTC) Subject: [SciPy-User] Q: Provide .H for numpy arrays? References: <4CB73858.20308@student.matnat.uio.no> Message-ID: Thu, 14 Oct 2010 21:18:56 +0200, Nico Schl?mer wrote: >> I think the proposal needs further details. ".T" is NOT completely >> equivalent because no copying takes place. > > That's right, I was more thinking about semantic, mathematical > equivalence. Looking at the code, this is certainly a different matter > about which I, as a Scipy user, don't know much. > > As mentioned in the previous mail, at this moment there appears to be no > possibility to efficiently block-dot-product two complex-valued > blockvectors as vdot() doesn't work for blocks, and > > np.dot( z.T.conjugate(), z ) > > unnecessarily creates and intermediate vector. Correct? The suggested > view option would certainly help out here. You can probably write a couple of lines function calling ZGEMM from scipy.lib.blas. But yes, nothing out of the box. -- Pauli Virtanen From jsseabold at gmail.com Thu Oct 14 17:21:22 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 14 Oct 2010 17:21:22 -0400 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: > Dear All, > > I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) > > ? ? ? ?Cheers, > ? ? ? ?Torkild > > ------------------------------------------------------- > import numpy > import pylab > from scipy import * > from scipy import optimize > > ## This is y-data: > y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) > > ## This is x-data: > t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) > > ## This is the equation: > fitfunc = lambda p, x: ? 
?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) > > ## > errfunc = lambda p, x, y: fitfunc(p,x) -y > > guess = [0, max(y_data), 0.1, 140, -0.1, 270] > > bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] > > ## This seems to work ok: > p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) > print 'Estimates from leastsq \n', p2,success > > > ## But this does not: > best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) The minimization routines, I believe, in fmin expect a function that maps from to a scalar. So you need to tell fmin_l_bfgs that you want to minimize the sum of squared errors, optimze.leastsq assumes this. So just define one more function that sums the squared errors and minimize it errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) Now, run it without bounds to make sure we get the same thing boundsnone = [(None,None)]*6 Notice that you also have to tell fmin_l_bfgs_b to approximate the gradient or else it assumes that your objective function also returns its gradient best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) p2 array([ 6.79548883e-02, 3.68922503e-01, 7.55565728e-02, 1.41378227e+02, 2.91307814e+00, 2.70608242e+02]) best array([ 6.79585333e-02, -2.33026316e-01, -7.55409880e-02, 1.41388265e+02, -1.36069434e+00, 2.70160779e+02]) Cheers, Skipper From jsseabold at gmail.com Thu Oct 14 17:32:42 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 14 Oct 2010 17:32:42 -0400 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: > On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >> Dear All, >> >> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >> >> ? ? ? ?Cheers, >> ? ? ? ?Torkild >> >> ------------------------------------------------------- >> import numpy >> import pylab >> from scipy import * >> from scipy import optimize >> >> ## This is y-data: >> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >> >> ## This is x-data: >> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >> >> ## This is the equation: >> fitfunc = lambda p, x: ? 
?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >> >> ## >> errfunc = lambda p, x, y: fitfunc(p,x) -y >> >> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >> >> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >> >> ## This seems to work ok: >> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >> print 'Estimates from leastsq \n', p2,success >> >> >> ## But this does not: >> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) > > The minimization routines, I believe, in fmin expect a function that > maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want > to minimize the sum of squared errors, optimze.leastsq assumes this. > So just define one more function that sums the squared errors and > minimize it > > errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) > > Now, run it without bounds to make sure we get the same thing > > boundsnone = [(None,None)]*6 > > Notice that you also have to tell fmin_l_bfgs_b to approximate the > gradient or else it assumes that your objective function also returns > its gradient > > best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, > approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) > > p2 > array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, > ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) > > best > array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, > ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) > I just realized that these don't come up with the same thing. I don't have an answer for why yet. Skipper From jsseabold at gmail.com Thu Oct 14 17:58:50 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 14 Oct 2010 17:58:50 -0400 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold wrote: > On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: >> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >>> Dear All, >>> >>> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >>> >>> ? ? ? ?Cheers, >>> ? ? ? ?Torkild >>> >>> ------------------------------------------------------- >>> import numpy >>> import pylab >>> from scipy import * >>> from scipy import optimize >>> >>> ## This is y-data: >>> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >>> >>> ## This is x-data: >>> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >>> >>> ## This is the equation: >>> fitfunc = lambda p, x: ? 
?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >>> >>> ## >>> errfunc = lambda p, x, y: fitfunc(p,x) -y >>> >>> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >>> >>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >>> >>> ## This seems to work ok: >>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >>> print 'Estimates from leastsq \n', p2,success >>> >>> >>> ## But this does not: >>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) >> >> The minimization routines, I believe, in fmin expect a function that >> maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want >> to minimize the sum of squared errors, optimze.leastsq assumes this. >> So just define one more function that sums the squared errors and >> minimize it >> >> errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) >> >> Now, run it without bounds to make sure we get the same thing >> >> boundsnone = [(None,None)]*6 >> >> Notice that you also have to tell fmin_l_bfgs_b to approximate the >> gradient or else it assumes that your objective function also returns >> its gradient >> >> best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, >> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) >> >> p2 >> array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, >> ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) >> >> best >> array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, >> ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) >> > > I just realized that these don't come up with the same thing. ?I don't > have an answer for why yet. > > Skipper > Oh, ret = optimize.leastsq(errfunc, guess, args=(t,y_data)) ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) fitfunc(ret[0],t) array([ 0.0690421 , 0.0731951 , 0.08481868, 0.14388978, 0.199337 , 0.30971974, 0.33570587, 0.3602918 , 0.36414477, 0.36777158, 0.36874788, 0.36881958, 0.14080121, 0.06794499, 0.06795339, 0.0679536 , 0.0679545 , 0.06795477, 0.06795485]) fitfunc(ret2[0],t) array([ 0.06904625, 0.07319943, 0.0848205 , 0.14386744, 0.19929593, 0.30968735, 0.3356897 , 0.36029973, 0.3641578 , 0.36779021, 0.36876834, 0.3688402 , 0.14077023, 0.06795562, 0.06795703, 0.06795724, 0.06795815, 0.06795842, 0.0679585 ]) errfuncsumsq(ret[0], t, y_data) 0.079297668259408899 errfuncsumsq(ret2[0], t, y_data) 0.079298042836826454 Skipper From seb.haase at gmail.com Fri Oct 15 04:31:22 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Fri, 15 Oct 2010 10:31:22 +0200 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: On Thu, Oct 14, 2010 at 11:58 PM, Skipper Seabold wrote: > On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold wrote: >> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: >>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >>>> Dear All, >>>> >>>> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. 
Some helps & hints would be most appreciated :-) >>>> >>>> ? ? ? ?Cheers, >>>> ? ? ? ?Torkild >>>> >>>> ------------------------------------------------------- >>>> import numpy >>>> import pylab >>>> from scipy import * >>>> from scipy import optimize >>>> >>>> ## This is y-data: >>>> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >>>> >>>> ## This is x-data: >>>> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >>>> >>>> ## This is the equation: >>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >>>> >>>> ## >>>> errfunc = lambda p, x, y: fitfunc(p,x) -y >>>> >>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >>>> >>>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >>>> >>>> ## This seems to work ok: >>>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >>>> print 'Estimates from leastsq \n', p2,success >>>> >>>> >>>> ## But this does not: >>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) >>> >>> The minimization routines, I believe, in fmin expect a function that >>> maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want >>> to minimize the sum of squared errors, optimze.leastsq assumes this. >>> So just define one more function that sums the squared errors and >>> minimize it >>> >>> errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) >>> >>> Now, run it without bounds to make sure we get the same thing >>> >>> boundsnone = [(None,None)]*6 >>> >>> Notice that you also have to tell fmin_l_bfgs_b to approximate the >>> gradient or else it assumes that your objective function also returns >>> its gradient >>> >>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, >>> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) >>> >>> p2 >>> array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, >>> ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) >>> >>> best >>> array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, >>> ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) >>> >> >> I just realized that these don't come up with the same thing. ?I don't >> have an answer for why yet. >> >> Skipper >> > > Oh, > > ret = optimize.leastsq(errfunc, guess, args=(t,y_data)) > > ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, > bounds=boundsnone, args=(t, y_data), iprint=2) > > fitfunc(ret[0],t) > array([ 0.0690421 , ?0.0731951 , ?0.08481868, ?0.14388978, ?0.199337 ?, > ? ? ? ?0.30971974, ?0.33570587, ?0.3602918 , ?0.36414477, ?0.36777158, > ? ? ? ?0.36874788, ?0.36881958, ?0.14080121, ?0.06794499, ?0.06795339, > ? ? ? ?0.0679536 , ?0.0679545 , ?0.06795477, ?0.06795485]) > > fitfunc(ret2[0],t) > array([ 0.06904625, ?0.07319943, ?0.0848205 , ?0.14386744, ?0.19929593, > ? ? ? ?0.30968735, ?0.3356897 , ?0.36029973, ?0.3641578 , ?0.36779021, > ? ? ? ?0.36876834, ?0.3688402 , ?0.14077023, ?0.06795562, ?0.06795703, > ? ? ? ?0.06795724, ?0.06795815, ?0.06795842, ?0.0679585 ]) > > errfuncsumsq(ret[0], t, y_data) > 0.079297668259408899 > > errfuncsumsq(ret2[0], t, y_data) > 0.079298042836826454 > Very nice example ! However, is it correct, that *within* the given bounds the result is just a constant ? 
>>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2) >>> ret3 ([ 1.00000000e-01 1.00000000e-01 5.00000000e-02 1.39979309e+02 -5.00000000e-02 2.70003604e+02], 0.55408092, {'warnflag': 0, 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad': array([-4.41837799, 1.03037829, 0. , 0. , 0. , 0. ]), 'funcalls': 2}) >>> >>> errfuncsumsq(ret3[0], t, y_data) 0.55408092 >>> >>> >>> bounds [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)] >>> fitfunc(ret3[0],t) [ 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1] >>> (Note: fitfunc above used wronge 't' instead of 'x', correct is: >>> fitfunc = lambda p, x: p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1) ) Thanks, Sebastian Haase From tapadamocho at yahoo.com Fri Oct 15 11:25:12 2010 From: tapadamocho at yahoo.com (Alex Luso) Date: Fri, 15 Oct 2010 08:25:12 -0700 (PDT) Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 Message-ID: <81942.22876.qm@web52503.mail.re2.yahoo.com> Hi, I am trying to install Scipy from source on Python 2.7, Mac OS X 10.6.4 Despite following the instructions on the install page (i.e. use gcc++-4.0, etc, gfortran install), the install fails as follows: -- snip -- many warnings ... ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 in file for architecture ppc64 collect2: ld returned 1 exit status ld: warning: object file compiled with -mlong-branch which is no longer needed. To remove this warning, recompile without -mlong-branch: /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/crt3.o lipo: can't open input file: /var/folders/4F/4F3fgpofFGSlvDrh+OmSHE+++TI/-Tmp-//cco4e1PE.out (No such file or directory) error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -arch x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfft.o build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/drfft.o build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zrfft.o build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/src/dct.o build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/fortranobject.o -Lbuild/temp.macosx-10.5-fat3-2.7 -ldfftpack -lfftpack -lgfortran -o build/lib.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpack.so" failed with exit status 1 -- snip -- As a test, I tried to manually re-run that last offending command (the gfortran one) after manually removing -arch ppc64, no error is reported. Is that a clue? Info: Mac OS 10.6.4 I am using Python binary from python.org (the 64-bit version of it): $ python --version Python 2.7 $ gcc --version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664) $ gfortran --version GNU Fortran (GCC) 4.2.3 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Fri Oct 15 11:32:37 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 15 Oct 2010 23:32:37 +0800 Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: <81942.22876.qm@web52503.mail.re2.yahoo.com> References: <81942.22876.qm@web52503.mail.re2.yahoo.com> Message-ID: On Fri, Oct 15, 2010 at 11:25 PM, Alex Luso wrote: > > Hi, > > I am trying to install Scipy from source on Python 2.7, Mac OS X 10.6.4 > > Despite following the instructions on the install page (i.e. use > gcc++-4.0, etc, gfortran install), the install fails as follows: > > -- snip -- > many warnings > ... > ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 in > file for architecture ppc64 > collect2: ld returned 1 exit status > ld: warning: object file compiled with -mlong-branch which is no longer > needed. To remove this warning, recompile without -mlong-branch: > /usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/crt3.o > lipo: can't open input file: > /var/folders/4F/4F3fgpofFGSlvDrh+OmSHE+++TI/-Tmp-//cco4e1PE.out (No such > file or directory) > error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -arch > x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle > build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfft.o > build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/drfft.o > build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/src/dct.o > build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/fortranobject.o > -Lbuild/temp.macosx-10.5-fat3-2.7 -ldfftpack -lfftpack -lgfortran -o > build/lib.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpack.so" failed with exit > status 1 > -- snip -- > > As a test, I tried to manually re-run that last offending command (the > gfortran one) after manually removing -arch ppc64, no error is reported. Is > that a clue? > It is, see http://projects.scipy.org/numpy/ticket/1399. The fix at http://github.com/matthew-brett/numpy/compare/numpy:master...farchs-from-cshould fix this problem. Cheers, Ralf > Info: > Mac OS 10.6.4 > I am using Python binary from python.org (the 64-bit version of it): > $ python --version > Python 2.7 > > $ gcc --version > i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664) > > $ gfortran --version > GNU Fortran (GCC) 4.2.3 > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jsseabold at gmail.com Fri Oct 15 11:53:41 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 15 Oct 2010 11:53:41 -0400 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: On Fri, Oct 15, 2010 at 4:31 AM, Sebastian Haase wrote: > On Thu, Oct 14, 2010 at 11:58 PM, Skipper Seabold wrote: >> On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold wrote: >>> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: >>>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >>>>> Dear All, >>>>> >>>>> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >>>>> >>>>> ? ? ? ?Cheers, >>>>> ? ? ? ?Torkild >>>>> >>>>> ------------------------------------------------------- >>>>> import numpy >>>>> import pylab >>>>> from scipy import * >>>>> from scipy import optimize >>>>> >>>>> ## This is y-data: >>>>> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >>>>> >>>>> ## This is x-data: >>>>> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >>>>> >>>>> ## This is the equation: >>>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >>>>> >>>>> ## >>>>> errfunc = lambda p, x, y: fitfunc(p,x) -y >>>>> >>>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >>>>> >>>>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >>>>> >>>>> ## This seems to work ok: >>>>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >>>>> print 'Estimates from leastsq \n', p2,success >>>>> >>>>> >>>>> ## But this does not: >>>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) >>>> >>>> The minimization routines, I believe, in fmin expect a function that >>>> maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want >>>> to minimize the sum of squared errors, optimze.leastsq assumes this. >>>> So just define one more function that sums the squared errors and >>>> minimize it >>>> >>>> errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) >>>> >>>> Now, run it without bounds to make sure we get the same thing >>>> >>>> boundsnone = [(None,None)]*6 >>>> >>>> Notice that you also have to tell fmin_l_bfgs_b to approximate the >>>> gradient or else it assumes that your objective function also returns >>>> its gradient >>>> >>>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, >>>> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) >>>> >>>> p2 >>>> array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, >>>> ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) >>>> >>>> best >>>> array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, >>>> ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) >>>> >>> >>> I just realized that these don't come up with the same thing. ?I don't >>> have an answer for why yet. 
>>> >>> Skipper >>> >> >> Oh, >> >> ret = optimize.leastsq(errfunc, guess, args=(t,y_data)) >> >> ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, >> bounds=boundsnone, args=(t, y_data), iprint=2) >> >> fitfunc(ret[0],t) >> array([ 0.0690421 , ?0.0731951 , ?0.08481868, ?0.14388978, ?0.199337 ?, >> ? ? ? ?0.30971974, ?0.33570587, ?0.3602918 , ?0.36414477, ?0.36777158, >> ? ? ? ?0.36874788, ?0.36881958, ?0.14080121, ?0.06794499, ?0.06795339, >> ? ? ? ?0.0679536 , ?0.0679545 , ?0.06795477, ?0.06795485]) >> >> fitfunc(ret2[0],t) >> array([ 0.06904625, ?0.07319943, ?0.0848205 , ?0.14386744, ?0.19929593, >> ? ? ? ?0.30968735, ?0.3356897 , ?0.36029973, ?0.3641578 , ?0.36779021, >> ? ? ? ?0.36876834, ?0.3688402 , ?0.14077023, ?0.06795562, ?0.06795703, >> ? ? ? ?0.06795724, ?0.06795815, ?0.06795842, ?0.0679585 ]) >> >> errfuncsumsq(ret[0], t, y_data) >> 0.079297668259408899 >> >> errfuncsumsq(ret2[0], t, y_data) >> 0.079298042836826454 >> > > Very nice example ! > However, is it correct, that *within* the given bounds the result is > just a constant ? >>>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2) >>>> ret3 > ([ ?1.00000000e-01 ? 1.00000000e-01 ? 5.00000000e-02 ? 1.39979309e+02 > ?-5.00000000e-02 ? 2.70003604e+02], 0.55408092, {'warnflag': 0, > 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad': > array([-4.41837799, ?1.03037829, ?0. ? ? ? ?, ?0. ? ? ? ?, ?0. > , ?0. ? ? ? ?]), 'funcalls': 2}) >>>> >>>> errfuncsumsq(ret3[0], t, y_data) > 0.55408092 >>>> >>>> >>>> bounds > [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)] >>>> fitfunc(ret3[0],t) > [ 0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 > ?0.1 ?0.1 ?0.1 ?0.1] >>>> > > (Note: ?fitfunc above used wronge 't' instead of 'x', correct is: >>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1) > ) > I am far from an expert on (constrained) optimization, but it looks like given the bounds that the solver doesn't know where to go / the function is flat in the region of the (bounded) starting parameters. from scipy.optimize import approx_fprime ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, bounds=bounds, approx_grad=True, args=(t, y_data), iprint=2) approx_fprime(ret3[0], errfuncsumsq, 1e-8, *(t, y_data)) array([-4.41837799, 1.03037829, 0. , 0. , 0. , 0. ]) approx_fprime(guess, errfuncsumsq, 1e-8, *(t, y_data)) array([ -1.40430234e+01, 8.29307161e+00, 2.48736942e-01, 2.06907380e-02, -1.91057303e+00, -3.60373953e-03]) From jjstickel at vcn.com Fri Oct 15 11:55:42 2010 From: jjstickel at vcn.com (Jonathan Stickel) Date: Fri, 15 Oct 2010 09:55:42 -0600 Subject: [SciPy-User] odeint with saturation? In-Reply-To: References: Message-ID: <4CB8797E.7070202@vcn.com> On 10/12/10 11:00 , scipy-user-request at scipy.org wrote: > Date: Tue, 12 Oct 2010 08:12:41 -0400 > From: Brian Blais > Subject: [SciPy-User] odeint with saturation? > To: SciPy Users List > > Hello, > > I am using odeint to solve some diff eqs, and it works great, but > some of my cases have saturating values. In many cases a value can't > go negative (if it does, it should just be set equal to zero). It > doesn't seem as if odeint can do this, but is there an easy or > preferred way of solving that kind of system? > > > thanks! > > Brian Blais > Brian Maybe you have figured this out on your own already, but I have one suggestion. 
In your function that defines the set of ODEs, do something like:

...
if var < eps and dvar < 0:
    dvar = 0
...
return [array of rate variables including dvar]

Where 'var' is the variable you want to prevent being negative and
'dvar' is the rate for var calculated earlier in your function.

HTH,
Jonathan

P.S. I found that the effort to learn how to use scipy.integrate.ode
was worth the effort. It provides more control and a slightly better
solver than scipy.integrate.odeint. YMMV.

From bblais at bryant.edu  Fri Oct 15 13:58:42 2010
From: bblais at bryant.edu (Brian Blais)
Date: Fri, 15 Oct 2010 13:58:42 -0400
Subject: [SciPy-User] odeint with saturation?
In-Reply-To: <4CB8797E.7070202@vcn.com>
References: <4CB8797E.7070202@vcn.com>
Message-ID: 

On Oct 15, 2010, at 11:55 AM, Jonathan Stickel wrote:

>> some of my cases have saturating values. In many cases a value can't
>> go negative (if it does, it should just be set equal to zero). It
>
> ...
> if var < eps and dvar < 0:
>     dvar = 0
> ...

thanks! so the obvious works...I feel a little sheepish now... :)

>
> P.S. I found that the effort to learn how to use scipy.integrate.ode
> was worth the effort. It provides more control and a slightly better
> solver than scipy.integrate.odeint. YMMV.

what's the difference? it looks like ode lets you choose the
integrator, and has some fine-grained options dependent on integrator
method. is that right, or is there another advantage to using ode
instead of odeint?

thanks,

bb

--
Brian Blais
bblais at bryant.edu
http://web.bryant.edu/~bblais
http://bblais.blogspot.com/

From jjstickel at vcn.com  Fri Oct 15 15:30:24 2010
From: jjstickel at vcn.com (Jonathan Stickel)
Date: Fri, 15 Oct 2010 13:30:24 -0600
Subject: [SciPy-User] odeint with saturation?
In-Reply-To: 
References: <4CB8797E.7070202@vcn.com>
Message-ID: <4CB8ABD0.4000809@vcn.com>

On 10/15/10 11:58, Brian Blais wrote:
> On Oct 15, 2010, at 11:55 AM, Jonathan Stickel wrote:
>
>>> some of my cases have saturating values. In many cases a value
>>> can't go negative (if it does, it should just be set equal to
>>> zero). It
>>
>> ... if var < eps and dvar < 0: dvar = 0 ...
>
> thanks! so the obvious works...I feel a little sheepish now... :)
>
>>
>> P.S. I found that the effort to learn how to use
>> scipy.integrate.ode was worth the effort. It provides more control
>> and a slightly better solver than scipy.integrate.odeint. YMMV.
>
> what's the difference? it looks like ode lets you choose the
> integrator, and has some fine-grained options dependent on integrator
> method. is that right, or is there another advantage to using ode
> instead of odeint?
>

The help strings indicate that odeint uses LSODA and ode uses VODE.
Based on a little research, it seems that VODE is a slightly newer and
improved algorithm. The primary difference that I have observed is that
ode does a bit better with its automatic time step determination. I
also like the way that ode is typically used, i.e. in a loop. If I use
a while loop, I can stop based on some criteria other than time, e.g. a
variable's value or rate of convergence. I suppose odeint could be used
like this as well (in a loop), but it doesn't seem that it was intended
to be used this way.
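For illustration, here is a minimal sketch of that loop style. A toy
decay equation dy/dt = -k*y stands in for a real model, and the 'vode'
settings are just one reasonable choice:

from scipy.integrate import ode

def rhs(t, y, k):
    # simple exponential decay, dy/dt = -k*y
    return -k * y

r = ode(rhs)
r.set_integrator('vode', method='bdf')
r.set_initial_value([1.0], 0.0)  # y(0) = 1 at t = 0
r.set_f_params(0.5)              # k = 0.5

dt = 0.1
history = []
# stop on a value criterion instead of a fixed end time
while r.successful() and r.y[0] > 1e-3:
    r.integrate(r.t + dt)
    history.append((r.t, r.y[0]))

The stopping test can be any criterion computed from r.y, which is the
flexibility I mean.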
Jonathan From tapadamocho at yahoo.com Fri Oct 15 21:55:23 2010 From: tapadamocho at yahoo.com (Alex Luso) Date: Fri, 15 Oct 2010 18:55:23 -0700 (PDT) Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: References: <81942.22876.qm@web52503.mail.re2.yahoo.com> Message-ID: <188679.22205.qm@web52502.mail.re2.yahoo.com> Thanks, that worked. Cheers, Alex ________________________________ From: Ralf Gommers To: SciPy Users List Sent: Fri, October 15, 2010 11:32:37 AM Subject: Re: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 On Fri, Oct 15, 2010 at 11:25 PM, Alex Luso wrote: > Hi, > > I am trying to install Scipy from source on Python 2.7, Mac OS X 10.6.4 > > Despite following the instructions on the install page (i.e. use gcc++-4.0, >etc, gfortran install), the install fails as follows: > >-- snip -- >many warnings >... >ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 in file >for architecture ppc64 >collect2: ld returned 1 exit status >ld: warning: object file compiled with -mlong-branch which is no longer needed. >To remove this warning, recompile without -mlong-branch: >/usr/local/lib/gcc/powerpc-apple-darwin8/4.2.3/crt3.o >lipo: can't open input file: >/var/folders/4F/4F3fgpofFGSlvDrh+OmSHE+++TI/-Tmp-//cco4e1PE.out (No such file or >directory) >error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -arch x86_64 >-arch ppc64 -Wall -undefined dynamic_lookup -bundle >build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfft.o >build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/drfft.o >build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zrfft.o >build/temp.macosx-10.5-fat3-2.7/scipy/fftpack/src/zfftnd.o >build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/scipy/fftpack/src/dct.o > build/temp.macosx-10.5-fat3-2.7/build/src.macosx-10.5-fat3-2.7/fortranobject.o >-Lbuild/temp.macosx-10.5-fat3-2.7 -ldfftpack -lfftpack -lgfortran -o >build/lib.macosx-10.5-fat3-2.7/scipy/fftpack/_fftpack.so" failed with exit >status 1 >-- snip -- > > As a test, I tried to manually re-run that last offending command (the >gfortran one) after manually removing -arch ppc64, no error is reported. Is that >a clue? > It is, see http://projects.scipy.org/numpy/ticket/1399. The fix at http://github.com/matthew-brett/numpy/compare/numpy:master...farchs-from-c should fix this problem. Cheers, Ralf >Info: >Mac OS 10.6.4 >I am using Python binary from python.org (the 64-bit version of it): >$ python --version >Python 2.7 > >$ gcc --version >i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664) > >$ gfortran --version >GNU Fortran (GCC) 4.2.3 > > > > >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nico.schloemer at gmail.com Sat Oct 16 07:05:28 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Sat, 16 Oct 2010 13:05:28 +0200 Subject: [SciPy-User] Numpy array: row/column scale? Message-ID: Hi all, I've got this (100,3) array X and another array Y of length 3 , and I'd like to do something like for k in xrange(3): X[:,k] *= Y[k] This seems little efficient, and I'm sure there's a more BLASified way to do so. Right? 
Cheers, Nico From nico.schloemer at gmail.com Sat Oct 16 07:11:36 2010 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Sat, 16 Oct 2010 13:11:36 +0200 Subject: [SciPy-User] Numpy array: row/column scale? In-Reply-To: References: Message-ID: HA! X *= Y I love Numpy. <3 On Sat, Oct 16, 2010 at 1:05 PM, Nico Schl?mer wrote: > Hi all, > > I've got this (100,3) array X and another array Y of length 3 , and > I'd like to do something like > > for k in xrange(3): > ? ?X[:,k] *= Y[k] > > This seems little efficient, and I'm sure there's a more BLASified way > to do so. Right? > > Cheers, > Nico > From issa at aims.ac.za Sat Oct 16 13:20:00 2010 From: issa at aims.ac.za (Issa Karambal) Date: Sat, 16 Oct 2010 19:20:00 +0200 (SAST) Subject: [SciPy-User] Matrix function evaluation on a grid Message-ID: <43529.92.236.31.13.1287249600.squirrel@webmail.aims.ac.za> Hi all I would like to find out if there is an efficient way to evaluation a matrix function such that it returns a matrix of size n*m where n is the size of the matrix function and m the size of the grid. example if you have a function f=f(x,y) and evaluate it on a grid of size m, it returns an array of size m with entries f(x_i, y_j). But in my case I have a matrix function M = M(x, y) and I would like to evaluate on a grid of size m and the output must be an array of size n*m with entries M(x_i, y_j). So far I've been using the map command to obtain array of submatrices and bsr_matrix (scipy.sparse) to obtain my matrix of size m*n. But I am sure there is more efficient way to do that. Thanks From Torkild.Tveraa at nina.no Thu Oct 14 18:39:11 2010 From: Torkild.Tveraa at nina.no (Tveraa, Torkild) Date: Thu, 14 Oct 2010 22:39:11 +0000 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: <1C2E7B0273BE4B4BB75944275286AC1A25D3D3@NINSRV05.nina.no> -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Skipper Seabold Sent: 14. oktober 2010 23:33 To: SciPy Users List Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: > On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >> Dear All, >> >> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >> >> ? ? ? ?Cheers, >> ? ? ? ?Torkild >> >> ------------------------------------------------------- >> import numpy >> import pylab >> from scipy import * >> from scipy import optimize >> >> ## This is y-data: >> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >> >> ## This is x-data: >> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >> >> ## This is the equation: >> fitfunc = lambda p, x: ? 
?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >> >> ## >> errfunc = lambda p, x, y: fitfunc(p,x) -y >> >> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >> >> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >> >> ## This seems to work ok: >> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >> print 'Estimates from leastsq \n', p2,success >> >> >> ## But this does not: >> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) > > The minimization routines, I believe, in fmin expect a function that > maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want > to minimize the sum of squared errors, optimze.leastsq assumes this. > So just define one more function that sums the squared errors and > minimize it > > errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) > > Now, run it without bounds to make sure we get the same thing > > boundsnone = [(None,None)]*6 > > Notice that you also have to tell fmin_l_bfgs_b to approximate the > gradient or else it assumes that your objective function also returns > its gradient > > best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, > approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) > > p2 > array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, > ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) > > best > array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, > ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) > I just realized that these don't come up with the same thing. I don't have an answer for why yet. Skipper _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user Thanks a lot for your help and insight Skipper. Could it be that the difference between the two simply arise because it is a bit ambiguous to estimate 6 parameters based on just 21 observations? At least that is my gut feeling :-) The data are a bit noisy (see attached figure) so the results might vary accordingly? Thanks, Torkild Estimates from leastsq: [ 6.79548906e-02 3.68922902e-01 7.55558888e-02 1.41378372e+02 2.25898321e+00 2.70494848e+02] Estimates from fmin_l_bfgs_b (without bounds): [ -2.24017466e-01 1.51747173e+00 1.62043085e-02 1.20009182e+02 1.27344545e-02 2.10006033e+02] Estimates from fmin_l_bfgs_b (with bounds): (if you try this please note that bounds for p[4] should be positive) [ 7.56582544e-02 3.72058134e-01 7.57033694e-02 1.42823970e+02 7.99990970e-01 2.50821346e+02] -------------- next part -------------- A non-text attachment was scrubbed... Name: plot_of_data.png Type: image/png Size: 15343 bytes Desc: plot_of_data.png URL: From Torkild.Tveraa at nina.no Fri Oct 15 05:05:05 2010 From: Torkild.Tveraa at nina.no (Tveraa, Torkild) Date: Fri, 15 Oct 2010 09:05:05 +0000 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: <1C2E7B0273BE4B4BB75944275286AC1A25D480@NINSRV05.nina.no> -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Sebastian Haase Sent: 15. 
oktober 2010 10:31 To: SciPy Users List Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b On Thu, Oct 14, 2010 at 11:58 PM, Skipper Seabold wrote: > On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold wrote: >> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: >>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >>>> Dear All, >>>> >>>> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >>>> >>>> ? ? ? ?Cheers, >>>> ? ? ? ?Torkild >>>> >>>> ------------------------------------------------------- >>>> import numpy >>>> import pylab >>>> from scipy import * >>>> from scipy import optimize >>>> >>>> ## This is y-data: >>>> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >>>> >>>> ## This is x-data: >>>> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >>>> >>>> ## This is the equation: >>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >>>> >>>> ## >>>> errfunc = lambda p, x, y: fitfunc(p,x) -y >>>> >>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >>>> >>>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >>>> >>>> ## This seems to work ok: >>>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >>>> print 'Estimates from leastsq \n', p2,success >>>> >>>> >>>> ## But this does not: >>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) >>> >>> The minimization routines, I believe, in fmin expect a function that >>> maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want >>> to minimize the sum of squared errors, optimze.leastsq assumes this. >>> So just define one more function that sums the squared errors and >>> minimize it >>> >>> errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) >>> >>> Now, run it without bounds to make sure we get the same thing >>> >>> boundsnone = [(None,None)]*6 >>> >>> Notice that you also have to tell fmin_l_bfgs_b to approximate the >>> gradient or else it assumes that your objective function also returns >>> its gradient >>> >>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, >>> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) >>> >>> p2 >>> array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, >>> ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) >>> >>> best >>> array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, >>> ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) >>> >> >> I just realized that these don't come up with the same thing. ?I don't >> have an answer for why yet. >> >> Skipper >> > > Oh, > > ret = optimize.leastsq(errfunc, guess, args=(t,y_data)) > > ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, > bounds=boundsnone, args=(t, y_data), iprint=2) > > fitfunc(ret[0],t) > array([ 0.0690421 , ?0.0731951 , ?0.08481868, ?0.14388978, ?0.199337 ?, > ? ? ? ?0.30971974, ?0.33570587, ?0.3602918 , ?0.36414477, ?0.36777158, > ? ? ? 
?0.36874788, ?0.36881958, ?0.14080121, ?0.06794499, ?0.06795339, > ? ? ? ?0.0679536 , ?0.0679545 , ?0.06795477, ?0.06795485]) > > fitfunc(ret2[0],t) > array([ 0.06904625, ?0.07319943, ?0.0848205 , ?0.14386744, ?0.19929593, > ? ? ? ?0.30968735, ?0.3356897 , ?0.36029973, ?0.3641578 , ?0.36779021, > ? ? ? ?0.36876834, ?0.3688402 , ?0.14077023, ?0.06795562, ?0.06795703, > ? ? ? ?0.06795724, ?0.06795815, ?0.06795842, ?0.0679585 ]) > > errfuncsumsq(ret[0], t, y_data) > 0.079297668259408899 > > errfuncsumsq(ret2[0], t, y_data) > 0.079298042836826454 > Very nice example ! However, is it correct, that *within* the given bounds the result is just a constant ? >>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2) >>> ret3 ([ 1.00000000e-01 1.00000000e-01 5.00000000e-02 1.39979309e+02 -5.00000000e-02 2.70003604e+02], 0.55408092, {'warnflag': 0, 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad': array([-4.41837799, 1.03037829, 0. , 0. , 0. , 0. ]), 'funcalls': 2}) >>> >>> errfuncsumsq(ret3[0], t, y_data) 0.55408092 >>> >>> >>> bounds [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)] >>> fitfunc(ret3[0],t) [ 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1] >>> (Note: fitfunc above used wronge 't' instead of 'x', correct is: >>> fitfunc = lambda p, x: p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1) ) Thanks, Sebastian Haase _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user Jepp thanks. - For me it was a very, very helpful lesson. I realize that equation is here somewhat taken out of the air, sorry. The equation is slightly modified from Beck et al 2006. "Improved monitoring of vegetation dynamics at very high latitudes: A new method using MODIS NDVI. Remote Sensing of Environment 100:321-334". So the first parameter (p[0]) estimates the minimum NDVI/EVI. Parameter 2 estimates maximum NDVI. Parameter 3 & 5 estimates the rate of increase in EVI in spring and the rate of decrease in NDVI in fall. Parameter 4 & 6 estimates the inflection points in spring and fall (cf. Fig 3 in Beck et al). Thanks to you both, Torkild From Torkild.Tveraa at nina.no Fri Oct 15 19:45:17 2010 From: Torkild.Tveraa at nina.no (Tveraa, Torkild) Date: Fri, 15 Oct 2010 23:45:17 +0000 Subject: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b In-Reply-To: References: <1DF088E3-790C-4382-A53D-5F4313CB5B70@bryant.edu> <1C2E7B0273BE4B4BB75944275286AC1A25CF0C@NINSRV05.nina.no> Message-ID: <1C2E7B0273BE4B4BB75944275286AC1A25D605@NINSRV05.nina.no> -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Skipper Seabold Sent: 15. oktober 2010 17:54 To: SciPy Users List Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b On Fri, Oct 15, 2010 at 4:31 AM, Sebastian Haase wrote: > On Thu, Oct 14, 2010 at 11:58 PM, Skipper Seabold wrote: >> On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold wrote: >>> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold wrote: >>>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild wrote: >>>>> Dear All, >>>>> >>>>> I have been able to use the optimize.leastsq - module to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally be able to put bounds on the parameters. 
If I have understood this correctly this can be done with the optimize.fmin_l_bfgs_b - module, but I am unable to figure out how to do this. Some helps & hints would be most appreciated :-) >>>>> >>>>> ? ? ? ?Cheers, >>>>> ? ? ? ?Torkild >>>>> >>>>> ------------------------------------------------------- >>>>> import numpy >>>>> import pylab >>>>> from scipy import * >>>>> from scipy import optimize >>>>> >>>>> ## This is y-data: >>>>> y_data = (([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])) >>>>> >>>>> ## This is x-data: >>>>> t = (([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])) >>>>> >>>>> ## This is the equation: >>>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1) >>>>> >>>>> ## >>>>> errfunc = lambda p, x, y: fitfunc(p,x) -y >>>>> >>>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270] >>>>> >>>>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ] >>>>> >>>>> ## This seems to work ok: >>>>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0) >>>>> print 'Estimates from leastsq \n', p2,success >>>>> >>>>> >>>>> ## But this does not: >>>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2) >>>> >>>> The minimization routines, I believe, in fmin expect a function that >>>> maps from to a scalar. ?So you need to tell fmin_l_bfgs that you want >>>> to minimize the sum of squared errors, optimze.leastsq assumes this. >>>> So just define one more function that sums the squared errors and >>>> minimize it >>>> >>>> errfuncsumsq = lambda p, x, y: np.sum(errfunc(p,x,y)**2) >>>> >>>> Now, run it without bounds to make sure we get the same thing >>>> >>>> boundsnone = [(None,None)]*6 >>>> >>>> Notice that you also have to tell fmin_l_bfgs_b to approximate the >>>> gradient or else it assumes that your objective function also returns >>>> its gradient >>>> >>>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsum, guess, >>>> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2) >>>> >>>> p2 >>>> array([ ?6.79548883e-02, ? 3.68922503e-01, ? 7.55565728e-02, >>>> ? ? ? ? 1.41378227e+02, ? 2.91307814e+00, ? 2.70608242e+02]) >>>> >>>> best >>>> array([ ?6.79585333e-02, ?-2.33026316e-01, ?-7.55409880e-02, >>>> ? ? ? ? 1.41388265e+02, ?-1.36069434e+00, ? 2.70160779e+02]) >>>> >>> >>> I just realized that these don't come up with the same thing. ?I don't >>> have an answer for why yet. >>> >>> Skipper >>> >> >> Oh, >> >> ret = optimize.leastsq(errfunc, guess, args=(t,y_data)) >> >> ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, >> bounds=boundsnone, args=(t, y_data), iprint=2) >> >> fitfunc(ret[0],t) >> array([ 0.0690421 , ?0.0731951 , ?0.08481868, ?0.14388978, ?0.199337 ?, >> ? ? ? ?0.30971974, ?0.33570587, ?0.3602918 , ?0.36414477, ?0.36777158, >> ? ? ? ?0.36874788, ?0.36881958, ?0.14080121, ?0.06794499, ?0.06795339, >> ? ? ? ?0.0679536 , ?0.0679545 , ?0.06795477, ?0.06795485]) >> >> fitfunc(ret2[0],t) >> array([ 0.06904625, ?0.07319943, ?0.0848205 , ?0.14386744, ?0.19929593, >> ? ? ? ?0.30968735, ?0.3356897 , ?0.36029973, ?0.3641578 , ?0.36779021, >> ? ? ? ?0.36876834, ?0.3688402 , ?0.14077023, ?0.06795562, ?0.06795703, >> ? ? ? 
?0.06795724, ?0.06795815, ?0.06795842, ?0.0679585 ]) >> >> errfuncsumsq(ret[0], t, y_data) >> 0.079297668259408899 >> >> errfuncsumsq(ret2[0], t, y_data) >> 0.079298042836826454 >> > > Very nice example ! > However, is it correct, that *within* the given bounds the result is > just a constant ? >>>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2) >>>> ret3 > ([ ?1.00000000e-01 ? 1.00000000e-01 ? 5.00000000e-02 ? 1.39979309e+02 > ?-5.00000000e-02 ? 2.70003604e+02], 0.55408092, {'warnflag': 0, > 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad': > array([-4.41837799, ?1.03037829, ?0. ? ? ? ?, ?0. ? ? ? ?, ?0. > , ?0. ? ? ? ?]), 'funcalls': 2}) >>>> >>>> errfuncsumsq(ret3[0], t, y_data) > 0.55408092 >>>> >>>> >>>> bounds > [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)] >>>> fitfunc(ret3[0],t) > [ 0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 ?0.1 > ?0.1 ?0.1 ?0.1 ?0.1] >>>> > > (Note: ?fitfunc above used wronge 't' instead of 'x', correct is: >>>> fitfunc = lambda p, x: ? ?p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1) > ) > I am far from an expert on (constrained) optimization, but it looks like given the bounds that the solver doesn't know where to go / the function is flat in the region of the (bounded) starting parameters. from scipy.optimize import approx_fprime ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, bounds=bounds, approx_grad=True, args=(t, y_data), iprint=2) approx_fprime(ret3[0], errfuncsumsq, 1e-8, *(t, y_data)) array([-4.41837799, 1.03037829, 0. , 0. , 0. , 0. ]) approx_fprime(guess, errfuncsumsq, 1e-8, *(t, y_data)) array([ -1.40430234e+01, 8.29307161e+00, 2.48736942e-01, 2.06907380e-02, -1.91057303e+00, -3.60373953e-03]) _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user A colleague and R-guru came up with what seems to be a solution: fitfunc = lambda p, x: (p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1)) errfunc = lambda p, y_data: np.sum((y_data-fitfunc(p,x))**2) best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, approx_grad=True, args=(y_data,), iprint=-1) ret = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, approx_grad=True, args=(y_data,), iprint=-1) From ralf.gommers at googlemail.com Sun Oct 17 02:17:53 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 17 Oct 2010 14:17:53 +0800 Subject: [SciPy-User] Problems building SciPy on OS X due to ppc64 issues In-Reply-To: References: Message-ID: On Wed, Oct 13, 2010 at 11:16 PM, Karl-Dieter Crisman wrote: > Hi, sorry for the delay. ?This is on Intel OS X 10.6 (more than one > machine). ?The previous poster knows a lot about build stuff, but > doesn't happen to have access to this type of system. > > We have tried the latest fixes to Numpy which just remove ppc64 (or do > something smarter), and although that still might end up being an > issue, it isn't this problem. ?I think that Robert's is the correct > analysis. ?We are using gfortran-4.2 on Mac OS X 10.6 as an included > binary in Sage to do fortran compiling (which works fine for R and > Numpy, and until recently was fine for Scipy as well). 
?Also, I should > point out that g95 works great for Scipy 0.8 on our OS X 10.4 machines > (we are working on eventually migrating to gfortran, but unfortunately > this is difficult because of Sage's "batteries included" philosophy), > so different fortran compilers shouldn't be the issue per se, but the > extra option appears to be - compiling object code twice, is that > correct? > > I know very little about how compiler options are set, so I don't know > where this would be coming from. ?Where does this "compile options" > line usually come from in a typical Scipy build? ?Is it somewhere in a > makefile or something (again, excuse my ignorance if this makes no > sense, I'm only guessing blindly). It comes from CCompiler_compile() in numpy.distutils.ccompiler.py. On my system it looks like: compile options: '-I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f No idea where the second '-c' comes from. Ralf > > Karl-Dieter Crisman > >> -I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include >> -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o > > Are the multiple "-c" options causing issues? From the build log, it > looks like "-c" is being added explicitly somewhere. > > compile options: > '-I/Applications/sage_builds/numpy/sage-4.6.alpha2/local/lib/python2.6/site-packages/numpy/core/include > -c' > > Exactly where did the gfortran compiler come from? What version is it? > What architecture is the machine doing the build? > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Sun Oct 17 07:35:59 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 17 Oct 2010 19:35:59 +0800 Subject: [SciPy-User] ANN: NumPy 1.5.1 release candidate 1 Message-ID: Hi, I am pleased to announce the availability of the first release candidate of NumPy 1.5.1. This is a bug-fix release with no new features compared to 1.5.0. Binaries, sources and release notes can be found at https://sourceforge.net/projects/numpy/files/. A note on the available binaries for OS X: these are known to not work on Intel-based OS X 10.5. We hope to have that problem fixed within a week. On Windows there are still two known test failures: - #1610, in fromfile (under Python 2.x) - #1633, a failing test for ldexp (under Python 2.5 only) Please report any other issues on the Numpy-discussion mailing list. 
Enjoy, Ralf From benni.buch at gmx.de Sun Oct 17 06:48:24 2010 From: benni.buch at gmx.de (Benjamin Buch) Date: Sun, 17 Oct 2010 10:48:24 +0000 (UTC) Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 References: <81942.22876.qm@web52503.mail.re2.yahoo.com> <188679.22205.qm@web52502.mail.re2.yahoo.com> Message-ID: Hi, I have tried the supplied patches, but now the build process stops with this: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/ Versions/2.7/lib/python2.7/site- packages/numpy/distutils/fcompiler/ __init__.py", line 812, in new_fcompiler load_all_fcompiler_classes() File "/Library/Frameworks/Python.framework/ Versions/2.7/lib/python2.7/site- packages/numpy/distutils/fcompiler/ __init__.py", line 734, in load_all_fcompiler_classes fcompiler_aliases[alias][1].__name__)) ValueError: alias 'g77' defined for both GnuFCompiler and GnuFCompiler So it seems like I would need the g77 compiler instead of gfortran? Or is there a setting on my machine wrong? I'm pretty noob to all this compiling stuff so I don't know what to do... I'm on OSX 10.6.4, use python.org 32-bit python and gfortran 4.2.1. Best, Benjamin From thomas.p.krauss at gmail.com Sun Oct 17 19:27:26 2010 From: thomas.p.krauss at gmail.com (Tom Krauss) Date: Sun, 17 Oct 2010 23:27:26 +0000 (UTC) Subject: [SciPy-User] ANN: upfirdn 0.2.1 Message-ID: ANNOUNCEMENT I am pleased to announce a new release of "upfirdn" - version 0.2.1. This package provides an efficient polyphase FIR resampler object (SWIG-ed C++) and some python wrappers. This release includes C++-only templatized functions contributed originally by Lewis Anderson at USCD. Besides the main project web-site at opensource.motorola.com, the source code is browse-able online through a Google Code site with mercurial repository. https://opensource.motorola.com/sf/projects/upfirdn http://code.google.com/p/upfirdn/ Thanks to Google for providing this hosting service! From ralf.gommers at googlemail.com Mon Oct 18 08:48:38 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 18 Oct 2010 20:48:38 +0800 Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: References: <81942.22876.qm@web52503.mail.re2.yahoo.com> <188679.22205.qm@web52502.mail.re2.yahoo.com> Message-ID: On Sun, Oct 17, 2010 at 6:48 PM, Benjamin Buch wrote: > Hi, > > I have tried the supplied patches, but now the build process stops with this: You don't seem to be replying to another email, so I'm not sure exactly what you did. Can you please give us: 1. exact numpy/scipy versions you used (version or git commit nr) 2. what patches you used 3. the build command you used and the complete build log 4. gfortran details (see below) > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/ > Versions/2.7/lib/python2.7/site- > packages/numpy/distutils/fcompiler/ > __init__.py", line 812, in new_fcompiler > ? ?load_all_fcompiler_classes() > ?File "/Library/Frameworks/Python.framework/ > Versions/2.7/lib/python2.7/site- > packages/numpy/distutils/fcompiler/ > __init__.py", line 734, in load_all_fcompiler_classes > ? ?fcompiler_aliases[alias][1].__name__)) > ValueError: alias 'g77' defined for both > GnuFCompiler and GnuFCompiler > > > So it seems like I would need the g77 compiler instead of gfortran? > Or is there a setting on my machine wrong? > I'm pretty noob to all this compiling stuff so I don't know what to do... > > I'm on OSX 10.6.4, use python.org 32-bit python and gfortran 4.2.1. 
The recommended gfortran is from http://r.research.att.com/tools/, the version should be 4.2.3. Where did you get yours? Cheers, Ralf From vincent at vincentdavis.net Mon Oct 18 09:55:16 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Mon, 18 Oct 2010 07:55:16 -0600 Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.5.1 release candidate 1 In-Reply-To: References: Message-ID: On Sun, Oct 17, 2010 at 5:35 AM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first release > candidate of NumPy 1.5.1. This is a bug-fix release with no new > features compared to 1.5.0. > > Binaries, sources and release notes can be found at > https://sourceforge.net/projects/numpy/files/. > A note on the available binaries for OS X: these are known to not work > on Intel-based OS X 10.5. We hope to have that problem fixed within a > week. > > On Windows there are still two known test failures: > - #1610, in fromfile (under Python 2.x) > - #1633, a failing test for ldexp (under Python 2.5 only) > Please report any other issues on the Numpy-discussion mailing list. Test pass for me. osx 10.6 py27 OK (KNOWNFAIL=4, SKIP=2) Vincent > > Enjoy, > Ralf > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Thanks Vincent Davis 720-301-3003 From benni.buch at gmx.de Mon Oct 18 11:10:11 2010 From: benni.buch at gmx.de (Benjamin Buch) Date: Mon, 18 Oct 2010 17:10:11 +0200 Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: References: <81942.22876.qm@web52503.mail.re2.yahoo.com> <188679.22205.qm@web52502.mail.re2.yahoo.com> Message-ID: <12AD279B-0074-416A-8C13-E3DB4982FD91@gmx.de> Hi Ralf, thanks for your answer! Am 18.10.2010 um 14:48 schrieb Ralf Gommers: > You don't seem to be replying to another email, so I'm not sure > exactly what you did. Can you please give us: I was replying to this email directly on gmane: http://permalink.gmane.org/gmane.comp.python.scientific.user/26559 However, it seems like I somehow messed up the replying process.. As I deleted all I've done yesterday, I gave it a new try today - which yielded different results which I will describe here. > 1. exact numpy/scipy versions you used (version or git commit nr) numpy: >>> numpy.__version__ '1.5.0' scipy: Recent svn checkout: revision 6845. > 2. what patches you used http://projects.scipy.org/numpy/ticket/1399 - this was already patched in my numpy install. http://github.com/matthew-brett/numpy/compare/numpy:master...farchs-from-c > 3. the build command you used and the complete build log It now builds and installs with python setup.py build python setup.py install But I did 'export MACOSX_DEPLOYMENT_TARGET=10.6' before - read it here: http://mail.scipy.org/pipermail/numpy-discussion/2010-August/052227.html This is the only thing I did differently than yesterday (as far as I can remember). However, if I now do >>> import scipy >>> scipy.test() I get some errors. See this gist: http://gist.github.com/632345 > 4. 
gfortran details (see below) > >> >> Traceback (most recent call last): >> >> File "/Library/Frameworks/Python.framework/ >> Versions/2.7/lib/python2.7/site- >> packages/numpy/distutils/fcompiler/ >> __init__.py", line 812, in new_fcompiler >> load_all_fcompiler_classes() >> File "/Library/Frameworks/Python.framework/ >> Versions/2.7/lib/python2.7/site- >> packages/numpy/distutils/fcompiler/ >> __init__.py", line 734, in load_all_fcompiler_classes >> fcompiler_aliases[alias][1].__name__)) >> ValueError: alias 'g77' defined for both >> GnuFCompiler and GnuFCompiler >> >> >> So it seems like I would need the g77 compiler instead of gfortran? >> Or is there a setting on my machine wrong? >> I'm pretty noob to all this compiling stuff so I don't know what to do... >> >> I'm on OSX 10.6.4, use python.org 32-bit python and gfortran 4.2.1. > > The recommended gfortran is from http://r.research.att.com/tools/, the > version should be 4.2.3. Where did you get yours? I installed gfortran via homebrew: http://mxcl.github.com/homebrew/ I think it should be 4.2.3, as 'brew info gfortran' gives me $ brew info gfortran gfortran 4.2.4-5664 http://r.research.att.com/tools/ /usr/local/Cellar/gfortran/4.2.4-5664 (28 files, 83M) But gfortran itself tells me otherwise: $ gfortran --version GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5664) Copyright (C) 2007 Free Software Foundation, Inc. > Cheers, > Ralf Best, Benjamin From macquigg at ece.arizona.edu Mon Oct 18 14:11:53 2010 From: macquigg at ece.arizona.edu (David MacQuigg) Date: Mon, 18 Oct 2010 18:11:53 +0000 (UTC) Subject: [SciPy-User] Avoiding lambda functions Message-ID: I'm working on some Python examples to present to freshman students interested science and engineering. One of the more powerful examples is image processing using FFTs and spatial filters. The examples I have from a graduate class in astronomy use lambda functions in a way which freshmen will find confusing. Here is part of the example code:
from numpy import exp, indices  # numpy package from scipy.org
from pylab import imread        # assumed source of imread; the original excerpt omits this import
img0 = imread('Lena.pgm')    # a 200 by 200 greyscale image
shape = img0.shape           # (200, 200)

def gauss(i,j,sigma,shape):  # a 2D gaussian function
    x = -1.0 + 2.0*i/shape[0]
    y = -1.0 + 2.0*j/shape[1]
    ans = exp(-(x*x+y*y)/(2*sigma*sigma))
    return ans
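
# Note: exp is numpy's elementwise ufunc, so gauss() accepts whole index
# arrays for i and j just as well as scalars -- the code below relies on that.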

def gaussianfilter(sigma,shape):
    iray, jray = indices(shape)     # indices for a 200 x 200 array
    filter = (lambda i,j: gauss(i,j,sigma,shape))(iray, jray)
    return filter

filter = gaussianfilter(0.1,shape)
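
(For context on where such a filter is used -- a minimal sketch of a
frequency-domain application, assuming the usual numpy.fft routines; this step
is not part of the original excerpt:)

from numpy.fft import fft2, ifft2, fftshift, ifftshift
spectrum = fftshift(fft2(img0))       # center the spectrum, matching the filter's central peak
img_smooth = ifft2(ifftshift(spectrum * filter)).real   # low-pass filtered image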

This use of lambda is confusing.  The reason to use lambda syntax is that it
saves having to provide a name for a simple one-line function.  Here, we are
giving the lambda a name "filter", so there is no savings, just convoluted code,
which is contrary to the spirit of Python.
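
(To make the point concrete -- filter_func below is a hypothetical name, used
only to show that a named lambda buys nothing over a plain def:)

filter_func = lambda i, j: gauss(i, j, sigma, shape)   # lambda bound to a name

def filter_func(i, j):                                 # the equivalent plain def
    return gauss(i, j, sigma, shape)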

Let's try to "unconvolute" the gaussianfilter function.

def gaussianfilter01(sigma, shape):
    iray,jray  = indices(shape)
    def filter(i, j):
        return gauss(i,j,sigma,shape)(iray, jray)
    return filter
This doesn't work!! The problem is that the original function returns a numpy
array, and here we get just an ordinary function.  It seems that numpy is doing
something special with the lambda syntax.

How can we do this and keep it simple?  I would really like to avoid lambda
functions entirely, but not if it means we lose the elegance of numpy arrays.

From robert.kern at gmail.com  Mon Oct 18 14:25:11 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Oct 2010 13:25:11 -0500 Subject: [SciPy-User] Avoiding lambda functions In-Reply-To: References: Message-ID: 

On Mon, Oct 18, 2010 at 13:11, David MacQuigg wrote:
> I'm working on some Python examples to present to freshman students interested
> science and engineering.  One of the more powerful examples is image processing
> using FFTs and spatial filters.  The examples I have from a graduate class in
> astronomy use lambda functions in a way which freshmen will find confusing.
>
> Here is part of the example code:
>
> from numpy import exp, indices  # numpy package from scipy.org
> img0 = imread('Lena.pgm')    # a 200 by 200 greyscale image
> shape = img0.shape           # (200, 200)
>
> def gauss(i,j,sigma,shape):  # a 2D gaussian function
>     x = -1.0 + 2.0*i/shape[0]
>     y = -1.0 + 2.0*j/shape[1]
>     ans = exp(-(x*x+y*y)/(2*sigma*sigma))
>     return ans
>
> def gaussianfilter(sigma,shape):
>     iray, jray = indices(shape)     # indices for a 200 x 200 array
>     filter = (lambda i,j: gauss(i,j,sigma,shape))(iray, jray)
>     return filter
>
> filter = gaussianfilter(0.1,shape)
>
> This use of lambda is confusing.  The reason to use lambda syntax is that it
> saves having to provide a name for a simple one-line function.  Here, we are
> giving the lambda a name "filter", so there is no savings, just convoluted code,
> which is contrary to the spirit of Python.
>
> Let's try to "unconvolute" the gaussianfilter function.
>
> def gaussianfilter01(sigma, shape):
>     iray,jray  = indices(shape)
>     def filter(i, j):
>         return gauss(i,j,sigma,shape)(iray, jray)
>     return filter
> 
> This doesn't work!! The problem is that the original function returns a numpy
> array, and here we get just an ordinary function.  It seems that numpy is doing
> something special with the lambda syntax.

Not at all. The first version is calling the lambda inside the gaussianfilter()
function for some weird reason. Check the parentheses. Your version defines the
function (incorrectly, as it happens), but does not call it.

The least-confusing version is this:

def gaussianfilter(sigma, shape):
    iray, jray = indices(shape)
    filter = gauss(iray, jray, sigma, shape)
    return filter

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco

From zachary.pincus at yale.edu  Mon Oct 18 14:35:53 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 18 Oct 2010 14:35:53 -0400 Subject: [SciPy-User] Avoiding lambda functions In-Reply-To: References: Message-ID: 

> from numpy import exp, indices  # numpy package from scipy.org
> img0 = imread('Lena.pgm')    # a 200 by 200 greyscale image
> shape = img0.shape           # (200, 200)
>
> def gauss(i,j,sigma,shape):  # a 2D gaussian function
>     x = -1.0 + 2.0*i/shape[0]
>     y = -1.0 + 2.0*j/shape[1]
>     ans = exp(-(x*x+y*y)/(2*sigma*sigma))
>     return ans
>
> def gaussianfilter(sigma,shape):
>     iray, jray = indices(shape)     # indices for a 200 x 200 array
>     filter = (lambda i,j: gauss(i,j,sigma,shape))(iray, jray)
>     return filter
>
> filter = gaussianfilter(0.1,shape)

Here you are constructing a lambda function and immediately calling it, storing
the result (a numpy array) in the variable 'filter'. Let's take that line apart
a little. The below is equivalent:

temp_function = lambda i, j: gauss(i, j, sigma, shape)
filter = temp_function(iray, jray)

Which, as you can see now, is a little silly, as it is EXACTLY equivalent to
the following:

filter = gauss(iray, jray, sigma, shape)

> def gaussianfilter01(sigma, shape):
>     iray,jray  = indices(shape)
>     def filter(i, j):
>         return gauss(i,j,sigma,shape)(iray, jray)
>     return filter

> This doesn't work!!

Yes, this will not work for a variety of reasons. It returns a function, as you
have noted. Moreover, the function is broken. If I do the following:

filter = gaussianfilter01(sigma, shape)

then filter is a 2-parameter function that I can call as:

filter(i, j)

which will do two operations, that decompose as follows:

temp_result = gauss(i, j, sigma, shape)

(using the i and j passed to the filter function, and the "stored" sigma and
shape value that was passed to gaussianfilter01.) then it will attempt to call,
as a function, temp_result as follows:

temp_result(iray, jray)

Which might work if temp_result (what gets returned from gauss) is a function,
but of course it's not, it's a numpy array.

If you wanted to de-lambdafy the above code, this would be exactly equivalent:

def gaussianfilter01(sigma, shape):
    iray, jray = indices(shape)
    def temp_function(i, j):
        return gauss(i,j,sigma,shape)
    filter = temp_function(iray, jray)
    return filter

Which is still silly compared to just calling gauss() directly.

Meta-lesson here: when confronted with something that doesn't work as expected
in python, it is a good idea to pull apart compound statements in the
interactive python interpreter to make sure that the intermediate results are
what you are expecting.
Zach From stefan at sun.ac.za Mon Oct 18 16:47:36 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 18 Oct 2010 22:47:36 +0200 Subject: [SciPy-User] Avoiding lambda functions In-Reply-To: References: Message-ID: Hi David On Mon, Oct 18, 2010 at 8:11 PM, David MacQuigg wrote: > I'm working on some Python examples to present to freshman students interested > science and engineering. ?One of the more powerful examples is image processing > using FFTs and spatial filters. ?The examples I have from a graduate class in > astronomy use lambda functions in a way which freshmen will find confusing. You may also want to check out the py4science examples, hosted in the matplotlib repository at http://matplotlib.sf.net. Those are the examples used by Fernando Perez and John Hunter in their workshops. Regards St?fan From macquigg at ece.arizona.edu Mon Oct 18 20:08:46 2010 From: macquigg at ece.arizona.edu (David MacQuigg) Date: Tue, 19 Oct 2010 00:08:46 +0000 (UTC) Subject: [SciPy-User] Avoiding lambda functions References: Message-ID: > Reply to all. Many thanks to Robert and Zachary. I see now we don't even need the gaussianfilter function. We can just use gauss directly. Code modifications below. Thanks also to Stefan. I'll check out py4science. > from numpy import exp, indices # numpy package from scipy.org > img0 = imread('Lena.pgm') # a 200 by 200 greyscale image > shape = img0.shape # (200, 200) > > def gauss(i,j,sigma,shape): # a 2D gaussian function > x = -1.0 + 2.0*i/shape[0] > y = -1.0 + 2.0*j/shape[1] > return exp(-(x*x+y*y)/(2*sigma*sigma)) iray, jray = indices(shape) filter = gauss(iray, jray, 0.1, shape) From j.reid at mail.cryst.bbk.ac.uk Tue Oct 19 07:13:14 2010 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Tue, 19 Oct 2010 12:13:14 +0100 Subject: [SciPy-User] _construct_default_doc extradoc=None bug in latest svn of scipy.stats.distributions Message-ID: Hi, Using the latest scipy svn version (6845) I had the following ipython session: In [1]: import scipy.stats.distributions In [2]: scipy.stats.distributions.gamma_gen( name = 'gamma' ) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /home/john/local/src/ in () /usr/local/lib/python2.6/dist-packages/scipy/stats/distributions.pyc in __init__(self, momtype, a, b, xa, xb, xtol, badvalue, name, longname, shapes, extradoc) 963 # generate docstring for subclass instances 964 if self.__doc__ is None: --> 965 self._construct_default_doc(longname=longname, extradoc=extradoc) 966 else: 967 self._construct_doc() /usr/local/lib/python2.6/dist-packages/scipy/stats/distributions.pyc in _construct_default_doc(self, longname, extradoc) 972 def _construct_default_doc(self, longname=None, extradoc=None): 973 """Construct instance docstring from the default template.""" --> 974 if extradoc.startswith('\n\n'): 975 extradoc = extradoc[2:] 976 self.__doc__ = ''.join(['%s continuous random variable.'%longname, AttributeError: 'NoneType' object has no attribute 'startswith' The following patch fixes it: Index: scipy/stats/distributions.py =================================================================== --- scipy/stats/distributions.py (revision 6845) +++ scipy/stats/distributions.py (working copy) @@ -971,6 +971,8 @@ def _construct_default_doc(self, longname=None, extradoc=None): """Construct instance docstring from the default template.""" + if None == extradoc: + extradoc = "" if extradoc.startswith('\n\n'): extradoc = extradoc[2:] 
self.__doc__ = ''.join(['%s continuous random variable.'%longname, Is this the best forum to submit a patch to? Thanks, John. From seb.haase at gmail.com Tue Oct 19 07:31:39 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 19 Oct 2010 13:31:39 +0200 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> Message-ID: Hi Zach, I finally got around to testing your image.py module. Someone send me a 16-bit TIFF file made from Metamorph that PIL did not open correctly. Here are some comment and questions: 1) I would rename it to freeImage.py - or alike 2) Compiling freeimage from source went really well. No configure, simple a "make" and it ran through. Except in my version freeimage-3.14.1 I had to add "#include " to file ImathMatrix.h so that it would accept the use of memset(...) 3) you are changing the FreeImage convention of 0,0 being bottom-left to 0,0 being top-left -- to make it more like "all other" image software. I actually come from UCSF where we used the MRC format having 0,0 being left-bottom. How strong do you feel about this ? 4) For the return numpy-array you explicitly call copy() - I guess this is needed because one has to call FreeImage's unload() , right ? 5) you define the array strides to have the pitch (that is, the line width) last -- this is somewhat against the C-convention of having the fast axis last. Obviously you did this, to get arrays with indices i,j having x,y order rather than y,x -- how strong do you feel about this ? I accepted at some point that the fast (x) coordinate would be last and thus always write coordinates as y,x. (In 3D this becomes semi-naturally z,y,x rather then z,x,y - BTW - ) Do you happen to have os-x binaries of libfreeimage.dylib ? I'm looking forward to not having to use PIL anymore - thanks for your work. - Sebastian From josef.pktd at gmail.com Tue Oct 19 09:55:18 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Oct 2010 09:55:18 -0400 Subject: [SciPy-User] _construct_default_doc extradoc=None bug in latest svn of scipy.stats.distributions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 7:13 AM, John Reid wrote: > Hi, > > Using the latest scipy svn version (6845) I had the following ipython > session: > > In [1]: import scipy.stats.distributions > > In [2]: scipy.stats.distributions.gamma_gen( name = 'gamma' ) > --------------------------------------------------------------------------- > AttributeError ? ? ? ? ? ? ? ? ? ? ? ? ? ?Traceback (most recent call last) > > /home/john/local/src/ in () > > /usr/local/lib/python2.6/dist-packages/scipy/stats/distributions.pyc in > __init__(self, momtype, a, b, xa, xb, xtol, badvalue, name, longname, > shapes, extradoc) > ? ? 963 ? ? ? ? # generate docstring for subclass instances > > ? ? 964 ? ? ? ? if self.__doc__ is None: > --> 965 ? ? ? ? ? ? self._construct_default_doc(longname=longname, > extradoc=extradoc) > ? ? 966 ? ? ? ? else: > ? ? 967 ? ? ? ? ? ? self._construct_doc() > > /usr/local/lib/python2.6/dist-packages/scipy/stats/distributions.pyc in > _construct_default_doc(self, longname, extradoc) > ? ? 972 ? ? def _construct_default_doc(self, longname=None, extradoc=None): > ? ? 973 ? ? ? ? """Construct instance docstring from the default > template.""" > --> 974 ? ? ? ? if extradoc.startswith('\n\n'): > ? ? 975 ? ? ? ? ? ? extradoc = extradoc[2:] > ? ? 976 ? ? ? ? 
self.__doc__ = ''.join(['%s continuous random > variable.'%longname, > > AttributeError: 'NoneType' object has no attribute 'startswith' > > > The following patch fixes it: > > Index: scipy/stats/distributions.py > =================================================================== > --- scipy/stats/distributions.py ? ? ? ?(revision 6845) > +++ scipy/stats/distributions.py ? ? ? ?(working copy) > @@ -971,6 +971,8 @@ > > ? ? ?def _construct_default_doc(self, longname=None, extradoc=None): > ? ? ? ? ?"""Construct instance docstring from the default template.""" > + ? ? ? ?if None == extradoc: > + ? ? ? ? ? ?extradoc = "" > ? ? ? ? ?if extradoc.startswith('\n\n'): > ? ? ? ? ? ? ?extradoc = extradoc[2:] > ? ? ? ? ?self.__doc__ = ''.join(['%s continuous random variable.'%longname, > > > > Is this the best forum to submit a patch to? It's good to report it here, but you could also open a ticket at http://projects.scipy.org/scipy It's clearly a bug, I think, it's also be possible to do def _construct_default_doc(self, longname=None, extradoc='') but I haven't looked at the source yet. Thanks for catching and reporting this. I never tried an example that didn't have extradoc defined. Josef > > Thanks, > John. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From j.reid at mail.cryst.bbk.ac.uk Tue Oct 19 10:10:53 2010 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Tue, 19 Oct 2010 15:10:53 +0100 Subject: [SciPy-User] _construct_default_doc extradoc=None bug in latest svn of scipy.stats.distributions In-Reply-To: References: Message-ID: On 19/10/10 14:55, josef.pktd at gmail.com wrote: > > It's good to report it here, but you could also open a ticket at > http://projects.scipy.org/scipy http://projects.scipy.org/scipy/ticket/1316 > > It's clearly a bug, I think, it's also be possible to do > > def _construct_default_doc(self, longname=None, extradoc='') > > but I haven't looked at the source yet. Off the top of my head I thought it was preferred to have default arguments be None and then reassign in the function body although I couldn't tell you why. From ralf.gommers at googlemail.com Tue Oct 19 10:16:57 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 19 Oct 2010 22:16:57 +0800 Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: <12AD279B-0074-416A-8C13-E3DB4982FD91@gmx.de> References: <81942.22876.qm@web52503.mail.re2.yahoo.com> <188679.22205.qm@web52502.mail.re2.yahoo.com> <12AD279B-0074-416A-8C13-E3DB4982FD91@gmx.de> Message-ID: On Mon, Oct 18, 2010 at 11:10 PM, Benjamin Buch wrote: > Hi Ralf, > > thanks for your answer! > > Am 18.10.2010 um 14:48 schrieb Ralf Gommers: > >> You don't seem to be replying to another email, so I'm not sure >> exactly what you did. Can you please give us: > > I was replying to this email directly on gmane: > http://permalink.gmane.org/gmane.comp.python.scientific.user/26559 > However, it seems like I somehow messed up the replying process.. > > As I deleted all I've done yesterday, I gave it a new try today - which yielded different results > which I will describe here. No problem. There is indeed still a problem, #1399 is reopened. You can do two things. One is to wait a few days. The other one is to use numpy 1.5.0 (without the patches you mentioned), and remove "ppc64" from line 257 of numpy/distutils/fcompiler/gnu.py by hand. Sorry for the trouble. > >> 1. 
exact numpy/scipy versions you used (version or git commit nr) > > numpy: >>>> numpy.__version__ > '1.5.0' > > scipy: > Recent svn checkout: revision 6845. > >> 2. what patches you used > > http://projects.scipy.org/numpy/ticket/1399 - this was already patched in my numpy install. > http://github.com/matthew-brett/numpy/compare/numpy:master...farchs-from-c > >> 3. the build command you used and the complete build log > > It now builds and installs with > python setup.py build > python setup.py install > > But I did 'export MACOSX_DEPLOYMENT_TARGET=10.6' before - read it here: > http://mail.scipy.org/pipermail/numpy-discussion/2010-August/052227.html > This is the only thing I did differently than yesterday (as far as I can remember). > > However, if I now do >>>> import scipy >>>> scipy.test() > > I get some errors. > See this gist: > http://gist.github.com/632345 > > >> 4. gfortran details (see below) >> >>> >>> Traceback (most recent call last): >>> >>> File "/Library/Frameworks/Python.framework/ >>> Versions/2.7/lib/python2.7/site- >>> packages/numpy/distutils/fcompiler/ >>> __init__.py", line 812, in new_fcompiler >>> ? ?load_all_fcompiler_classes() >>> ?File "/Library/Frameworks/Python.framework/ >>> Versions/2.7/lib/python2.7/site- >>> packages/numpy/distutils/fcompiler/ >>> __init__.py", line 734, in load_all_fcompiler_classes >>> ? ?fcompiler_aliases[alias][1].__name__)) >>> ValueError: alias 'g77' defined for both >>> GnuFCompiler and GnuFCompiler >>> >>> >>> So it seems like I would need the g77 compiler instead of gfortran? >>> Or is there a setting on my machine wrong? >>> I'm pretty noob to all this compiling stuff so I don't know what to do... >>> >>> I'm on OSX 10.6.4, use python.org 32-bit python and gfortran 4.2.1. >> >> The recommended gfortran is from http://r.research.att.com/tools/, the >> version should be 4.2.3. Where did you get yours? > > I installed gfortran via homebrew: > http://mxcl.github.com/homebrew/ > > I think it should be 4.2.3, as 'brew info gfortran' gives me Never seen or heard of brew. To be sure I'd just grab the recommended binary installer anway, but it is not the issue you're seeing. Cheers, Ralf > > $ brew info gfortran > gfortran 4.2.4-5664 > http://r.research.att.com/tools/ > /usr/local/Cellar/gfortran/4.2.4-5664 (28 files, 83M) > > But gfortran itself tells me otherwise: > $ gfortran --version > GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5664) > Copyright (C) 2007 Free Software Foundation, Inc. > >> Cheers, >> Ralf > > Best, > Benjamin > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Oct 19 10:20:59 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Oct 2010 10:20:59 -0400 Subject: [SciPy-User] _construct_default_doc extradoc=None bug in latest svn of scipy.stats.distributions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 10:10 AM, John Reid wrote: > On 19/10/10 14:55, josef.pktd at gmail.com wrote: >> >> It's good to report it here, but you could also open a ticket at >> http://projects.scipy.org/scipy > http://projects.scipy.org/scipy/ticket/1316 > >> >> It's clearly a bug, I think, it's also be possible to do >> >> def _construct_default_doc(self, longname=None, extradoc='') >> >> but I haven't looked at the source yet. 
> > Off the top of my head I thought it was preferred to have default > arguments be None and then reassign in the function body although I > couldn't tell you why. There are possible problems or one has to be more careful with default arguments that are mutable, strings and numbers are safe. Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cr.anil at gmail.com Tue Oct 19 10:31:12 2010 From: cr.anil at gmail.com (Anil) Date: Tue, 19 Oct 2010 20:01:12 +0530 Subject: [SciPy-User] Avoiding lambda functions In-Reply-To: References: Message-ID: <1287498672.1705.1.camel@anil-laptop> hi david, There is actually a gaussian filter function available in SciPy's ndimage module Regards Anil On Mon, 2010-10-18 at 18:11 +0000, David MacQuigg wrote: > I'm working on some Python examples to present to freshman students interested > science and engineering. One of the more powerful examples is image processing > using FFTs and spatial filters. The examples I have from a graduate class in > astronomy use lambda functions in a way which freshmen will find confusing. > > Here is part of the example code: >
> from numpy import exp, indices  # numpy package from scipy.org
> img0 = imread('Lena.pgm')    # a 200 by 200 greyscale image
> shape = img0.shape           # (200, 200)
> 
> def gauss(i,j,sigma,shape):  # a 2D gaussian function
>     x = -1.0 + 2.0*i/shape[0]
>     y = -1.0 + 2.0*j/shape[1]
>     ans = exp(-(x*x+y*y)/(2*sigma*sigma))
>     return ans
> 
> def gaussianfilter(sigma,shape):
>     iray, jray = indices(shape)     # indices for a 200 x 200 array
>     filter = (lambda i,j: gauss(i,j,sigma,shape))(iray, jray)
>     return filter
> 
> filter = gaussianfilter(0.1,shape)
> 
> This use of lambda is confusing.  The reason to use lambda syntax is that it
> saves having to provide a name for a simple one-line function.  Here, we are
> giving the lambda a name "filter", so there is no savings, just convoluted code,
> which is contrary to the spirit of Python.
> 
> Let's try to "unconvolute" the gaussianfilter function.
> 
> def gaussianfilter01(sigma, shape):
>     iray,jray  = indices(shape)
>     def filter(i, j):
>         return gauss(i,j,sigma,shape)(iray, jray)
>     return filter
> 
> This doesn't work!! The problem is that the original function returns a numpy > array, and here we get just an ordinary function. It seems that numpy is doing > something special with the lambda syntax. > > How can we do this and keep it simple. I would really like to avoid lambda > functions entirely, but not if it means we lose the elegance of numpy arrays. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From benni.buch at gmx.de Tue Oct 19 11:07:30 2010 From: benni.buch at gmx.de (Benjamin Buch) Date: Tue, 19 Oct 2010 17:07:30 +0200 Subject: [SciPy-User] Problem installing scipy on 10.6.4, python 2.7 In-Reply-To: References: <81942.22876.qm@web52503.mail.re2.yahoo.com> <188679.22205.qm@web52502.mail.re2.yahoo.com> <12AD279B-0074-416A-8C13-E3DB4982FD91@gmx.de> Message-ID: <7956F31C-8F0B-443F-9A76-FC7C81A24BB6@gmx.de> Hi Ralf, thanks again! Am 19.10.2010 um 16:16 schrieb Ralf Gommers: >> As I deleted all I've done yesterday, I gave it a new try today - which yielded different results >> which I will describe here. > > No problem. There is indeed still a problem, #1399 is reopened. You > can do two things. One is to wait a few days. The other one is to use > numpy 1.5.0 (without the patches you mentioned), and remove "ppc64" > from line 257 of numpy/distutils/fcompiler/gnu.py by hand. Sorry for > the trouble. I went with the second option, but scipy still failed the test. So I'll just wait a few days... Best, Benjamin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Tue Oct 19 12:10:07 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 19 Oct 2010 12:10:07 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> Message-ID: <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> Hi Sebastien, Thanks -- I hope this winds up being useful. St?fan tried out this code a while ago and ran into a segfault loading a color jpeg that I was never able to reproduce... perhaps something to do with the fact that his python/FreeImage were 64-bit. Anyhow, beware that caveat. > Here are some comment and questions: > > 1) I would rename it to freeImage.py - or alike Yeah, sure, that's reasonable. Below is a new version of the code that's a package that you import as "FreeImage"... > 2) Compiling freeimage from source went really well. No configure, > simple a "make" and it ran through. Except in my version > freeimage-3.14.1 I had to add "#include " to file > ImathMatrix.h so that it would accept the use of memset(...) Huh... that's odd. Good to know! > 3) you are changing the FreeImage convention of 0,0 being bottom-left > to 0,0 being top-left -- to make it more like "all other" image > software. I actually come from UCSF where we used the MRC format > having 0,0 being left-bottom. How strong do you feel about this ? Pretty much all of the common basic image formats (TIFF, PNG, JPG and the like) have 0,0 as top-left, so I tried to make it so that images loaded would index just the same as they would in any other image viewer like MetaMorph or ImageJ or Photoshop, etc. This explains the 0,0-as-top-left as well as your question 5 below about the striding. 
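(A small numpy illustration of that convention -- a hypothetical array, not
code from image.py:)

import numpy as np
img = np.zeros((640, 480), order='F')  # fortran-order greyscale image: img[x, y]
print img.strides                      # -> (8, 5120); x is the fast axis
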
Also, the image formats I'm most familiar with are by-and-large stored on-disk
in scanline order from top to bottom, then additional image planes (also in
scanline order). This corresponds to fortran-order (fast axis first) xy[zt] in
memory, with 0,0 at the top-left, which is also what most libraries expect when
passed a memory region that's supposed to be an image (e.g. OpenGL, other UI
libs, etc.), so I tried to make the output conform to the most usual "expected"
memory pattern.

One strange thing crops up with color images. Usually RGB(A) pixels are stored
next to each other, so the fortran-order memory format is cxyzt (again, this is
what external libraries expect). But a quick .view() operation with a
structured RGB(A) dtype makes this work more "naturally" (if desired).

> 4) For the return numpy-array you explicitly call copy() -  I guess
> this is needed because one has to call FreeImage's unload() , right ?

Yes, this is the case.

I think there's some way in the numpy C-api (or perhaps the new py3 buffer
api?) to specify a function pointer to be called when an array needs to be
freed (if the array is constructed around some external chunk of memory). So if
some numpy hacker who knows the C api better than I could comment on how one
might do this, it could be arranged such that FreeImage_unload is called by
numpy, so that we don't need to copy the array and then manually unload the
image.

> 5) you define the array strides to have the pitch (that is, the line
> width) last -- this is somewhat against the C-convention of having the
> fast axis last. Obviously you did this, to get arrays with indices i,j
> having x,y order rather than y,x -- how strong do you feel about this
> ? I accepted at some point that the fast (x) coordinate would be last
> and thus always write coordinates as y,x.
> (In 3D this becomes semi-naturally z,y,x rather than z,x,y - BTW - )

See above... given that images are usually fortran-order on disk and that most
external libraries expect them to be that way too, I think that this is the
most reasonable. It's just that PIL has been so broken for so long that people
on Python are used to indexing images as i[y,x] instead of i[x,y]. But
fortran-order arrays are just as natural as C-order, and i[x,y,z,t] is also as
natural, if not more so, than i[t,z,y,x].

Again, the code is barely a few hundred lines, so feel free to modify to the
conventions that are most natural for your application! But I think this is the
best general-case approach.

> Do you happen to have os-x binaries of libfreeimage.dylib ?

I do -- 32-bit only though. (Pretty easy to build...) Do you want me to send
the dylib?

Attached is the latest version of my code, which has a few new bug fixes from
the previous version. (It can read palettized grey-scale images, for example.)
Just drop whatever .dylib, .dll, or .so FreeImage shared library you've got
into the directory with image.py and the setup.py script will detect it and
install it alongside the python code. Then just do "import FreeImage" and
you're good to go.

Zach

-------------- next part --------------
A non-text attachment was scrubbed...
Name: FreeImage.zip Type: application/zip Size: 5312 bytes Desc: not available URL: -------------- next part -------------- From warren.weckesser at enthought.com Tue Oct 19 12:36:08 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 19 Oct 2010 11:36:08 -0500 Subject: [SciPy-User] _construct_default_doc extradoc=None bug in latest svn of scipy.stats.distributions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 9:20 AM, wrote: > On Tue, Oct 19, 2010 at 10:10 AM, John Reid > wrote: > > On 19/10/10 14:55, josef.pktd at gmail.com wrote: > >> > >> It's good to report it here, but you could also open a ticket at > >> http://projects.scipy.org/scipy > > http://projects.scipy.org/scipy/ticket/1316 > > > >> > >> It's clearly a bug, I think, it's also be possible to do > >> > >> def _construct_default_doc(self, longname=None, extradoc='') > >> > >> but I haven't looked at the source yet. > > > > Off the top of my head I thought it was preferred to have default > > arguments be None and then reassign in the function body although I > > couldn't tell you why. > > There are possible problems or one has to be more careful with default > arguments that are mutable, strings and numbers are safe. > > Josef > > > Fixed in r6848: http://projects.scipy.org/scipy/changeset/6848 John, thanks for reporting the problem. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Tue Oct 19 13:50:50 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Tue, 19 Oct 2010 12:50:50 -0500 Subject: [SciPy-User] Curve fitting questions Message-ID: Hello, I am doing a few simple tests to see if I could estimate parameters from a function fitting using less number of inputs. Three inputs version works very well. The original version has 5 input points but the difference between 3 and 5 inputs estimations are very small in this case. I am wondering if two inputs case would be improved and if anything could be done to estimate somewhat reasonable parameters (with or without initial parameters provided) using 1 input only. The code is as follows: import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def my_ck(x, a, b): return a*x**b ccn_ss1 = [0.27, 0.34, 0.57] ccn_conc1 = np.array([383.51237409766452, 424.82669523141652, 511.48197391304342]) # works fine with defaults tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) ccn_ss1 = [0.27, 0.34] ccn_conc1 = np.array([383.51237409766452, 424.82669523141652]) # Fails with RuntimeError: Optimal parameters not found: The relative error # between two consecutive iterates is at most 0.000000 # tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) # Trying with the original estimates, ftol should be set to a big number otherwise this one fails as well. # Is there any different optimize function which will auto-discover the # initial parameters instead of supplying them explicitly? tfit2, pcov2 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) ccn_ss1 = 0.27 ccn_conc1 = 383.51237409766452 # One data point estimation fails with IndexError: index out of range for array tfit3, pcov3 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) Thanks. 
-- G?khan From seb.haase at gmail.com Tue Oct 19 13:59:19 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 19 Oct 2010 19:59:19 +0200 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> Message-ID: On Tue, Oct 19, 2010 at 6:10 PM, Zachary Pincus wrote: > Hi Sebastien, > > Thanks -- I hope this winds up being useful. St?fan tried out this code a > while ago and ran into a segfault loading a color jpeg that I was never able > to reproduce... perhaps something to do with the fact that his > python/FreeImage were 64-bit. Anyhow, beware that caveat. > >> Here are some comment and questions: >> >> 1) I would rename it to freeImage.py - or alike > > Yeah, sure, that's reasonable. Below is a new version of the code that's a > package that you import as "FreeImage"... > >> 2) Compiling freeimage from source went really well. No configure, >> simple a "make" and it ran through. Except in my version >> freeimage-3.14.1 I had to add "#include " to file >> ImathMatrix.h so that it would accept the use of memset(...) > > Huh... that's odd. Good to know! > >> 3) you are changing the FreeImage convention of 0,0 being bottom-left >> to 0,0 being top-left -- to make it more like "all other" image >> software. ? I actually come from UCSF where we used the MRC format >> having 0,0 being left-bottom. ?How strong do you feel about this ? > > Pretty much all of the common basic image formats (TIFF, PNG, JPG and the > like) have 0,0 as top-left, so I tried to make it so that images loaded > would index just the same as they would in any other image viewer like > MetaMorph or ImageJ or Photoshop, etc. This explains the 0,0-as-top-left as > well as your question 5 below about the striding. > > Also, the image formats I'm most familiar with are by-and-large stored > on-disk in scanline order from top to bottom, then additional image planes > (also in scanline order). Which corresponds to fortran-order (fast axis > first) xy[zt] in memory, with 0,0 at the top-left. Which is also what most > libraries expect when passed an memory region that's supposed to be an image > (e.g. OpenGL, other UI libs, etc.), so I tried to make the output conform to > the most usual "expected" memory pattern. > > One strange thing crops up with color images. Usually RGB(A) pixels are > stored next to eachother, so the fortran-order memory format is cxyzt > (again, this is what external libraries expect). But a quick .view() > operation with a structured RGB(A) dtype makes this work more "naturally" > (if desired). > >> 4) For the return numpy-array you explicitly call copy() - ?I guess >> this is needed because one has to call FreeImage's unload() , right ? > > Yes, this is the case. > > I think there's some way in the numpy C-api (or perhaps the new py3 buffer > api?) to specify a function pointer to be called when an array needs to be > freed (if the array is constructed around some external chunk of memory). So > if some numpy hacker who knows the C api better than I could comment on how > one might do this, it could be arranged such that FreeImage_unload is called > by numpy, so that we don't need to copy the array and then manually unload > the image. > >> 5) you define the array strides to have the pitch (that is, the line >> width) last -- this is somewhat against the C-convention of having the >> fast axis last. 
Obviously you did this, to get arrays with indices i,j >> having x,y order rather than y,x -- how strong do you feel about this >> ? I accepted at some point that the fast (x) coordinate would be last >> and thus always write coordinates as y,x. >> (In 3D this becomes semi-naturally z,y,x ?rather then z,x,y - BTW - ) > > See above... given that images are usually fortran-order on disk and that > most external libraries expect them to be that way too, I think that this is > the most reasonable. It's just that PIL has been so broken for so long that > people on Python are used to indexing images as i[y,x] instead of i[x,y]. > But fortran-order arrays are just as natural as C-order, and i[x,y,z,t] is > also as natural, if not more so, than i[t,z,y,x]. > > Again, the code is barely a few hundred lines, so feel free to modify to the > conventions that are most natural for your application! But I think this is > the best general-case approach. > >> Do you happen to have os-x binaries of libfreeimage.dylib ? > > I do -- 32-bit only though. (Pretty easy to build...) Do you want me to send > the dylib? > > Attached is the latest version of my code, which has a few new bug fixes > from the previous version. (Can read palletized grey-scale images, for > example.) Just drop whatever .dylib, .dll, or .so FreeImage shared library > you've got into the directory with image.py and the setup.py script will > detect it and install it alongside the python code. Then just do "import > FreeImage" and you're good to go. > > Zach > I really don't want to edit your python file - even though I appreciate the fact that it is quite small. I would rather write smaller wrapper functions to get the memory order I need. Another way would be to add some sort of mem_layout flag to support the four possible permutations c<->fortran and bottom<->top at the lowest level and prevent an extra copy. I had never heard that image data (file formats, OpenGL, etc.) was not ordered in y,x (C-order, i.e. x being fast) memory layout - I though the top vs. bottom origin was the only "uncertainty" - well, there maybe more FORTRAN around than I thought ... I will take a look at your update tomorrow. - Sebastian From josef.pktd at gmail.com Tue Oct 19 14:04:24 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Oct 2010 14:04:24 -0400 Subject: [SciPy-User] Curve fitting questions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 1:50 PM, G?khan Sever wrote: > Hello, > > I am doing a few simple tests to see if I could estimate parameters > from a function fitting using less number of inputs. Three inputs > version works very well. The original version has 5 input points but > the difference between 3 and 5 inputs estimations are very small in > this case. I am wondering if two inputs case would be improved and if > anything could be done to estimate somewhat reasonable parameters > (with or without initial parameters provided) using 1 input only. > > The code is as follows: > > import numpy as np > import matplotlib.pyplot as plt > from scipy.optimize import curve_fit > > def my_ck(x, a, b): > ? ?return a*x**b I think you could take logs and you would have a linear function in param=log(x), and you could use linalg to solve for param, and then transform back exp(param). Or this would give you a starting value if you want the non-linear optimization. 
> > ccn_ss1 = [0.27, 0.34, 0.57] > ccn_conc1 = np.array([383.51237409766452, 424.82669523141652, > 511.48197391304342]) > > # works fine with defaults > tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) > > ccn_ss1 = [0.27, 0.34] > ccn_conc1 = np.array([383.51237409766452, 424.82669523141652]) > > # Fails with RuntimeError: Optimal parameters not found: The relative error > # between two consecutive iterates is at most 0.000000 > # tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) > > # Trying with the original estimates, ftol should be set to a big > number otherwise this one fails as well. > # Is there any different optimize function which will auto-discover the > # initial parameters instead of supplying them explicitly? > tfit2, pcov2 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) Without knowing and using properties of the non-linear function and choosing starting values well enough, there will always be numerical problems like this, especially with very few observations. > > ccn_ss1 = 0.27 > ccn_conc1 = 383.51237409766452 > # One data point estimation fails with IndexError: index out of range for array > tfit3, pcov3 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) If you have one parameter to estimate and only one observations, then you should be able to solve it exactly with one of the solvers/ rootfinders in scipy. optimize. Josef > > Thanks. > > -- > G?khan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From zachary.pincus at yale.edu Tue Oct 19 14:16:02 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 19 Oct 2010 14:16:02 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> Message-ID: <7E7A9A20-D1D1-45E9-A2C0-2F40E58A357B@yale.edu> On Oct 19, 2010, at 1:59 PM, Sebastian Haase wrote: > On Tue, Oct 19, 2010 at 6:10 PM, Zachary Pincus > wrote: >> Hi Sebastien, >> >> Thanks -- I hope this winds up being useful. St?fan tried out this >> code a >> while ago and ran into a segfault loading a color jpeg that I was >> never able >> to reproduce... perhaps something to do with the fact that his >> python/FreeImage were 64-bit. Anyhow, beware that caveat. >> >>> Here are some comment and questions: >>> >>> 1) I would rename it to freeImage.py - or alike >> >> Yeah, sure, that's reasonable. Below is a new version of the code >> that's a >> package that you import as "FreeImage"... >> >>> 2) Compiling freeimage from source went really well. No configure, >>> simple a "make" and it ran through. Except in my version >>> freeimage-3.14.1 I had to add "#include " to file >>> ImathMatrix.h so that it would accept the use of memset(...) >> >> Huh... that's odd. Good to know! >> >>> 3) you are changing the FreeImage convention of 0,0 being bottom- >>> left >>> to 0,0 being top-left -- to make it more like "all other" image >>> software. I actually come from UCSF where we used the MRC format >>> having 0,0 being left-bottom. How strong do you feel about this ? >> >> Pretty much all of the common basic image formats (TIFF, PNG, JPG >> and the >> like) have 0,0 as top-left, so I tried to make it so that images >> loaded >> would index just the same as they would in any other image viewer >> like >> MetaMorph or ImageJ or Photoshop, etc. 
This explains the 0,0-as-top- >> left as >> well as your question 5 below about the striding. >> >> Also, the image formats I'm most familiar with are by-and-large >> stored >> on-disk in scanline order from top to bottom, then additional image >> planes >> (also in scanline order). Which corresponds to fortran-order (fast >> axis >> first) xy[zt] in memory, with 0,0 at the top-left. Which is also >> what most >> libraries expect when passed an memory region that's supposed to be >> an image >> (e.g. OpenGL, other UI libs, etc.), so I tried to make the output >> conform to >> the most usual "expected" memory pattern. >> >> One strange thing crops up with color images. Usually RGB(A) pixels >> are >> stored next to eachother, so the fortran-order memory format is cxyzt >> (again, this is what external libraries expect). But a quick .view() >> operation with a structured RGB(A) dtype makes this work more >> "naturally" >> (if desired). >> >>> 4) For the return numpy-array you explicitly call copy() - I guess >>> this is needed because one has to call FreeImage's unload() , >>> right ? >> >> Yes, this is the case. >> >> I think there's some way in the numpy C-api (or perhaps the new py3 >> buffer >> api?) to specify a function pointer to be called when an array >> needs to be >> freed (if the array is constructed around some external chunk of >> memory). So >> if some numpy hacker who knows the C api better than I could >> comment on how >> one might do this, it could be arranged such that FreeImage_unload >> is called >> by numpy, so that we don't need to copy the array and then manually >> unload >> the image. >> >>> 5) you define the array strides to have the pitch (that is, the line >>> width) last -- this is somewhat against the C-convention of having >>> the >>> fast axis last. Obviously you did this, to get arrays with indices >>> i,j >>> having x,y order rather than y,x -- how strong do you feel about >>> this >>> ? I accepted at some point that the fast (x) coordinate would be >>> last >>> and thus always write coordinates as y,x. >>> (In 3D this becomes semi-naturally z,y,x rather then z,x,y - BTW >>> - ) >> >> See above... given that images are usually fortran-order on disk >> and that >> most external libraries expect them to be that way too, I think >> that this is >> the most reasonable. It's just that PIL has been so broken for so >> long that >> people on Python are used to indexing images as i[y,x] instead of >> i[x,y]. >> But fortran-order arrays are just as natural as C-order, and >> i[x,y,z,t] is >> also as natural, if not more so, than i[t,z,y,x]. >> >> Again, the code is barely a few hundred lines, so feel free to >> modify to the >> conventions that are most natural for your application! But I think >> this is >> the best general-case approach. >> >>> Do you happen to have os-x binaries of libfreeimage.dylib ? >> >> I do -- 32-bit only though. (Pretty easy to build...) Do you want >> me to send >> the dylib? >> >> Attached is the latest version of my code, which has a few new bug >> fixes >> from the previous version. (Can read palletized grey-scale images, >> for >> example.) Just drop whatever .dylib, .dll, or .so FreeImage shared >> library >> you've got into the directory with image.py and the setup.py script >> will >> detect it and install it alongside the python code. Then just do >> "import >> FreeImage" and you're good to go. >> >> Zach >> > I really don't want to edit your python file - even though I > appreciate the fact that it is quite small. 
> I would rather write smaller wrapper functions to get the memory > order I need. > Another way would be to add some sort of mem_layout flag to support > the four possible permutations c<->fortran and bottom<->top at the > lowest level and prevent an extra copy. The good news is that this can all be done without any copying needed: i.T will switch from i[x,y] (fortran order) to i[y,x] (C order), just by reversing the shape and strides, without copying data. Likewise, i[::-1] will give a top-to-bottom reversed array also without any copying, by using a negative stride. Sure, you'll need to copy the data before handing it to something that expects a contiguous array, but most numpy operations will work just fine. Though, honestly, compared to the latency of reading images from disk, probably an extra copy wouldn't even be perceptible in terms of performance. > I had never heard that image data (file formats, OpenGL, etc.) was not > ordered in y,x (C-order, i.e. x being fast) memory layout - I though > the top vs. bottom origin was the only "uncertainty" - well, there > maybe more FORTRAN around than I thought ... It's not to do so much with FORTRAN as the fact that images are typically blitted in "raster order" to the screen(buffer): left-to- right, top-to-bottom. Probably doesn't matter much anymore, but maybe that's what was fastest in the early days? Anyway, yeah, most image formats store things in scanlines, unless there's chunked compression, or interleaving, etc. Zach PS. consider the code I sent to be in the public domain. Do with it what you will. > I will take a look at your update tomorrow. > - Sebastian > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From gokhansever at gmail.com Tue Oct 19 15:20:46 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Tue, 19 Oct 2010 14:20:46 -0500 Subject: [SciPy-User] Curve fitting questions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 1:04 PM, wrote: > I think you could take logs and you would have a linear function in > param=log(x), and you could use linalg to solve for param, and then > transform back exp(param). ?Or this would give you a starting value if > you want the non-linear optimization. Using 3 and 5 data-points the curve_fit usually does a good job, even without the initial estimates provided. When it's necessary we usually constrain the initial parameters with max CCN concentration for C (param[0]) and a typical k (param[1]) values. This even works with 2 data-points: I[34]: ccn_ss1 = [0.27, 0.34] I[35]: ccn_conc1 = np.array([383.51237409766452, 424.82669523141652]) I[36]: tfit2, pcov2 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=(424, 0.5), ftol=1) provides me reasonable estimations. However, having another data-point would surely improve the quality of the fit and estimations. >> ccn_ss1 = 0.27 >> ccn_conc1 = 383.51237409766452 >> # One data point estimation fails with IndexError: index out of range for array >> tfit3, pcov3 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) > > If you have one parameter to estimate and only one observations, then > you should be able to solve it exactly with one of the solvers/ > rootfinders in scipy. optimize. > > Josef I want to estimate two parameters using one observation (which is a data-pair for my case --one for ccn_ss1 and one for ccn_conc1.) Probably, in this current version fsolve can't do give me any roots. 
def my_ck(x, a, b): return a*x**b fsolve(my_ck, x0=tfit1, args=(ccn_ss1, ccn_conc1), xtol=1) rather gives a couple of overflow warnings: Warning: overflow encountered in power In one data-pair situation my function looks like: a*x**b = 383.5 Now there are two unknowns providing the x as ccn_ss1 as a*0.27**b = 383.5. I should make one more assumption otherwise it is still unsolvable. Probably making an assumption for a, then I can hand solve this easily. OK, with a = 350 assumption in 350*0.27**b == 383.5, here solving for b results ~ -0.065 With some modifications on the original fitfunc: def my_ck(x): return 350*0.27**x - 383 fsolve nicely estimates what I want. fsolve(my_ck, x0=0.5) -0.068815047568104443 -- G?khan From josef.pktd at gmail.com Tue Oct 19 16:17:43 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Oct 2010 16:17:43 -0400 Subject: [SciPy-User] Curve fitting questions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 3:20 PM, G?khan Sever wrote: > On Tue, Oct 19, 2010 at 1:04 PM, ? wrote: >> I think you could take logs and you would have a linear function in >> param=log(x), and you could use linalg to solve for param, and then >> transform back exp(param). ?Or this would give you a starting value if >> you want the non-linear optimization. > > Using 3 and 5 data-points the curve_fit usually does a good job, even without > the initial estimates provided. When it's necessary we usually constrain the > initial parameters with max CCN concentration for C (param[0]) and a typical > k (param[1]) values. > > This even works with 2 data-points: > > I[34]: ccn_ss1 = [0.27, 0.34] > > I[35]: ccn_conc1 = np.array([383.51237409766452, 424.82669523141652]) > > I[36]: tfit2, pcov2 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=(424, > 0.5), ftol=1) > > provides me reasonable estimations. However, having another data-point > would surely > improve the quality of the fit and estimations. > >>> ccn_ss1 = 0.27 >>> ccn_conc1 = 383.51237409766452 >>> # One data point estimation fails with IndexError: index out of range for array >>> tfit3, pcov3 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) >> >> If you have one parameter to estimate and only one observations, then >> you should be able to solve it exactly with one of the solvers/ >> rootfinders in scipy. optimize. >> >> Josef > > I want to estimate two parameters using one observation (which is a > data-pair for my case --one for ccn_ss1 and one for ccn_conc1.) > Probably, in this current version fsolve can't do give me any roots. I remembered curve_fit wrongly, I didn't remember it switched data and parameters in the argument list compare to leastsq. I need to reread your example (later today). Taking logs and using linalg is still more efficient (unless you insist on an additive error term). Josef > > def my_ck(x, a, b): > ? ?return a*x**b > > fsolve(my_ck, x0=tfit1, args=(ccn_ss1, ccn_conc1), xtol=1) > > rather gives a couple of overflow warnings: > Warning: overflow encountered in power > > In one data-pair situation my function looks like: > > a*x**b = 383.5 > > Now there are two unknowns providing the x as ccn_ss1 as a*0.27**b = > 383.5. I should make one more assumption otherwise it is still > unsolvable. Probably making an assumption for a, then I can hand solve > this easily. OK, with a = 350 assumption in 350*0.27**b == 383.5, here > solving for b results ~ -0.065 > > With some modifications on the original fitfunc: > > def my_ck(x): > ? 
?return 350*0.27**x - 383 > > fsolve nicely estimates what I want. > > fsolve(my_ck, x0=0.5) > -0.068815047568104443 > > > > > > -- > G?khan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Oct 19 22:00:02 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 19 Oct 2010 22:00:02 -0400 Subject: [SciPy-User] Curve fitting questions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 4:17 PM, wrote: > On Tue, Oct 19, 2010 at 3:20 PM, G?khan Sever wrote: >> On Tue, Oct 19, 2010 at 1:04 PM, ? wrote: >>> I think you could take logs and you would have a linear function in >>> param=log(x), and you could use linalg to solve for param, and then >>> transform back exp(param). ?Or this would give you a starting value if >>> you want the non-linear optimization. >> >> Using 3 and 5 data-points the curve_fit usually does a good job, even without >> the initial estimates provided. When it's necessary we usually constrain the >> initial parameters with max CCN concentration for C (param[0]) and a typical >> k (param[1]) values. >> >> This even works with 2 data-points: >> >> I[34]: ccn_ss1 = [0.27, 0.34] >> >> I[35]: ccn_conc1 = np.array([383.51237409766452, 424.82669523141652]) >> >> I[36]: tfit2, pcov2 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=(424, >> 0.5), ftol=1) >> >> provides me reasonable estimations. However, having another data-point >> would surely >> improve the quality of the fit and estimations. >> >>>> ccn_ss1 = 0.27 >>>> ccn_conc1 = 383.51237409766452 >>>> # One data point estimation fails with IndexError: index out of range for array >>>> tfit3, pcov3 = curve_fit(my_ck, ccn_ss1, ccn_conc1, p0=tfit1, ftol=1) >>> >>> If you have one parameter to estimate and only one observations, then >>> you should be able to solve it exactly with one of the solvers/ >>> rootfinders in scipy. optimize. >>> >>> Josef >> >> I want to estimate two parameters using one observation (which is a >> data-pair for my case --one for ccn_ss1 and one for ccn_conc1.) >> Probably, in this current version fsolve can't do give me any roots. > > I remembered curve_fit wrongly, I didn't remember it switched data and > parameters in the argument list compare to leastsq. > > I need to reread your example (later today). > > Taking logs and using linalg is still more efficient (unless you > insist on an additive error term). 
>>> ccn_ss1 = [0.27, 0.34, 0.57] >>> ccn_conc1 = np.array([383.51237409766452, 424.82669523141652, 511.48197391304342]) >>> def my_ck(x, a, b): return a*x**b >>> tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) >>> tfit1 array([ 6.33851519e+02, 3.78527717e-01]) >>> stats.linregress(np.log(ccn_ss1), np.log(ccn_conc1)) (0.38096158507713485, 6.4541006630438478, 0.99864456413652103, 0.033150010788760682, 0.019855348412039904) >>> np.exp(6.4541006630438478) 635.30211858377766 >>> stats.linregress(np.log(ccn_ss1[:-1]), np.log(ccn_conc1[:-1])) (0.44381311635631338, 6.5304711876039025, 1.0, nan, nan) >>> np.exp(6.5304711876039025) 685.72123873003966 >>> stats.linregress(np.log(ccn_ss1[:-2]), np.log(ccn_conc1[:-2])) (nan, nan, 0.0, nan, nan) strangely leastsq/curve_fit has a better fit than linregress for exact solution (2 observations) >>> tfit1, pcov1 = curve_fit(my_ck, ccn_ss1[:-1], ccn_conc1[:-1], p0=(1,1)) >>> my_ck(ccn_ss1[:-1], *tfit1) array([ 383.5123741 , 424.82669523]) >>> my_ck(ccn_ss1[:-1], *tfit1) - ccn_conc1[:-1] array([ 0., 0.]) >>> my_ck(np.asarray(ccn_ss1[:-1]), np.exp(6.5304711876039025), 0.44381311635631338) - ccn_conc1[:-1] array([ 1.70530257e-13, 3.41060513e-13]) If you have reasonably good information about the function or the range of starting values, then this always works better and faster for non-linear optimization. An interesting alternative that James was using for distribution estimation, is to use a global optimizer (differential evolution) in combination with a non-linear optimizer. You could also just draw several random starting values. Since your optimization problem is very small, it would still be fast. Josef > > Josef > >> >> def my_ck(x, a, b): >> ? ?return a*x**b >> >> fsolve(my_ck, x0=tfit1, args=(ccn_ss1, ccn_conc1), xtol=1) >> >> rather gives a couple of overflow warnings: >> Warning: overflow encountered in power >> >> In one data-pair situation my function looks like: >> >> a*x**b = 383.5 >> >> Now there are two unknowns providing the x as ccn_ss1 as a*0.27**b = >> 383.5. I should make one more assumption otherwise it is still >> unsolvable. Probably making an assumption for a, then I can hand solve >> this easily. OK, with a = 350 assumption in 350*0.27**b == 383.5, here >> solving for b results ~ -0.065 >> >> With some modifications on the original fitfunc: >> >> def my_ck(x): >> ? ?return 350*0.27**x - 383 >> >> fsolve nicely estimates what I want. >> >> fsolve(my_ck, x0=0.5) >> -0.068815047568104443 >> >> >> >> >> >> -- >> G?khan >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From waleriantunes at gmail.com Wed Oct 20 12:45:13 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Wed, 20 Oct 2010 14:45:13 -0200 Subject: [SciPy-User] Error Message-ID: Hi all, I have this code: http://pastebin.com/9qi75Bbv This equation has integral calculus. Is an extended equation so i decided to split it into pieces, first part called "a" and the second "b". Well, i have three conditions: if omega_k == 0 (cauculus) if omega_k > 0 (cauculus) if omega_k < 0 (cauculus) But returns me this error: Request Method: POST Request URL: http://localhost:8000/index/ Exception Type: NameError Exception Value: global name 'omega_k' is not defined Exception Location: \supernovas_Ia\..\supernovas_Ia\supernovas \views.py in index, line 17 What am I doing wrong? And the way I'm using the condition correct? 
And then to return the result in the graph is correct? Thanks, Waleria -------------- next part -------------- An HTML attachment was scrubbed... URL: From wardefar at iro.umontreal.ca Wed Oct 20 12:59:48 2010 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Wed, 20 Oct 2010 12:59:48 -0400 Subject: [SciPy-User] Error In-Reply-To: References: Message-ID: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> On 2010-10-20, at 12:45 PM, Wal?ria Antunes David wrote: > What am I doing wrong? And the way I'm using the condition correct? And then to return the result in the graph is correct? You haven't actually defined omega_k anywhere that I can see. Also, you should probably test your expressions and such at the Python interpreter before immediately trying to embed them into a web application. David From faltet at pytables.org Wed Oct 20 13:03:19 2010 From: faltet at pytables.org (Francesc Alted) Date: Wed, 20 Oct 2010 19:03:19 +0200 Subject: [SciPy-User] ANN: NUmexpr 1.4.1 released Message-ID: <201010201903.19193.faltet@pytables.org> ========================== Announcing Numexpr 1.4.1 ========================== Numexpr is a fast numerical expression evaluator for NumPy. With it, expressions that operate on arrays (like "3*a+4*b") are accelerated and use less memory than doing the same calculation in Python. What's new ========== This is a maintenance release. On it, several improvements have been done in order to prevent deadlocks in new threaded code (fixes #33). Also the GIL is released now during computations, which should be interesting for embedding numexpr in threaded Python apps. In case you want to know more in detail what has changed in this version, see: http://code.google.com/p/numexpr/wiki/ReleaseNotes or have a look at RELEASE_NOTES.txt in the tarball. Where I can find Numexpr? ========================= The project is hosted at Google code in: http://code.google.com/p/numexpr/ And you can get the packages from PyPI as well: http://pypi.python.org/pypi Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy! -- Francesc Alted From waleriantunes at gmail.com Wed Oct 20 13:04:03 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Wed, 20 Oct 2010 15:04:03 -0200 Subject: [SciPy-User] Error In-Reply-To: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: I have a forms.py: http://pastebin.com/ihhM9GVf This value comes from a form. Not right this correct? Thanks, Waleria On Wed, Oct 20, 2010 at 2:59 PM, David Warde-Farley < wardefar at iro.umontreal.ca> wrote: > > On 2010-10-20, at 12:45 PM, Wal?ria Antunes David wrote: > > > What am I doing wrong? And the way I'm using the condition correct? And > then to return the result in the graph is correct? > > You haven't actually defined omega_k anywhere that I can see. > > Also, you should probably test your expressions and such at the Python > interpreter before immediately trying to embed them into a web application. > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gokhansever at gmail.com Wed Oct 20 14:13:41 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 20 Oct 2010 13:13:41 -0500 Subject: [SciPy-User] Curve fitting questions In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 9:00 PM, wrote: >>>> ccn_ss1 = [0.27, 0.34, 0.57] >>>> ccn_conc1 = np.array([383.51237409766452, 424.82669523141652, 511.48197391304342]) > >>>> def my_ck(x, a, b): > ? return a*x**b > >>>> tfit1, pcov1 = curve_fit(my_ck, ccn_ss1, ccn_conc1) >>>> tfit1 > array([ ?6.33851519e+02, ? 3.78527717e-01]) > > >>>> stats.linregress(np.log(ccn_ss1), np.log(ccn_conc1)) > (0.38096158507713485, 6.4541006630438478, 0.99864456413652103, > 0.033150010788760682, 0.019855348412039904) >>>> np.exp(6.4541006630438478) > 635.30211858377766 >>>> stats.linregress(np.log(ccn_ss1[:-1]), np.log(ccn_conc1[:-1])) > (0.44381311635631338, 6.5304711876039025, 1.0, nan, nan) >>>> np.exp(6.5304711876039025) > 685.72123873003966 >>>> stats.linregress(np.log(ccn_ss1[:-2]), np.log(ccn_conc1[:-2])) > (nan, nan, 0.0, nan, nan) Now that it makes much more sense when you demonstrate your words with code :) I have never approach this question with linregress approach. Thanks for the demo. I guess there is no way to make such regression using only one data-pair. In my case I have proxy approaches that require to make further assumptions using data from different sections of my analysis. > > strangely leastsq/curve_fit has a better fit than linregress for exact > solution (2 observations) > >>>> tfit1, pcov1 = curve_fit(my_ck, ccn_ss1[:-1], ccn_conc1[:-1], p0=(1,1)) >>>> my_ck(ccn_ss1[:-1], *tfit1) > array([ 383.5123741 , ?424.82669523]) >>>> my_ck(ccn_ss1[:-1], *tfit1) - ccn_conc1[:-1] > array([ 0., ?0.]) >>>> my_ck(np.asarray(ccn_ss1[:-1]), np.exp(6.5304711876039025), 0.44381311635631338) - ccn_conc1[:-1] > array([ ?1.70530257e-13, ? 3.41060513e-13]) > The error is very negligible. So far I have good results from curve_fit and leastsq functions. I use linregress for mostly obtaining linear fit parameters and r^2 values. > If you have reasonably good information about the function or the > range of starting values, then this always works better and faster for > non-linear optimization. An interesting alternative that James was > using for distribution estimation, is to use a global optimizer > (differential evolution) in combination with a non-linear optimizer. > You could also just draw several random starting values. Since your > optimization problem is very small, it would still be fast. Could you give an example for this approach? Thanks. -- G?khan From waleriantunes at gmail.com Thu Oct 21 06:22:35 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Thu, 21 Oct 2010 08:22:35 -0200 Subject: [SciPy-User] TypeError Message-ID: Hi all, I have this code: http://pastebin.com/nDrisnhZ and my forms.py: http://pastebin.com/wHEErcQx But is ocurring this error: TypeError at /index/ unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Request Method: POST Request URL: http://localhost:8000/index/ Exception Type: TypeError Exception Value: unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Exception Location: C:\supernovas_Ia\..\supernovas_Ia\supernovas\views.py in index, line 19 How do i fix this? Thanks, Wal?ria -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Thu Oct 21 09:16:48 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 Oct 2010 09:16:48 -0400 Subject: [SciPy-User] TypeError In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 6:22 AM, Wal?ria Antunes David < waleriantunes at gmail.com> wrote: > Hi all, > > I have this code: http://pastebin.com/nDrisnhZ > and my forms.py: http://pastebin.com/wHEErcQx > > But is ocurring this error: > > TypeError at /index/ > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' > > Request Method: POST Request URL: http://localhost:8000/index/ Exception > Type: TypeError Exception Value: > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' > > Exception Location: C:\supernovas_Ia\..\supernovas_Ia\supernovas\views.py > in index, line 19 > as your exception says fractional powers are not supported for Decimal type, you need to convert/cast it >>> Decimal(532, 2)**(0.5) Traceback (most recent call last): File "", line 1, in Decimal(532, 2)**(0.5) TypeError: unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' >>> float(Decimal(532, 2))**(0.5) 23.065125189341593 As someone mentioned before, trying it in an interpreter shell is easier. Josef > > How do i fix this? > > Thanks, > Wal?ria > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From waleriantunes at gmail.com Thu Oct 21 09:23:56 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Thu, 21 Oct 2010 11:23:56 -0200 Subject: [SciPy-User] TypeError In-Reply-To: References: Message-ID: Thanks On Thu, Oct 21, 2010 at 11:16 AM, wrote: > > > On Thu, Oct 21, 2010 at 6:22 AM, Wal?ria Antunes David < > waleriantunes at gmail.com> wrote: > >> Hi all, >> >> I have this code: http://pastebin.com/nDrisnhZ >> and my forms.py: http://pastebin.com/wHEErcQx >> >> But is ocurring this error: >> >> TypeError at /index/ >> >> unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' >> >> Request Method: POST Request URL: http://localhost:8000/index/ Exception >> Type: TypeError Exception Value: >> >> unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' >> >> Exception Location: C:\supernovas_Ia\..\supernovas_Ia\supernovas\views.py >> in index, line 19 >> > > as your exception says fractional powers are not supported for Decimal > type, you need to convert/cast it > > >>> Decimal(532, 2)**(0.5) > Traceback (most recent call last): > File "", line 1, in > Decimal(532, 2)**(0.5) > TypeError: unsupported operand type(s) for ** or pow(): 'Decimal' and > 'float' > >>> float(Decimal(532, 2))**(0.5) > 23.065125189341593 > > As someone mentioned before, trying it in an interpreter shell is easier. > > Josef > > >> >> How do i fix this? >> >> Thanks, >> Wal?ria >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Thu Oct 21 11:37:03 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 Oct 2010 17:37:03 +0200 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: <7E7A9A20-D1D1-45E9-A2C0-2F40E58A357B@yale.edu> References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> <7E7A9A20-D1D1-45E9-A2C0-2F40E58A357B@yale.edu> Message-ID: Hi guys, [...] On Tue, Oct 19, 2010 at 8:16 PM, Zachary Pincus wrote: >> I had never heard that image data (file formats, OpenGL, etc.) was not >> ordered in y,x (C-order, i.e. x being fast) memory layout - I though >> the top vs. bottom origin was the only "uncertainty" - well, there >> maybe more FORTRAN around than I thought ... Whatever fixes you make, please remember to contribute them back to scikits.image! The plugin is being maintained there as well. Regards St?fan From zachary.pincus at yale.edu Thu Oct 21 11:49:21 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 21 Oct 2010 11:49:21 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> <7E7A9A20-D1D1-45E9-A2C0-2F40E58A357B@yale.edu> Message-ID: On Oct 21, 2010, at 11:37 AM, St?fan van der Walt wrote: > Hi guys, > > [...] > > On Tue, Oct 19, 2010 at 8:16 PM, Zachary Pincus > wrote: >>> I had never heard that image data (file formats, OpenGL, etc.) was >>> not >>> ordered in y,x (C-order, i.e. x being fast) memory layout - I though >>> the top vs. bottom origin was the only "uncertainty" - well, there >>> maybe more FORTRAN around than I thought ... > > Whatever fixes you make, please remember to contribute them back to > scikits.image! The plugin is being maintained there as well. Oh gosh, I didn't realize that -- great! (Did you ever diagnose that segfault?) Anyway, the only serious change I've made has to do with properly saving 2D uint8 images, which need to have a palette defined for certain formats (PNG maybe? I don't recall...). Anyway, the scikits.image version has diverged in structure somewhat from the one I use internally in my code, but here's the core of the change: At the end of _array_to_bitmap(array), right before the return statement, I have the following stanza to copy in a greyscale palette: if len(shape) == 2 and dtype.type == numpy.uint8: palette = _FI.FreeImage_GetPalette(bitmap) if not palette: raise RuntimeError('Could not get palette of 8-bit greyscale image') ctypes.memmove(palette, _GREY_PALETTE.ctypes.data, 1024) where _GREY_PALETTE is defined globally as: # 4-byte quads of 0,v,v,v from 0,0,0,0 to 0,255,255,255 _GREY_PALETTE = numpy.arange(0, 0x01000000, 0x00010101, dtype=numpy.uint32) From josef.pktd at gmail.com Thu Oct 21 12:03:39 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 Oct 2010 12:03:39 -0400 Subject: [SciPy-User] struggling and fighting with linalg Message-ID: There are a lot of recommendations in the mailing list and recipes in various packages how to use linalg more efficiently. However, yesterday I wasted several hours how to do the multivariate normal distribution with cholesky factor and solve. The other problem we have in some of the code is that svd or similar are calculated several times when individual numpy/scipy functions are called (e.g. pinv, logdet). 
Dag has oomatrix, and there are some packages on the matlab fileexchange (I didn't look at those yet). As a collection of recipes and reminder for myself, which is which and what needs to be transposed in s v d, I started to write some linalg array classes (in attachment). It's essentially just a holder for the decomposition and operations defined with it, using arrays not matrices and without any operator overloading. Still incomplete and only partially tested, and I haven't yet gone back to doing some statistics with it. Initially, I was looking for a cholesky decomposition of (block-)Toeplitz matrices. But those and sparse cholesky seem to be unavailable in the BSD-compatible landscape, but are for GPL. If anyone can provide code snippets or (short) references for missing parts for the linalg, it would reduce my trial-and-error time. I haven't worked my way through all the suggestions on the mailing list yet. As for my initial intention, I would also like to get a multivariate normal class with batteries included, so I don't have to figure it out and test each time I need a piece. Josef -------------- next part -------------- '''Recipes for more efficient work with linalg using classes intended for use for multivariate normal and linear regression calculations x is the data (nobs, nvars) m is the moment matrix (x'x) or a covariance matrix Sigma examples: x'sigma^{-1}x z = Px where P=Sigma^{-1/2} or P=Sigma^{1/2} Initially assume positive definite, then add spectral cutoff and regularization of moment matrix, and extend to PCA Author: josef-pktd Created on 2010-10-20 ''' import numpy as np from scipy import linalg #this has been copied from nitime a long time ago #TODO: ceck whether class has changed in nitime class OneTimeProperty(object): """A descriptor to make special properties that become normal attributes. This is meant to be used mostly by the auto_attr decorator in this module. Author: Fernando Perez, copied from nitime """ def __init__(self,func): """Create a OneTimeProperty instance. Parameters ---------- func : method The method that will be called the first time to compute a value. Afterwards, the method's name will be a standard attribute holding the value of this computation. """ self.getter = func self.name = func.func_name def __get__(self,obj,type=None): """This will be called on attribute access on the class or instance. """ if obj is None: # Being called on the class, return the original function. This way, # introspection works on the class. 
#return func print 'class access' return self.getter val = self.getter(obj) #print "** auto_attr - loading '%s'" % self.name # dbg setattr(obj, self.name, val) return val class PlainMatrixArray(object): '''Class that defines linalg operation on an array simplest version as benchmark linear algebra recipes for multivariate normal and linear regression calculations ''' def __init__(self, data=None, sym=None): if not data is None: if sym is None: self.x = np.asarray(data) self.m = np.dot(self.x.T, self.x) else: raise ValueError('data and sym cannot be both given') elif not sym is None: self.m = np.asarray(sym) self.x = np.eye(*self.m.shape) #default else: raise ValueError('either data or sym need to be given') @OneTimeProperty def minv(self): return np.linalg.inv(self.m) @OneTimeProperty def m_y(self, y): return np.dot(self.m, y) def minv_y(self, y): return np.dot(self.minv, y) @OneTimeProperty def mpinv(self): return linalg.pinv(self.m) @OneTimeProperty def xpinv(self): return linalg.pinv(self.x) def yt_m_y(self, y): return np.dot(y.T, np.dot(self.m, y)) def yt_minv_y(self, y): return np.dot(y.T, np.dot(self.minv, y)) #next two are redundant def y_m_yt(self, y): return np.dot(y, np.dot(self.m, y.T)) def y_minv_yt(self, y): return np.dot(y, np.dot(self.minv, y.T)) @OneTimeProperty def mdet(self): return linalg.det(self.m) @OneTimeProperty def mlogdet(self): return np.log(linalg.det(self.m)) @OneTimeProperty def meigh(self): evals, evecs = linalg.eigh(self.m) sortind = np.argsort(evals)[::-1] return evals[sortind], evecs[:,sortind] @OneTimeProperty def mhalf(self): evals, evecs = self.meigh return np.dot(np.diag(evals**0.5), evecs.T) #return np.dot(evecs, np.dot(np.diag(evals**0.5), evecs.T)) #return np.dot(evecs, 1./np.sqrt(evals) * evecs.T)) @OneTimeProperty def minvhalf(self): evals, evecs = self.meigh return np.dot(evecs, 1./np.sqrt(evals) * evecs.T) class SvdArray(PlainMatrixArray): '''Class that defines linalg operation on an array svd version, where svd is taken on original data array, if or when it matters no spectral cutoff in first version ''' def __init__(self, data=None, sym=None): super(SvdArray, self).__init__(data=data, sym=sym) u, s, v = np.linalg.svd(self.x, full_matrices=1) self.u, self.s, self.v = u, s, v self.sdiag = linalg.diagsvd(s, *x.shape) self.sinvdiag = linalg.diagsvd(1./s, *x.shape) def _sdiagpow(self, p): return linalg.diagsvd(np.power(self.s, p), *x.shape) @OneTimeProperty def minv(self): sinvv = np.dot(self.sinvdiag, self.v) return np.dot(sinvv.T, sinvv) @OneTimeProperty def meigh(self): evecs = self.v.T evals = self.s**2 return evals, evecs @OneTimeProperty def mdet(self): return self.meigh[0].prod() @OneTimeProperty def mlogdet(self): return np.log(self.meigh[0]).sum() @OneTimeProperty def mhalf(self): return np.dot(np.diag(self.s), self.v) @OneTimeProperty def xxthalf(self): return np.dot(self.u, self.sdiag) @OneTimeProperty def xxtinvhalf(self): return np.dot(self.u, self.sinvdiag) class CholArray(PlainMatrixArray): '''Class that defines linalg operation on an array cholesky version, where svd is taken on original data array, if or when it matters plan: use cholesky factor and cholesky solve nothing implemented yet ''' def __init__(self, data=None, sym=None): super(SvdArray, self).__init__(data=data, sym=sym) def testcompare(m1, m2): from numpy.testing import assert_almost_equal, assert_approx_equal decimal = 12 #inv assert_almost_equal(m1.minv, m2.minv, decimal=decimal) #matrix half and invhalf #fix sign in test, should this be standardized s1 = 
np.sign(m1.mhalf.sum(1))[:,None] s2 = np.sign(m2.mhalf.sum(1))[:,None] scorr = s1/s2 assert_almost_equal(m1.mhalf, m2.mhalf * scorr, decimal=decimal) assert_almost_equal(m1.minvhalf, m2.minvhalf, decimal=decimal) #eigenvalues, eigenvectors evals1, evecs1 = m1.meigh evals2, evecs2 = m2.meigh assert_almost_equal(evals1, evals2, decimal=decimal) #normalization can be different: evecs in columns s1 = np.sign(evecs1.sum(0)) s2 = np.sign(evecs2.sum(0)) scorr = s1/s2 assert_almost_equal(evecs1, evecs2 * scorr, decimal=decimal) #determinant assert_approx_equal(m1.mdet, m2.mdet, significant=13) assert_approx_equal(m1.mlogdet, m2.mlogdet, significant=13) ####### helper function for interactive work def tiny2zero(x, eps = 1e-15): '''replace abs values smaller than eps by zero, makes copy ''' mask = np.abs(x.copy()) < eps x[mask] = 0 return x def maxabs(x): return np.max(np.abs(x)) #if __name__ == '__main__': n = 5 y = np.arange(n) x = np.random.randn(100,n) autocov = 2*0.8**np.arange(n) +0.01 * np.random.randn(n) sigma = linalg.toeplitz(autocov) mat = PlainMatrixArray(sym=sigma) print tiny2zero(mat.mhalf) mih = mat.minvhalf print tiny2zero(mih) #for nicer printing mat2 = PlainMatrixArray(data=x) print maxabs(mat2.yt_minv_y(np.dot(x.T, x)) - mat2.m) print tiny2zero(mat2.minv_y(mat2.m)) mat3 = SvdArray(data=x) print mat3.meigh[0] print mat2.meigh[0] testcompare(mat2, mat3) ''' m = np.dot(x.T, x) u,s,v = np.linalg.svd(x, full_matrices=1) Sig = linalg.diagsvd(s,*x.shape) >>> np.max(np.abs(np.dot(u, np.dot(Sig, v)) - x)) 3.1086244689504383e-015 >>> np.max(np.abs(np.dot(u.T, u) - np.eye(100))) 3.3306690738754696e-016 >>> np.max(np.abs(np.dot(v.T, v) - np.eye(5))) 6.6613381477509392e-016 >>> np.max(np.abs(np.dot(Sig.T, Sig) - np.diag(s**2))) 5.6843418860808015e-014 >>> evals,evecs = linalg.eigh(np.dot(x.T, x)) >>> evals[::-1] array([ 123.36404464, 112.17036442, 102.04198468, 76.60832278, 74.70484487]) >>> s**2 array([ 123.36404464, 112.17036442, 102.04198468, 76.60832278, 74.70484487]) >>> np.max(np.abs(np.dot(v.T, np.dot(np.diag(s**2), v)) - m)) 1.1368683772161603e-013 >>> us = np.dot(u, Sig) >>> np.max(np.abs(np.dot(us, us.T) - np.dot(x, x.T))) 1.0658141036401503e-014 >>> sv = np.dot(Sig, v) >>> np.max(np.abs(np.dot(sv.T, sv) - np.dot(x.T, x))) 1.1368683772161603e-013 ''' From warren.weckesser at enthought.com Thu Oct 21 12:46:00 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 21 Oct 2010 11:46:00 -0500 Subject: [SciPy-User] struggling and fighting with linalg In-Reply-To: References: Message-ID: Hi Josef, On Thu, Oct 21, 2010 at 11:03 AM, wrote: > There are a lot of recommendations in the mailing list and recipes in > various packages how to use linalg more efficiently. > However, yesterday I wasted several hours how to do the multivariate > normal distribution with cholesky factor and solve. > > What do you mean by "*do* the multivariate normal distribution" (emphasis mine)? numpy.random has multivariate_normal for generating samples; I wrote a little cookbook example ( http://www.scipy.org/Cookbook/CorrelatedRandomSamples) before I realized multivariate_normal existed. But I think you want more than just generating samples, right? Warren -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Thu Oct 21 13:02:14 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 Oct 2010 13:02:14 -0400 Subject: [SciPy-User] struggling and fighting with linalg In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 12:46 PM, Warren Weckesser wrote: > Hi Josef, > > On Thu, Oct 21, 2010 at 11:03 AM, wrote: >> >> There are a lot of recommendations in the mailing list and recipes in >> various packages how to use linalg more efficiently. >> However, yesterday I wasted several hours how to do the multivariate >> normal distribution with cholesky factor and solve. >> > > What do you mean by "*do* the multivariate normal distribution" (emphasis > mine)?? numpy.random has multivariate_normal for generating samples; I wrote > a little cookbook example > (http://www.scipy.org/Cookbook/CorrelatedRandomSamples) before I realized > multivariate_normal existed. > > But I think you want more than just generating samples, right? yes, main things I need for statsmodels are loglikelihood and linear transformation, (and separately in other application non-linear transformation), plus linear transformation that comes from the decomposition of the inverse covariance matrix (for whitening or standardization). The last part is where I got stuck. The extras will be conditional, marginal distributions, cdf, confidence regions, ... Part of these just needs to be collected (and tested). Josef > > Warren > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Thu Oct 21 16:16:42 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 Oct 2010 16:16:42 -0400 Subject: [SciPy-User] struggling and fighting with linalg In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 1:02 PM, wrote: > On Thu, Oct 21, 2010 at 12:46 PM, Warren Weckesser > wrote: >> Hi Josef, >> >> On Thu, Oct 21, 2010 at 11:03 AM, wrote: >>> >>> There are a lot of recommendations in the mailing list and recipes in >>> various packages how to use linalg more efficiently. >>> However, yesterday I wasted several hours how to do the multivariate >>> normal distribution with cholesky factor and solve. >>> >> >> What do you mean by "*do* the multivariate normal distribution" (emphasis >> mine)?? numpy.random has multivariate_normal for generating samples; I wrote >> a little cookbook example >> (http://www.scipy.org/Cookbook/CorrelatedRandomSamples) before I realized >> multivariate_normal existed. >> >> But I think you want more than just generating samples, right? > > yes, main things I need for statsmodels are loglikelihood and linear > transformation, (and separately in other application non-linear > transformation), plus linear transformation that comes from the > decomposition of the inverse covariance matrix (for whitening or > standardization). The last part is where I got stuck. > > The extras will be conditional, marginal distributions, cdf, > confidence regions, ... Part of these just needs to be collected (and > tested). more cholesky woes: upper, lower, transpose or flip ? I still haven't figured it out and the test don't work yet. very simple example: autoregressive process order 1, timeseries has increasing index with time. But cholesky seems to run backwards in time. 
>>> >>> nobs = 5 >>> autocov = 0.8**np.arange(nobs) >>> sigma = linalg.toeplitz(autocov) >>> sigmainv = linalg.inv(sigma) >>> >>> c = linalg.cholesky(sigma, lower=True) >>> ci = linalg.cholesky(sigmainv, lower=True) >>> >>> print sigma [[ 1. 0.8 0.64 0.512 0.4096] [ 0.8 1. 0.8 0.64 0.512 ] [ 0.64 0.8 1. 0.8 0.64 ] [ 0.512 0.64 0.8 1. 0.8 ] [ 0.4096 0.512 0.64 0.8 1. ]] >>> print tiny2zero(ci/ci.max()) [[ 1. 0. 0. 0. 0. ] [-0.8 1. 0. 0. 0. ] [ 0. -0.8 1. 0. 0. ] [ 0. 0. -0.8 1. 0. ] [ 0. 0. 0. -0.8 0.6]] >>> >>> "this is the text book transformation for efficient GLS and MLE" >>> print 'coefficient for first observation', np.sqrt(1-autocov[1]**2) coefficient for first observation 0.6 >>> ci2 = ci[::-1,::-1].T >>> print tiny2zero(ci2/ci2.max()) [[ 0.6 0. 0. 0. 0. ] [-0.8 1. 0. 0. 0. ] [ 0. -0.8 1. 0. 0. ] [ 0. 0. -0.8 1. 0. ] [ 0. 0. 0. -0.8 1. ]] >>> >>> print np.dot(ci/ci.max(), np.ones(nobs)) #wrong [ 1. 0.2 0.2 0.2 -0.2] >>> >>> print np.dot(ci2/ci2.max(), np.ones(nobs)) #correct [ 0.6 0.2 0.2 0.2 0.2] Are there any rules to understand which direction cholesky goes? Josef > > Josef > >> >> Warren >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > From stefan at sun.ac.za Thu Oct 21 17:23:51 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 Oct 2010 23:23:51 +0200 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: References: <201005141650.17597.lpc@cmu.edu> <1362D71A-5799-436D-BB21-01CE45ECFBD6@yale.edu> <29E48CA5-5676-4AAF-AB3F-43E4DC079D48@yale.edu> <7E7A9A20-D1D1-45E9-A2C0-2F40E58A357B@yale.edu> Message-ID: On Thu, Oct 21, 2010 at 5:49 PM, Zachary Pincus > At the end of _array_to_bitmap(array), right before the return > statement, I have the following stanza to copy in a greyscale palette: > > ? ? if len(shape) == 2 and dtype.type == numpy.uint8: > ? ? ? palette = _FI.FreeImage_GetPalette(bitmap) > ? ? ? if not palette: > ? ? ? ? raise RuntimeError('Could not get palette of 8-bit greyscale > image') > ? ? ? ctypes.memmove(palette, _GREY_PALETTE.ctypes.data, 1024) > > where _GREY_PALETTE is defined globally as: > # 4-byte quads of 0,v,v,v from 0,0,0,0 to 0,255,255,255 > _GREY_PALETTE = numpy.arange(0, 0x01000000, 0x00010101, > dtype=numpy.uint32) I must admit, I haven't looked at this code recently. Would you be so kind as to patch the scikits.image version? I'd appreciate the help! Thanks St?fan From waleriantunes at gmail.com Fri Oct 22 07:54:31 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Fri, 22 Oct 2010 09:54:31 -0200 Subject: [SciPy-User] Error In-Reply-To: References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: The problem was solved. Now occurs this error: TypeError at /index/ unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Request Method: POST Request URL: http://localhost:8000/index/ Exception Type: TypeError Exception Value: unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Exception Location: views.py in index, line 24 My code: http://pastebin.com/mcMeFijV Can you help me? Thanks, On Fri, Oct 22, 2010 at 1:30 AM, David Warde-Farley < wardefar at iro.umontreal.ca> wrote: > On 2010-10-20, at 1:04 PM, Wal?ria Antunes David wrote: > > > I have a forms.py: http://pastebin.com/ihhM9GVf > > > > This value comes from a form. Not right this correct? 
> > > Then it looks like it would be a member of the form object in your original > paste, i.e. form.omega_k. > > David -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Oct 22 08:33:10 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 Oct 2010 08:33:10 -0400 Subject: [SciPy-User] Error In-Reply-To: References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: On Fri, Oct 22, 2010 at 7:54 AM, Wal?ria Antunes David < waleriantunes at gmail.com> wrote: > The problem was solved. > Now occurs this error: > > TypeError at /index/ > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' > > Request Method: POST Request URL: http://localhost:8000/index/ Exception > Type: TypeError Exception Value: > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' > > Exception Location: views.py in index, line 24 > > My code: http://pastebin.com/mcMeFijV > > Can you help me? > What's the difference to the last question you asked "Type Error" Josef > > Thanks, > > > > On Fri, Oct 22, 2010 at 1:30 AM, David Warde-Farley < > wardefar at iro.umontreal.ca> wrote: > >> On 2010-10-20, at 1:04 PM, Wal?ria Antunes David wrote: >> >> > I have a forms.py: http://pastebin.com/ihhM9GVf >> > >> > This value comes from a form. Not right this correct? >> >> >> Then it looks like it would be a member of the form object in your >> original paste, i.e. form.omega_k. >> >> David > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From waleriantunes at gmail.com Fri Oct 22 08:43:47 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Fri, 22 Oct 2010 10:43:47 -0200 Subject: [SciPy-User] Error In-Reply-To: References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: My last question was about this: global name 'omega_k' is not defined, but this is already solved. My error now is: TypeError at /index/ unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Request Method: POST Request URL: http://localhost:8000/index/ Exception Type: TypeError Exception Value: unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' Exception Location: views.py in index, line 24 Waleria On Fri, Oct 22, 2010 at 10:33 AM, wrote: > > > On Fri, Oct 22, 2010 at 7:54 AM, Wal?ria Antunes David < > waleriantunes at gmail.com> wrote: > >> The problem was solved. >> Now occurs this error: >> >> TypeError at /index/ >> >> unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' >> >> Request Method: POST Request URL: http://localhost:8000/index/ Exception >> Type: TypeError Exception Value: >> >> unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' >> >> Exception Location: views.py in index, line 24 >> >> My code: http://pastebin.com/mcMeFijV >> >> Can you help me? >> > > What's the difference to the last question you asked "Type Error" > > Josef > > >> >> Thanks, >> >> >> >> On Fri, Oct 22, 2010 at 1:30 AM, David Warde-Farley < >> wardefar at iro.umontreal.ca> wrote: >> >>> On 2010-10-20, at 1:04 PM, Wal?ria Antunes David wrote: >>> >>> > I have a forms.py: http://pastebin.com/ihhM9GVf >>> > >>> > This value comes from a form. Not right this correct? 
>>> >>> >>> Then it looks like it would be a member of the form object in your >>> original paste, i.e. form.omega_k. >>> >>> David >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Oct 22 09:11:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Oct 2010 08:11:17 -0500 Subject: [SciPy-User] Error In-Reply-To: References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: On Fri, Oct 22, 2010 at 06:54, Wal?ria Antunes David wrote: > > The problem was solved. > Now occurs this error: > > TypeError at /index/ > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' When you use DecimalFields, the value you get will be a Decimal object, not a float. You will need to use other Decimal values (e.g. Decimal('-0.5')) in expressions with them. http://docs.python.org/library/decimal In any case, this has nothing to do with numpy or scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From waleriantunes at gmail.com Fri Oct 22 09:13:04 2010 From: waleriantunes at gmail.com (=?ISO-8859-1?Q?Wal=E9ria_Antunes_David?=) Date: Fri, 22 Oct 2010 11:13:04 -0200 Subject: [SciPy-User] Error In-Reply-To: References: <8CDB5759-1D41-4775-8E70-1ACD0BAEF82A@iro.umontreal.ca> Message-ID: Thank you very much!!! On Fri, Oct 22, 2010 at 11:11 AM, Robert Kern wrote: > On Fri, Oct 22, 2010 at 06:54, Wal?ria Antunes David > wrote: > > > > The problem was solved. > > Now occurs this error: > > > > TypeError at /index/ > > > > unsupported operand type(s) for ** or pow(): 'Decimal' and 'float' > > When you use DecimalFields, the value you get will be a Decimal > object, not a float. You will need to use other Decimal values (e.g. > Decimal('-0.5')) in expressions with them. > > http://docs.python.org/library/decimal > > In any case, this has nothing to do with numpy or scipy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertwiley at gmail.com Fri Oct 22 11:53:27 2010 From: bertwiley at gmail.com (bert wiley) Date: Fri, 22 Oct 2010 11:53:27 -0400 Subject: [SciPy-User] How to solve a single linear equation Message-ID: Hello Ive just started using scipy and im learning some algebra. Ive read the scipy tutorial and noticed they have an example of solving linear equations and it uses 3 of them and takes a matrice for the params and one for the constants. How do solve for a single linear equation like this x+3y+5z = 10 ? 
ive tried it like this A = mat('[1 3 5') b = mat('[10]') linalg.solve(A,b) but i get raise ValueError, 'expected square matrix' thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Oct 22 12:03:03 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Oct 2010 11:03:03 -0500 Subject: [SciPy-User] How to solve a single linear equation In-Reply-To: References: Message-ID: On Fri, Oct 22, 2010 at 10:53, bert wiley wrote: > Hello > ?? Ive just started using scipy and im learning some algebra. Ive read the > scipy tutorial and noticed they have an example of solving linear equations > and it uses 3 of them and takes a matrice for the params and one for the > constants. How do solve for a single linear equation like this? x+3y+5z = > 10? ??? ive tried it like this > > A = mat('[1 3 5') > b = mat('[10]') > > linalg.solve(A,b) > > > but i get > raise ValueError, 'expected square matrix' There is no unique point solution to that one equation. There is a plane in 3D space comprised of an infinite number of points that describes the solution to that equation. For three parameters, linalg.solve() requires three equations. That's why you get the ValueError saying that it expects a square matrix, 3x3 in this case. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From eric.moscardi at sophia.inria.fr Fri Oct 22 12:15:30 2010 From: eric.moscardi at sophia.inria.fr (moscardi) Date: Fri, 22 Oct 2010 18:15:30 +0200 Subject: [SciPy-User] how use the tutorial "Extending ndimage in C" In-Reply-To: <818683789.1009450.1287763415392.JavaMail.root@zmbs1.inria.fr> References: <818683789.1009450.1287763415392.JavaMail.root@zmbs1.inria.fr> Message-ID: <12E5554F-6F0A-4A2C-9255-4946F49E3CCF@sophia.inria.fr> Hello, i try to use the tutorial named "Extending ndimage in C" but It doesn't work. If anyone use geometric_transform() with a C function, can he help me? Thanks Eric On Oct 22, 2010, at 6:03 PM, Robert Kern wrote: > On Fri, Oct 22, 2010 at 10:53, bert wiley wrote: >> Hello >> Ive just started using scipy and im learning some algebra. Ive read the >> scipy tutorial and noticed they have an example of solving linear equations >> and it uses 3 of them and takes a matrice for the params and one for the >> constants. How do solve for a single linear equation like this x+3y+5z = >> 10 ? ive tried it like this >> >> A = mat('[1 3 5') >> b = mat('[10]') >> >> linalg.solve(A,b) >> >> >> but i get >> raise ValueError, 'expected square matrix' > > There is no unique point solution to that one equation. There is a > plane in 3D space comprised of an infinite number of points that > describes the solution to that equation. For three parameters, > linalg.solve() requires three equations. That's why you get the > ValueError saying that it expects a square matrix, 3x3 in this case. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

Eric MOSCARDI
INRIA - Virtual Plants
CIRAD, Avenue Agropolis
34398 Montpellier Cedex 5, France
04 67 61 58 00 (ask number 60 09)
email : eric.moscardi at sophia.inria.fr

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Fri Oct 22 12:19:15 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 22 Oct 2010 11:19:15 -0500
Subject: Re: [SciPy-User] how use the tutorial "Extending ndimage in C"
In-Reply-To: <12E5554F-6F0A-4A2C-9255-4946F49E3CCF@sophia.inria.fr>
References: <818683789.1009450.1287763415392.JavaMail.root@zmbs1.inria.fr> <12E5554F-6F0A-4A2C-9255-4946F49E3CCF@sophia.inria.fr>
Message-ID: 

On Fri, Oct 22, 2010 at 11:15, moscardi wrote:
> Hello,
> i try to use the tutorial named "Extending ndimage in C" but It doesn't
> work.

Please never just say "It doesn't work." Show us exactly what you
tried. Show us exactly what results you got (copy-and-paste any error
messages; never paraphrase). Tell us what results you expected.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From eric.moscardi at sophia.inria.fr Fri Oct 22 12:39:18 2010
From: eric.moscardi at sophia.inria.fr (moscardi)
Date: Fri, 22 Oct 2010 18:39:18 +0200
Subject: Re: [SciPy-User] how use the tutorial "Extending ndimage in C"
In-Reply-To: <148942105.1019462.1287764386527.JavaMail.root@zmbs1.inria.fr>
References: <818683789.1009450.1287763415392.JavaMail.root@zmbs1.inria.fr> <12E5554F-6F0A-4A2C-9255-4946F49E3CCF@sophia.inria.fr> <148942105.1019462.1287764386527.JavaMail.root@zmbs1.inria.fr>
Message-ID: 

Of course. The simple shift function, referenced on the scipy.ndimage page
(http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#ndimage-ccallbacks),
doesn't compile:

static int
_shift_function(int *output_coordinates, double* input_coordinates,
                int output_rank, int input_rank, void *callback_data)
{
    int ii;
    /* get the shift from the callback data pointer: */
    double shift = *(double*)callback_data;
    /* calculate the coordinates: */
    for(ii = 0; ii < irank; ii++)
        icoor[ii] = ocoor[ii] - shift;
    /* return OK status: */
    return 1;
}

because the arguments are not correctly named (irank -> input_rank,
icoor -> input_coordinates, ocoor -> output_coordinates). I renamed them,
compiled, and tried to run the Python code used as an example:

>>> import example
>>> array = arange(12, shape=(4,3), type = Float64)
>>> fnc = example.shift_function(0.5)
>>> print geometric_transform(array, fnc)

I added:

>>> from numpy import array

but the following syntax is not correct:

array = arange(12, shape=(4,3), type = Float64)

So I ran (note the numpy import, so that np.arange works):

>>> import numpy as np
>>> import example
>>> from scipy.ndimage import geometric_transform
>>> array = np.arange(12, dtype = np.float64).reshape(4,3)
>>> fnc = example.shift_function(0.5)
>>> print geometric_transform(array, fnc)

The expected result is:

[[ 0.      0.      0.    ]
 [ 0.      1.3625  2.7375]
 [ 0.      4.8125  6.1875]
 [ 0.      8.2625  9.6375]]

but the result I get is:

[[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]

Any ideas ?

Thanks,
Eric

On Oct 22, 2010, at 6:19 PM, Robert Kern wrote:
> On Fri, Oct 22, 2010 at 11:15, moscardi wrote:
>> Hello,
>> i try to use the tutorial named "Extending ndimage in C" but It doesn't
>> work.
> > Please never just say "It doesn't work." Show us exactly what you > tried. Show us exactly what results you got (copy-and-paste any error > messages; never paraphrase). Tell us what results you expected. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Eric MOSCARDI INRIA - Virtual Plants CIRAD, Avenue Agropolis 34398 Montpellier Cedex 5, France 04 67 61 58 00 (ask number 60 09) email : eric.moscardi at sophia.inria.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From benni.buch at gmx.de Fri Oct 22 13:32:25 2010 From: benni.buch at gmx.de (Benjamin Buch) Date: Fri, 22 Oct 2010 19:32:25 +0200 Subject: [SciPy-User] Scipy tests fail: build for wrong architecture Message-ID: Hi, I successfully built and installed scipy, but the tests fail: http://gist.github.com/640995 For every *.so file, the test say 'mach-o, but wrong architecture'. I checked, and the *.so files are 'Mach-O 64-bit bundle x86_64' I'm on OSX 10.6.4 with python.org 32-bit python 2.7. I think that python 2.7 requires all files to be build 32 bit. I don't know why all *.so are build for 64 bit. Is there a way or command that makes 'python setup.py build' build scipy for 32 bit? Best, Benjamin From ndbecker2 at gmail.com Fri Oct 22 14:56:34 2010 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 22 Oct 2010 14:56:34 -0400 Subject: [SciPy-User] tests on python3 Message-ID: python3 import scipy scipy.test() Running unit tests for scipy Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", line 318, in test self._show_system_info() File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", line 187, in _show_system_info nose = import_nose() File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", line 69, in import_nose raise ImportError(msg) ImportError: Need nose >= 0.10.0 for tests - see http://somethingaboutorange.com/mrl/projects/nose OK, let's try that >>> sudo easy_install-3.1 --upgrade nose install_dir /usr/lib/python3.1/site-packages/ Searching for nose Reading http://pypi.python.org/simple/nose/ Reading http://somethingaboutorange.com/mrl/projects/nose/ Best match: nose 0.11.4 Downloading http://somethingaboutorange.com/mrl/projects/nose/nose-0.11.4.tar.gz Processing nose-0.11.4.tar.gz Running nose-0.11.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install- AifmAS/nose-0.11.4/egg-dist-tmp-SwMTG_ no previously-included directories found matching 'doc/.build' File "build/bdist.linux-x86_64/egg/nose/inspector.py", line 40 except tokenize.TokenError, e: ^ SyntaxError: invalid syntax File "build/bdist.linux-x86_64/egg/nose/twistedtools.py", line 164 raise exc_type, exc_value, tb ^ ... lots more SyntaxErrors From gokhansever at gmail.com Fri Oct 22 21:44:57 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Fri, 22 Oct 2010 20:44:57 -0500 Subject: [SciPy-User] Passing a dictionary to fsolve as initial parameter Message-ID: Hello, I am getting NameError: global name 'kohler_data' is not defined when I try to execute the following code piece in script1. Here petters_critical_condition function is imported from an external library. 
In reality script2.py contains many similar functions that I use as an external library which then I import those functions into my main analysis script. Right now the execution can't go further due to the error. When I define the function within the script1.py than its ok. Is there any workaround for this or a bug? Thanks. This is: I[8]: scipy.__version__ O[8]: '0.8.0.dev6156' Sample code is below # script1.py from script2 import * kohler_data = {'rd': rd_accumu} rwcrit_accumu = fsolve(petters_critical_condition, kohler_data['rd']*2) # script2.py def petters_critical_condition(x): term1 = -kohler_const['kelvin']/x**2 * np.exp(kohler_const['kelvin']/x) * (x**3 - kohler_data['rd']**3) / (x**3 - kohler_data['rd']**3 * (1.0 - kohler_const['kappa'])) term2 = np.exp(kohler_const['kelvin']/x) * 3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 - kohler_const['kappa'])) term3 = -np.exp(kohler_const['kelvin']/x) * (x**3 - kohler_data['rd']**3) * \ 3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 -kohler_const['kappa']))**2 return term1+term2+term3 #sample data. I[4]: kohler_data O[4]: {'rd': array([ 6.80000000e-08, 8.05000000e-08, 9.85000000e-08, 1.16000000e-07, 1.42500000e-07, 1.90500000e-07, 3.29000000e-07, 4.66500000e-07, 6.83500000e-07, 1.09700000e-06, 1.34000000e-06, 1.44650000e-06, 1.74050000e-06, 2.25050000e-06, 2.70650000e-06])} following is the full error output: I[3]: run airplot.py --------------------------------------------------------------------------- NameError Traceback (most recent call last) /home/gsever/Desktop/ccnworks/thesis/part2_saudi/airplot.py in () 220 dn_accumu = pcasp_conc / cubic_meter_by_cubic_centimeter 221 kohler_data = {'rd': rd_accumu} --> 222 rwcrit_accumu = fsolve(petters_critical_condition, kohler_data['rd']*2) 223 scrit_accumu = petters_s(rwcrit_accumu) 224 kohler_data['rh'] = rh.mean() /usr/lib/python2.6/site-packages/scipy/optimize/minpack.py in fsolve(func, x0, args, fprime, full_output, col_deriv, xtol, maxfev, band, epsfcn, factor, diag, warning) 126 n = len(x0) 127 if type(args) != type(()): args = (args,) --> 128 check_func(func,x0,args,n,(n,)) 129 Dfun = fprime 130 if Dfun is None: /usr/lib/python2.6/site-packages/scipy/optimize/minpack.py in check_func(thefunc, x0, args, numinputs, output_shape) 11 12 def check_func(thefunc, x0, args, numinputs, output_shape=None): ---> 13 res = atleast_1d(thefunc(*((x0[:numinputs],)+args))) 14 if (output_shape is not None) and (shape(res) != output_shape): 15 if (output_shape[0] != 1): /home/gsever/Desktop/ccnworks/thesis/part2_saudi/modellib.py in petters_critical_condition(x) 313 term2 = np.exp(kohler_const['kelvin']/x) * 3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 - kohler_const['kappa'])) 314 term3 = -np.exp(kohler_const['kelvin']/x) * (x**3 - kohler_data['rd']**3) * \ --> 315 3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 -kohler_const['kappa']))**2 316 return term1+term2+term3 317 NameError: global name 'kohler_data' is not defined WARNING: Failure executing file: -- G?khan From josef.pktd at gmail.com Fri Oct 22 22:04:58 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 Oct 2010 22:04:58 -0400 Subject: [SciPy-User] Passing a dictionary to fsolve as initial parameter In-Reply-To: References: Message-ID: On Fri, Oct 22, 2010 at 9:44 PM, G?khan Sever wrote: > Hello, > > I am getting NameError: global name 'kohler_data' is not defined when > I try to execute the following code piece in script1. Here > petters_critical_condition function is imported from an external > library. 
In reality script2.py contains many similar functions that I > use as an external library which then I import those functions into my > main analysis script. Right now the execution can't go further due to > the error. When I define the function within the script1.py than its > ok. > > Is there any workaround for this or a bug? > > Thanks. > > This is: > > I[8]: scipy.__version__ > O[8]: '0.8.0.dev6156' > > Sample code is below > > # script1.py > from script2 import * > kohler_data = {'rd': rd_accumu} > rwcrit_accumu = fsolve(petters_critical_condition, kohler_data['rd']*2) > > > # script2.py > def petters_critical_condition(x): > ? ?term1 = -kohler_const['kelvin']/x**2 * > np.exp(kohler_const['kelvin']/x) * (x**3 > ? ? ? ? ? ?- kohler_data['rd']**3) / (x**3 - kohler_data['rd']**3 * > (1.0 - kohler_const['kappa'])) > ? ?term2 = np.exp(kohler_const['kelvin']/x) * 3*x**2 / (x**3 - > kohler_data['rd']**3 * (1.0 - kohler_const['kappa'])) > ? ?term3 = -np.exp(kohler_const['kelvin']/x) * (x**3 - > kohler_data['rd']**3) * \ > ? ? ? ? ? ?3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 > -kohler_const['kappa']))**2 > ? ?return term1+term2+term3 If this is the entire module, then none of the kohler dictionary should be defined. If you want to use them from the outer scope, then you have to define values in the same module. I'm not quite sure because I never try to do this. What I (would) do instead is to wrap petters_critical_condition in a class and add the kohler_data and kohler_const as attributes, that can then be accessed by the petters_critical_condition method Josef > #sample data. > I[4]: kohler_data > O[4]: > {'rd': array([ ?6.80000000e-08, ? 8.05000000e-08, ? 9.85000000e-08, > ? ? ? ? 1.16000000e-07, ? 1.42500000e-07, ? 1.90500000e-07, > ? ? ? ? 3.29000000e-07, ? 4.66500000e-07, ? 6.83500000e-07, > ? ? ? ? 1.09700000e-06, ? 1.34000000e-06, ? 1.44650000e-06, > ? ? ? ? 1.74050000e-06, ? 2.25050000e-06, ? 2.70650000e-06])} > > following is the full error output: > > I[3]: run airplot.py > --------------------------------------------------------------------------- > NameError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) > > /home/gsever/Desktop/ccnworks/thesis/part2_saudi/airplot.py in () > ? ?220 dn_accumu = pcasp_conc / cubic_meter_by_cubic_centimeter > ? ?221 kohler_data = {'rd': rd_accumu} > --> 222 rwcrit_accumu = fsolve(petters_critical_condition, kohler_data['rd']*2) > ? ?223 scrit_accumu = petters_s(rwcrit_accumu) > ? ?224 kohler_data['rh'] = rh.mean() > > /usr/lib/python2.6/site-packages/scipy/optimize/minpack.py in > fsolve(func, x0, args, fprime, full_output, col_deriv, xtol, maxfev, > band, epsfcn, factor, diag, warning) > ? ?126 ? ? n = len(x0) > ? ?127 ? ? if type(args) != type(()): args = (args,) > --> 128 ? ? check_func(func,x0,args,n,(n,)) > ? ?129 ? ? Dfun = fprime > ? ?130 ? ? if Dfun is None: > > /usr/lib/python2.6/site-packages/scipy/optimize/minpack.py in > check_func(thefunc, x0, args, numinputs, output_shape) > ? ? 11 > ? ? 12 def check_func(thefunc, x0, args, numinputs, output_shape=None): > ---> 13 ? ? res = atleast_1d(thefunc(*((x0[:numinputs],)+args))) > ? ? 14 ? ? if (output_shape is not None) and (shape(res) != output_shape): > ? ? 15 ? ? ? ? if (output_shape[0] != 1): > > /home/gsever/Desktop/ccnworks/thesis/part2_saudi/modellib.py in > petters_critical_condition(x) > ? ?313 ? ? term2 = np.exp(kohler_const['kelvin']/x) * 3*x**2 / (x**3 > - kohler_data['rd']**3 * (1.0 - kohler_const['kappa'])) > ? ?314 ? ? 
term3 = -np.exp(kohler_const['kelvin']/x) * (x**3 - > kohler_data['rd']**3) * \ > --> 315 ? ? ? ? ? ? 3*x**2 / (x**3 - kohler_data['rd']**3 * (1.0 > -kohler_const['kappa']))**2 > ? ?316 ? ? return term1+term2+term3 > ? ?317 > > NameError: global name 'kohler_data' is not defined > WARNING: Failure executing file: > > > > > -- > G?khan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From charlesr.harris at gmail.com Fri Oct 22 22:22:34 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 22 Oct 2010 20:22:34 -0600 Subject: [SciPy-User] tests on python3 In-Reply-To: References: Message-ID: On Fri, Oct 22, 2010 at 12:56 PM, Neal Becker wrote: > python3 > import scipy > > scipy.test() > Running unit tests for scipy > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", > line 318, in test > self._show_system_info() > File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", > line 187, in _show_system_info > nose = import_nose() > File "/usr/lib64/python3.1/site-packages/numpy/testing/nosetester.py", > line 69, in import_nose > raise ImportError(msg) > ImportError: Need nose >= 0.10.0 for tests - see > http://somethingaboutorange.com/mrl/projects/nose > > OK, let's try that > >>> > sudo easy_install-3.1 --upgrade nose > install_dir /usr/lib/python3.1/site-packages/ > Searching for nose > Reading http://pypi.python.org/simple/nose/ > Reading http://somethingaboutorange.com/mrl/projects/nose/ > Best match: nose 0.11.4 > Downloading > http://somethingaboutorange.com/mrl/projects/nose/nose-0.11.4.tar.gz > Processing nose-0.11.4.tar.gz > Running nose-0.11.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install- > AifmAS/nose-0.11.4/egg-dist-tmp-SwMTG_ > no previously-included directories found matching 'doc/.build' > File "build/bdist.linux-x86_64/egg/nose/inspector.py", line 40 > except tokenize.TokenError, e: > ^ > SyntaxError: invalid syntax > > File "build/bdist.linux-x86_64/egg/nose/twistedtools.py", line 164 > raise exc_type, exc_value, tb > ^ > ... lots more SyntaxErrors > > Maybe you need the python3K version of nose. It hasn't been released, you need to download it from it's repository, http://python-nose.googlecode.com/svn/branches/py3k Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From krzbar at chalmers.se Sat Oct 23 19:24:58 2010 From: krzbar at chalmers.se (Krzysztof Bartoszek) Date: Sun, 24 Oct 2010 01:24:58 +0200 Subject: [SciPy-User] Compiling scipy.sparse.sparsetools extension throws g++ internal error&In-Reply-To= Message-ID: Hi, I don't know whether anyone has done any follow up since March but I came across the exact same problem whilst installing scipy 0.8.0 under Python 2.7, the error output I got was : g++: scipy/sparse/sparsetools/csr_wrap.cxx scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? 
in format scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? in format scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format error: Command "g++ -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -fPIC Disabling the sparse package worked as said in the previous message, but I managed to identify the lines which cause the problem : They are sprintf(s,"%" NPY_INTP_FMT ",", size[i]); and sprintf(s,"%" NPY_INTP_FMT ",", array_size(ary,i)); in the function int require_size(PyArrayObject* ary, npy_intp* size, int n) in the files : scipy/sparse/sparsetools/csc_wrap.cxx scipy/sparse/sparsetools/coo_wrap.cxx scipy/sparse/sparsetools/bsr_wrap.cxx scipy/sparse/sparsetools/dia_wrap.cxx scipy/sparse/sparsetools/csr_wrap.cxx Commenting these two lines in these files makes scipy build fine, though I have not run it so I don't know what effect this will have on the functionality. I hope this helps someone and is useful. Cheers Krzysztof From yosefmel at post.tau.ac.il Sun Oct 24 10:16:28 2010 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Sun, 24 Oct 2010 16:16:28 +0200 Subject: [SciPy-User] optimize.leastsq without check_func Message-ID: <201010241616.28191.yosefmel@post.tau.ac.il> Hello, I'm using leastsq in a setting that requires only few steps to converge, and in that setting the call to check_func becomes a major overhead. Is there a way to use leastsq without this check? Or an alternative? Thanks, Yosef. From mondifero at gmail.com Sun Oct 24 10:43:35 2010 From: mondifero at gmail.com (O) Date: Sun, 24 Oct 2010 10:43:35 -0400 Subject: [SciPy-User] Central File Exchange for SciPy Message-ID: Hi everyone, I'm a recent convert from MatLab. One thing I miss is the Central File Exchange. Are there any plans to set up a site like this for our community? It occurs to me this could dramatically strengthen our user base. And by a "Central File Exchange" I mean something far simpler and less formal than "SciKits", where users can just post their code with some information about how it works. Just a thought. Cheers, O -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Sun Oct 24 11:22:13 2010 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 24 Oct 2010 11:22:13 -0400 Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.5.1 release candidate 1 In-Reply-To: References: Message-ID: On Sun, Oct 17, 2010 at 7:35 AM, Ralf Gommers wrote: > Hi, > > I am pleased to announce the availability of the first release > candidate of NumPy 1.5.1. This is a bug-fix release with no new > features compared to 1.5.0. [...] > Please report any other issues on the Numpy-discussion mailing list. Just installed on kubuntu-10.10, python-2.7 and python-3.1.2. Tests look fine for py2.7, but I see datetime errors with py3k: . 
====================================================================== ERROR: test_creation (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 10, in test_creation dt1 = np.dtype('M8[750%s]'%unit) TypeError: data type not understood ====================================================================== ERROR: test_creation_overflow (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 62, in test_creation_overflow timesteps = np.array([date], dtype='datetime64[s]')[0].astype(np.int64) TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_as (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 58, in test_divisor_conversion_as self.assertRaises(ValueError, lambda : np.dtype('M8[as/10]')) File "/usr/lib/python3.1/unittest.py", line 589, in assertRaises callableObj(*args, **kwargs) File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 58, in self.assertRaises(ValueError, lambda : np.dtype('M8[as/10]')) TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_bday (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 32, in test_divisor_conversion_bday assert np.dtype('M8[B/12]') == np.dtype('M8[2h]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_day (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 37, in test_divisor_conversion_day assert np.dtype('M8[D/12]') == np.dtype('M8[2h]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_fs (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 54, in test_divisor_conversion_fs assert np.dtype('M8[fs/100]') == np.dtype('M8[10as]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_hour (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 42, in test_divisor_conversion_hour assert np.dtype('m8[h/30]') == np.dtype('m8[2m]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_minute 
(test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 46, in test_divisor_conversion_minute assert np.dtype('m8[m/30]') == np.dtype('m8[2s]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_month (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 21, in test_divisor_conversion_month assert np.dtype('M8[M/2]') == np.dtype('M8[2W]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_second (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 50, in test_divisor_conversion_second assert np.dtype('m8[s/100]') == np.dtype('m8[10ms]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_week (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 26, in test_divisor_conversion_week assert np.dtype('m8[W/5]') == np.dtype('m8[B]') TypeError: data type not understood ====================================================================== ERROR: test_divisor_conversion_year (test_datetime.TestDateTime) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/darren/.local/lib/python3.1/site-packages/numpy/core/tests/test_datetime.py", line 16, in test_divisor_conversion_year assert np.dtype('M8[Y/4]') == np.dtype('M8[3M]') TypeError: data type not understood ---------------------------------------------------------------------- Ran 3061 tests in 19.197s FAILED (KNOWNFAIL=5, SKIP=4, errors=12) From josef.pktd at gmail.com Sun Oct 24 12:20:37 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Oct 2010 12:20:37 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 10:43 AM, O wrote: > > Hi everyone, > > I'm a recent convert from MatLab. > > One thing I miss is the Central File Exchange.? Are there any plans > to set up a site like this for our community?? It occurs to me this could > dramatically strengthen our user base.? And by a "Central File Exchange" > I mean something far simpler and less formal than "SciKits", where users > can just post their code with some information about how it works. There are methods available to publish your code more easily than a scikits, if it's just a single module than the cookbook is a good location, http://www.scipy.org/Cookbook . For anything larger, setting up a simple python package for pypi is relatively easy, e.g. numdifftools which is a translation of the matlab fileexchange program coauthored jointly with the file exchange author. 
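(To give an idea of "relatively easy": for a single module, the setup script
can be just a few lines -- a minimal sketch, where all the names are
placeholders:

# setup.py -- minimal sketch for publishing a one-module package
from distutils.core import setup

setup(name='mymodule',
      version='0.1',
      description='one-line description shown in the PyPI index',
      author='Your Name',
      py_modules=['mymodule'],  # the single module, mymodule.py
      )

after which "python setup.py register sdist upload" puts it on pypi.)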
The main thing I'm missing compared to the file exchange is the comment and starring system, which reduces the time to check out a new package a lot. Also compared to matlab developers, python developers are often keeping public source control repositories which makes finding "recipes" easier, but again finding something specific is a bit of a random search. A good but incomplete overview is in http://www.scipy.org/Topical_Software Maybe we could extend the purpose of ask.scipy.org to a package review/commenting and package "advertising". But I haven't found a search button on it yet. Josef > > Just a thought. > > Cheers, > > O > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Sun Oct 24 14:09:31 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Oct 2010 14:09:31 -0400 Subject: [SciPy-User] what is linsolve? sparse docs Message-ID: (from a question on ask.scipy.org) The docs for scipy sparse http://docs.scipy.org/doc/scipy/reference/sparse.html?highlight=linsolve contain an example that imports linsolve. I cannot find anything like this. Is this a historical artifact or some hidden module somewhere ? Can someone who knows this update the examples? (Another question: Can the doc page for scipy.sparse.linalg be broken up? It is very difficult to figure out what's in there without a toc list.) Thanks, From warren.weckesser at enthought.com Sun Oct 24 15:26:08 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 24 Oct 2010 14:26:08 -0500 Subject: [SciPy-User] what is linsolve? sparse docs In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 1:09 PM, wrote: > (from a question on ask.scipy.org) > > The docs for scipy sparse > > http://docs.scipy.org/doc/scipy/reference/sparse.html?highlight=linsolve > > contain an example that imports linsolve. > > I cannot find anything like this. Is this a historical artifact or > some hidden module somewhere ? > > Can someone who knows this update the examples? > I just updated scipy/sparse/info.py, which is where this documentation comes from. That file was last touched three years ago, and there have been changes in the source code since then. Here's a script containing the same code as in updated "Example 1" of that docstring: ----- from scipy.sparse import lil_matrix from scipy.sparse.linalg import spsolve from numpy.linalg import solve, norm from numpy.random import rand A = lil_matrix((1000, 1000)) A[0, :100] = rand(100) A[1, 100:200] = A[0, :100] A.setdiag(rand(1000)) # Now convert it to CSR format and solve A x = b for x: A = A.tocsr() b = rand(1000) x = spsolve(A, b) # Convert it to a dense matrix and solve, and check that the result # is the same: x_ = solve(A.todense(), b) # Now we can compute norm of the error with: err = norm(x-x_) print err < 1e-10 ----- Warren -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gokhansever at gmail.com Sun Oct 24 16:43:34 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sun, 24 Oct 2010 15:43:34 -0500
Subject: [SciPy-User] Simple root solving issues
Message-ID: 

Hello,

I have a simple test case below:

#!/usr/bin/env python

import numpy as np
from scipy.optimize import leastsq, fsolve

def fitfunc(gsd):
    #return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))*np.exp(-(np.log(Dp1)-np.log(Dp1))**2./(2.*np.log(gsd)**2.))
    return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))

dH_dlogDP1 = np.array([ 869.11014589])
Dp1 = np.array([ 0.02994996])
h1 = np.array([ 1906.7283])

res = leastsq(fitfunc, x0=1.5)
res2 = fsolve(fitfunc, x0=1.0)

The two return terms in fitfunc are equivalent since np.exp(-0) = 1.
Solving this equation by hand, I find gsd ~2.4. Both leastsq and
fsolve require initial estimates to work properly. Now the problem is
they are very sensitive to the choice of the initial parameter. x0=1.5
works (gives ~2.39), whereas with x0=1.0 no solution is found.

Is there a way to improve this sensitivity to the initial estimate,
or an alternative approach?

Thanks.

--
G?khan

From charlesr.harris at gmail.com Sun Oct 24 17:14:06 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 24 Oct 2010 15:14:06 -0600
Subject: [SciPy-User] Simple root solving issues
In-Reply-To: References: Message-ID: 

On Sun, Oct 24, 2010 at 2:43 PM, G?khan Sever wrote:

> Hello,
>
> I have a simple test case below:
>
> #!/usr/bin/env python
>
> import numpy as np
> from scipy.optimize import leastsq, fsolve
>
> def fitfunc(gsd):
>     #return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))*np.exp(-(np.log(Dp1)-np.log(Dp1))**2./(2.*np.log(gsd)**2.))
>     return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))
>
> dH_dlogDP1 = np.array([ 869.11014589])
> Dp1 = np.array([ 0.02994996])
> h1 = np.array([ 1906.7283])
>
> res = leastsq(fitfunc, x0=1.5)
> res2 = fsolve(fitfunc, x0=1.0)
>
> The two return terms in fitfunc are equivalent since np.exp(-0) = 1.
> Solving this equation by hand, I find gsd ~2.4. Both leastsq and
> fsolve require initial estimates to work properly. Now the problem is
> they are very sensitive to the choice of the initial parameter. x0=1.5
> works (gives ~2.39), whereas with x0=1.0 no solution is found.
>
> Is there a way to improve this sensitivity to the initial estimate,
> or an alternative approach?
>

Looks pretty much like a hyperbola here, it's the division by the log term.
Are you sure this is the right example?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From warren.weckesser at enthought.com Sun Oct 24 17:29:26 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 24 Oct 2010 16:29:26 -0500 Subject: [SciPy-User] Simple root solving issues In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 4:14 PM, Charles R Harris wrote: > > > On Sun, Oct 24, 2010 at 2:43 PM, G?khan Sever wrote: > >> Hello, >> >> I have a simple test case below: >> >> #!/usr/bin/env python >> >> import numpy as np >> from scipy.optimize import leastsq, fsolve >> >> def fitfunc(gsd): >> #return dH_dlogDP1 - >> >> (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))*np.exp(-(np.log(Dp1)-np.log(Dp1))**2./(2.*np.log(gsd)**2.)) >> return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd))) >> >> dH_dlogDP1 = np.array([ 869.11014589]) >> Dp1 = np.array([ 0.02994996]) >> h1 = np.array([ 1906.7283]) >> >> res = leastsq(fitfunc, x0=1.5) >> res2 = fsolve(fitfunc, x0=1.0) >> >> >> return terms in fitfunc are equivalent since np.exp(-0) = 1 >> I hand-solve this equation and find gsd as ~2.4. Both leastsq and >> fsolve requires initial estimates to work properly. Now the problem is >> they are very sensitive >> to the selection of initial parameter. x0=1.5 works (gives ~2.39) >> whereas x0=1.0 situation can't estimate a solution. >> > Your function is basically c1 - c2/log(gsd), so yeah, it will have a problem if the initial guess is 1.0. It works fine if the initial guess is, say, 1.01. >> Is there a way to improve this sensitive init estimate issue with an >> alternative approach? >> >> Solve c1*log(gsd) - c2 = 0 instead (but don't then change your initial guess to 0!). Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Sun Oct 24 17:32:06 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 24 Oct 2010 16:32:06 -0500 Subject: [SciPy-User] Simple root solving issues In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 4:14 PM, Charles R Harris wrote: > Looks pretty much like a hyperbola here, it's the division by the log term. > Are you sure this is the right example? > > Chuck Yes, it is a simple one unknown equation: dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd))) Putting the numbers in: 1.14 = 1/np.log(gsd) np.log(gsd) = 0.875 gsd = np.e**0.875 gsd ~ 2.399 -- G?khan From gokhansever at gmail.com Sun Oct 24 17:38:15 2010 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 24 Oct 2010 16:38:15 -0500 Subject: [SciPy-User] Simple root solving issues In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 4:29 PM, Warren Weckesser wrote: > > Your function is basically c1 - c2/log(gsd), so yeah, it will have a problem > if the initial guess is 1.0.? It works fine if the initial guess is, say, > 1.01. Yes, catch this better now. Interestingly it suffers when x0 < 1.00 too. Probably log(gsd<1) explodes the function by making the right term positive and restricting the function to find a zero. -- G?khan From josef.pktd at gmail.com Sun Oct 24 17:50:02 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Oct 2010 17:50:02 -0400 Subject: [SciPy-User] Simple root solving issues In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 5:38 PM, G?khan Sever wrote: > On Sun, Oct 24, 2010 at 4:29 PM, Warren Weckesser > wrote: >> >> Your function is basically c1 - c2/log(gsd), so yeah, it will have a problem >> if the initial guess is 1.0.? It works fine if the initial guess is, say, >> 1.01. 
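Following Warren's suggestion to solve c1*log(gsd) - c2 = 0 instead, the
rewritten problem should be much better behaved around gsd = 1 -- a quick
sketch (fitfunc2 is just an illustrative name, the numbers are from the
original script):

import numpy as np
from scipy.optimize import fsolve

dH_dlogDP1 = np.array([ 869.11014589])
h1 = np.array([ 1906.7283])

def fitfunc2(gsd):
    # same root as fitfunc, multiplied through by log(gsd):
    # c1*log(gsd) - c2 = 0, with c1 = dH_dlogDP1 and c2 = h1.sum()/sqrt(2*pi)
    return dH_dlogDP1 * np.log(gsd) - h1.sum() / (2 * np.pi)**0.5

print fsolve(fitfunc2, x0=1.0)   # should give ~2.399, even from x0=1.0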
> > Yes, catch this better now. Interestingly it suffers when x0 < 1.00 > too. Probably log(gsd<1) explodes the function by making the right > term positive and restricting the function to find a zero. > > -- > G?khan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > with a bit of random search -------- #!/usr/bin/env python import numpy as np from scipy.optimize import leastsq, fsolve def fitfunc(gsd): #return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd)))*np.exp(-(np.log(Dp1)-np.log(Dp1))**2./(2.*np.log(gsd)**2.)) return dH_dlogDP1 - (h1.sum()/((2*np.pi)**0.5*np.log(gsd))) dH_dlogDP1 = np.array([ 869.11014589]) Dp1 = np.array([ 0.02994996]) h1 = np.array([ 1906.7283]) res = leastsq(fitfunc, x0=1.5) np.random.seed(5) lb = -20 ub = 20 for x0 in lb + (ub-lb)*np.random.random(50): res2 = fsolve(fitfunc, x0=x0) if np.abs(res2 - x0) > 1e-6: print x0, res2 ---- prints >>> 0.736719514918 [ 0.09143545] 3.19351240758 [ 2.39943611] 3.997167865 [ 2.39943611] 3.10651432519 [ 2.39943611] 0.618904476216 [ 0.15916137] 1.8582599189 [ 2.39943611] From ralf.gommers at googlemail.com Sun Oct 24 20:09:35 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 25 Oct 2010 08:09:35 +0800 Subject: [SciPy-User] Compiling scipy.sparse.sparsetools extension throws g++ internal error&In-Reply-To= In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 7:24 AM, Krzysztof Bartoszek wrote: > Hi, > > I don't know whether anyone has done any follow up since March but I came across the exact same problem whilst installing scipy 0.8.0 under Python 2.7, the error output I got was : This was fixed just after 0.8.0 was released, please try the 0.8.x branch from svn. Cheers, Ralf > > g++: scipy/sparse/sparsetools/csr_wrap.cxx > scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format > scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? in format > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format > scipy/sparse/sparsetools/csr_wrap.cxx: In function ?int require_size(PyArrayObject*, npy_intp*, int)?: > scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before ?PRIdPTR? > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing ?%? in format > scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format > scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected `)' before ?PRIdPTR? > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing ?%? 
in format > scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format > error: Command "g++ -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -fPIC > > Disabling the sparse package worked as said in the previous message, but I managed to identify the lines which cause the problem : > They are > ?sprintf(s,"%" NPY_INTP_FMT ",", size[i]); > and > ?sprintf(s,"%" NPY_INTP_FMT ",", array_size(ary,i)); > in the function > int require_size(PyArrayObject* ary, npy_intp* size, int n) > in the files ?: > scipy/sparse/sparsetools/csc_wrap.cxx > scipy/sparse/sparsetools/coo_wrap.cxx > scipy/sparse/sparsetools/bsr_wrap.cxx > scipy/sparse/sparsetools/dia_wrap.cxx > scipy/sparse/sparsetools/csr_wrap.cxx > Commenting these two lines in these files makes scipy build fine, though I have not run it so I don't know what effect this will have on the functionality. > I hope this helps someone and is useful. > > Cheers > > Krzysztof > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Sun Oct 24 20:32:54 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Oct 2010 20:32:54 -0400 Subject: [SciPy-User] what is linsolve? sparse docs In-Reply-To: References: Message-ID: On Sun, Oct 24, 2010 at 3:26 PM, Warren Weckesser wrote: > > > On Sun, Oct 24, 2010 at 1:09 PM, wrote: >> >> (from a question on ask.scipy.org) >> >> The docs for scipy sparse >> >> http://docs.scipy.org/doc/scipy/reference/sparse.html?highlight=linsolve >> >> contain an example that imports linsolve. >> >> I cannot find anything like this. Is this a historical artifact or >> some hidden module somewhere ? >> >> Can someone who knows this update the examples? > > > I just updated scipy/sparse/info.py, which is where this documentation comes > from.? That file was last touched three years ago, and there have been > changes in the source code since then. 
> > Here's a script containing the same code as in updated "Example 1" of that > docstring: > > ----- > from scipy.sparse import lil_matrix > from scipy.sparse.linalg import spsolve > from numpy.linalg import solve, norm > from numpy.random import rand > > A = lil_matrix((1000, 1000)) > A[0, :100] = rand(100) > A[1, 100:200] = A[0, :100] > A.setdiag(rand(1000)) > > # Now convert it to CSR format and solve A x = b for x: > > A = A.tocsr() > b = rand(1000) > x = spsolve(A, b) > > # Convert it to a dense matrix and solve, and check that the result > # is the same: > > x_ = solve(A.todense(), b) > > # Now we can compute norm of the error with: > > err = norm(x-x_) > print err < 1e-10 > ----- Thanks Warren, I added a link to your example on ask.scipy.org Josef > > Warren > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From hnry2k at hotmail.com Sun Oct 24 20:53:25 2010 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Sun, 24 Oct 2010 19:53:25 -0500 Subject: [SciPy-User] No module enthought.enstaller.egg Message-ID: Hi, I was using ETS-3.2 before and now I am trying to install ETS-3.5 inside SAGE math CAS (ubuntu-amd64), I have all the prerequisites installed, and also I have made an ETS installer for it, everything (from AppTools to TRAITS_GUI) compiles OK, although I end up with the next diagnostic: ============================================================================== ... Building directory: TraitsGUI-3.5.0 Executing: /opt/sage-4.5.3/local/bin/python setup.py release build bdist_egg -d /opt/sage-4.5.3/spkg/build/ets-3.5.0-20101024.p0/src/dist WARNING: Could not build egg.info files: No module named enthought.enstaller.egg error: No urls, filenames, or requirements specified (see --help) error: Not a URL, existing file, or requirement spec: 'ETS_*' Error installing ETS ----------------------------------------------------------------------------------------------------------------------------------------------- It seems to me that I am very close to succeed. Please, can you tell me what steps I am missing and need to follow in order to finish the installation? Thank you very much in advance. Jorge -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Oct 24 21:02:41 2010 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Oct 2010 20:02:41 -0500 Subject: [SciPy-User] No module enthought.enstaller.egg In-Reply-To: References: Message-ID: 2010/10/24 Jorge E. ?Sanchez Sanchez : > Hi, > > I was using ETS-3.2 before and now I am trying to install ETS-3.5 inside > SAGE math CAS (ubuntu-amd64), I have all the prerequisites installed, and > also I have made an ETS installer for it, everything (from AppTools to > TRAITS_GUI) compiles OK, although I end up with the next diagnostic: ETS questions are best sent to enthought-dev rather than here. https://mail.enthought.com/mailman/listinfo/enthought-dev You may also wish to ask on one of the SAGE lists since I do not recognize those messages as coming from anything in ETS. It might have something to do with their packaging system. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco

From hnry2k at hotmail.com Sun Oct 24 21:15:50 2010
From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=)
Date: Sun, 24 Oct 2010 20:15:50 -0500
Subject: [SciPy-User] No module enthought.enstaller.egg
In-Reply-To: References: , Message-ID: 

I am sorry, I was not aware of this. Thank you Robert, I'll do that.

Jorge

> From: robert.kern at gmail.com
> Date: Sun, 24 Oct 2010 20:02:41 -0500
> To: scipy-user at scipy.org
> Subject: Re: [SciPy-User] No module enthought.enstaller.egg
>
> 2010/10/24 Jorge E. ?Sanchez Sanchez :
> > Hi,
> >
> > I was using ETS-3.2 before and now I am trying to install ETS-3.5 inside
> > SAGE math CAS (ubuntu-amd64), I have all the prerequisites installed, and
> > also I have made an ETS installer for it, everything (from AppTools to
> > TRAITS_GUI) compiles OK, although I end up with the next diagnostic:
>
> ETS questions are best sent to enthought-dev rather than here.
>
> https://mail.enthought.com/mailman/listinfo/enthought-dev
>
> You may also wish to ask on one of the SAGE lists since I do not
> recognize those messages as coming from anything in ETS. It might have
> something to do with their packaging system.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From hao at biostat.ucsf.edu Sun Oct 24 21:42:14 2010
From: hao at biostat.ucsf.edu (Hao Xiong)
Date: Sun, 24 Oct 2010 18:42:14 -0700
Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix
Message-ID: <4CC4E076.9030609@biostat.ucsf.edu>

Hi everyone,

I am trying to compute the eigenvectors corresponding to the d+1
smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and
then

W = sparse.csr_matrix(W)
A = W.dot(W) # W.T * W
W,V = eigen_symmetric(A,d+1, which='SM')

The biggest problem is that the algorithm fails to converge and I get
all zeros as eigenvectors for a testing dataset. Using dense SVD I
got the expected results.

The second problem is that this sparse version is much slower than the
dense version (u,s,vh = svd(W)).

The testing data is only 1000x1000, while I expect the real data will
have millions by millions of entries. Each row will have only a dozen
to at most dozens of non-zero entries.

Thanks,
Hao

From lutz.maibaum at gmail.com Sun Oct 24 23:26:24 2010
From: lutz.maibaum at gmail.com (Lutz Maibaum)
Date: Sun, 24 Oct 2010 20:26:24 -0700
Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix
In-Reply-To: <4CC4E076.9030609@biostat.ucsf.edu>
References: <4CC4E076.9030609@biostat.ucsf.edu>
Message-ID: 

On Oct 24, 2010, at 6:42 PM, Hao Xiong wrote:
> I am trying to compute the eigenvectors corresponding to the d+1
> smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and
> then
> W = sparse.csr_matrix(W)
> A = W.dot(W) # W.T * W
> W,V = eigen_symmetric(A,d+1, which='SM')
>
> The biggest problem is that the algorithm fails to converge and I get
> all zeros as eigenvectors for a testing dataset. Using dense SVD I
> got the expected results.

What operating system are you using? The sparse eigensolvers have some
issues on 64-bit OS-X (see http://projects.scipy.org/scipy/ticket/1220).
I am currently having similar issues.
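A self-contained way to check whether your build is affected is to compare
against the dense solver -- a rough sketch, with a made-up random test
matrix standing in for your data:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigen_symmetric

np.random.seed(0)
n, d = 100, 2
Wd = np.random.rand(n, n)
Wd[Wd < 0.95] = 0.0                  # keep roughly 5% of the entries
W = sparse.csr_matrix(Wd)
A = (W.T * W).tocsr()                # symmetric, so eigen_symmetric applies

vals, vecs = eigen_symmetric(A, d + 1, which='SM', maxiter=10000)
print np.sort(vals)                  # d+1 smallest eigenvalues from ARPACK
print np.sort(np.linalg.eigvalsh(A.todense()))[:d + 1]   # dense reference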
Oddly enough, I get reasonable results if I convert the matrix from the native float64 to the float32 data type. If you are on a 64 bit system, could you try that and let me know if it changes anything? For example, define A as A = W.dot(W).astype(numpy.float32) Best, Lutz From dvnganga at yahoo.fr Mon Oct 25 00:07:56 2010 From: dvnganga at yahoo.fr (dvnganga at yahoo.fr) Date: Mon, 25 Oct 2010 04:07:56 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 49 Message-ID: <1577929651-1287979677-cardhu_decombobulator_blackberry.rim.net-1565281152-@bda346.bisx.produk.on.blackberry> I listened to your email using DriveCarefully and will respond as soon as I can. Download DriveCarefully for free at www.drivecarefully.com Sent via my BlackBerry from Vodacom - let your email find you! From pav at iki.fi Mon Oct 25 06:03:43 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 25 Oct 2010 10:03:43 +0000 (UTC) Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix References: <4CC4E076.9030609@biostat.ucsf.edu> Message-ID: Sun, 24 Oct 2010 18:42:14 -0700, Hao Xiong wrote: > I am trying to compute the eigenvectors corresponding to the d+1 > smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and > then > W = sparse.csr_matrix(W) > A = W.dot(W) # W.T * W That is W*W and not (W.T)*W > W,V = eigen_symmetric(A,d+1, which='SM') > > The biggest problem is that the algorithm fails to converge and I get > all zeros as eigenvectors for a testing dataset. Using dense SVD I got > the expected results. You can try playing with setting the maxiter parameter to allow ARPACK to spend more iterations on the problem. -- Pauli Virtanen From lpc at cmu.edu Mon Oct 25 07:20:07 2010 From: lpc at cmu.edu (Luis Pedro Coelho) Date: Mon, 25 Oct 2010 07:20:07 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: References: <201005141650.17597.lpc@cmu.edu> Message-ID: <201010250720.15897.lpc@cmu.edu> Hi, On Thursday, October 21, 2010 11:49:21 am Zachary Pincus wrote: > > Whatever fixes you make, please remember to contribute them back to > > scikits.image! The plugin is being maintained there as well. > > Oh gosh, I didn't realize that -- great! (Did you ever diagnose that > segfault?) I made some of the changes to the scikits.image plugin when I "borrowed" this for mahotas (my own image processing library [1]). I did fix one or two bugs in dealing with memory so those might explain the random segfaults. [1] : http://luispedro.org/software/mahotas. > Anyway, the only serious change I've made has to do with properly > saving 2D uint8 images, which need to have a palette defined for > certain formats (PNG maybe? I don't recall...). Anyway, the > scikits.image version has diverged in structure somewhat from the one > I use internally in my code, but here's the core of the change: Another change I made was to get rid of some copies and make the Python wrapping much faster. This is probably what you're seeing here. HTH Luis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. 
URL: From zachary.pincus at yale.edu Mon Oct 25 08:56:13 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 25 Oct 2010 08:56:13 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: <201010250720.15897.lpc@cmu.edu> References: <201005141650.17597.lpc@cmu.edu> <201010250720.15897.lpc@cmu.edu> Message-ID: <43F8ECE9-079C-45F2-B25B-2D6D573D75A8@yale.edu> Hi Luis, >> Anyway, the only serious change I've made has to do with properly >> saving 2D uint8 images, which need to have a palette defined for >> certain formats (PNG maybe? I don't recall...). Anyway, the >> scikits.image version has diverged in structure somewhat from the one >> I use internally in my code, but here's the core of the change: > > Another change I made was to get rid of some copies and make the > Python > wrapping much faster. This is probably what you're seeing here. I think that this commit: http://github.com/luispedro/mahotas/commit/f04f224079f2ed9fc70bd8b25d3fab2c0b2df622 breaks reading of images with an alpha channel, which in the new version will get silently discarded. The old version modified the array in-place so that the optional alpha doesn't get touched. Your version will need to have different code-paths for BGR and BGRA images, which is of course easy enough to do. Also, certain image formats require palettes with 8-bit greyscale images (this is what I was talking about above), and so images saved with FreeImage won't necessarily open correctly without a palette defined. This commit to the scikits version (which I sketched out before) does that: http://github.com/stefanv/scikits.image/commit/325b59c59ade2a09f2b788f26801bb3f8d8a0d5e Zach On Oct 25, 2010, at 7:20 AM, Luis Pedro Coelho wrote: > Hi, > > On Thursday, October 21, 2010 11:49:21 am Zachary Pincus wrote: >>> Whatever fixes you make, please remember to contribute them back to >>> scikits.image! The plugin is being maintained there as well. >> >> Oh gosh, I didn't realize that -- great! (Did you ever diagnose that >> segfault?) > > I made some of the changes to the scikits.image plugin when I > "borrowed" this > for mahotas (my own image processing library [1]). > > I did fix one or two bugs in dealing with memory so those might > explain the > random segfaults. > > [1] : http://luispedro.org/software/mahotas. > >> Anyway, the only serious change I've made has to do with properly >> saving 2D uint8 images, which need to have a palette defined for >> certain formats (PNG maybe? I don't recall...). Anyway, the >> scikits.image version has diverged in structure somewhat from the one >> I use internally in my code, but here's the core of the change: > > Another change I made was to get rid of some copies and make the > Python > wrapping much faster. This is probably what you're seeing here. > > HTH > Luis > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From dvnganga at yahoo.fr Mon Oct 25 08:57:39 2010 From: dvnganga at yahoo.fr (dvnganga at yahoo.fr) Date: Mon, 25 Oct 2010 12:57:39 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 50 Message-ID: <1088594856-1288011460-cardhu_decombobulator_blackberry.rim.net-975228451-@bda346.bisx.produk.on.blackberry> I listened to your email using DriveCarefully and will respond as soon as I can. Download DriveCarefully for free at www.drivecarefully.com Sent via my BlackBerry from Vodacom - let your email find you! 
From hao at biostat.ucsf.edu Mon Oct 25 11:13:10 2010 From: hao at biostat.ucsf.edu (Hao Xiong) Date: Mon, 25 Oct 2010 08:13:10 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC4E076.9030609@biostat.ucsf.edu> Message-ID: <4CC59E86.7050105@biostat.ucsf.edu> I am on 64-bit Linux. I will try float32. Don't know if this matters but the data originally are integers, 0, 1, and 2. They are the encoding of genotypes. Regards, Hao On 10/24/2010 08:26 PM, Lutz Maibaum wrote: > On Oct 24, 2010, at 6:42 PM, Hao Xiong wrote: > >> I am trying to compute the eigenvectors corresponding to the d+1 smallest >> eigenvalues of A=W.T*W. I started with W as a dense matrix and then >> W = sparse.csr_matrix(W) >> A = W.dot(W) # W.T * W >> W,V = eigen_symmetric(A,d+1, which='SM') >> >> The biggest problem is that the algorithm fails to converge and I get >> all zeros as eigenvectors for a testing dataset. Using dense SVD I >> got the expected results. >> > What operating system are you using? The sparse eigensolvers have some issues on 64-bit OS-X (see http://projects.scipy.org/scipy/ticket/1220). > > I am currently having similar issues. Oddly enough, I get reasonable results if I convert the matrix from the native float64 to the float32 data type. If you are on a 64 bit system, could you try that and let me know if it changes anything? For example, define A as > > A = W.dot(W).astype(numpy.float32) > > Best, > > Lutz > > From hao at biostat.ucsf.edu Mon Oct 25 14:14:16 2010 From: hao at biostat.ucsf.edu (Hao Xiong) Date: Mon, 25 Oct 2010 11:14:16 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: Message-ID: <4CC5C8F8.4050002@biostat.ucsf.edu> >> I am trying to compute the eigenvectors corresponding to the d+1 >> smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and >> then >> W = sparse.csr_matrix(W) >> A = W.dot(W) # W.T * W > That is W*W and not (W.T)*W Thanks, Pauli. Somehow I convinced myself it was otherwise. I have corrected that. >> W,V = eigen_symmetric(A,d+1, which='SM') >> >> The biggest problem is that the algorithm fails to converge and I get >> all zeros as eigenvectors for a testing dataset. Using dense SVD I got >> the expected results. > You can try playing with setting the maxiter parameter to allow ARPACK to > spend more iterations on the problem. I tried maxiter=100000 and still got zero vectors. I must be missing something. Regards, Hao From josef.pktd at gmail.com Mon Oct 25 14:23:57 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Oct 2010 14:23:57 -0400 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: <4CC5C8F8.4050002@biostat.ucsf.edu> References: <4CC5C8F8.4050002@biostat.ucsf.edu> Message-ID: On Mon, Oct 25, 2010 at 2:14 PM, Hao Xiong wrote: > > >>> I am trying to compute the eigenvectors corresponding to the d+1 >>> smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and >>> then >>> W = sparse.csr_matrix(W) >>> A = W.dot(W) # W.T * W > >> That is W*W and not (W.T)*W > > Thanks, Pauli. Somehow I convinced myself it was otherwise. I have corrected that. > >>> W,V = eigen_symmetric(A,d+1, which='SM') >>> >>> The biggest problem is that the algorithm fails to converge and I get >>> all zeros as eigenvectors for a testing dataset. Using dense SVD I got >>> the expected results. > >> You can try playing with setting the maxiter parameter to allow ARPACK to >> spend more iterations on the problem. 
> > I tried maxiter=100000 and still got zero vectors. I must be missing something. just a weird idea, since I have no idea what eigen_symmetric is doing, and there are no docs that I have seen for the extra options: Is it possible to run a dense svd on a (random) subset of the data and then use those as starting values for the sparse decompositions? Josef > > Regards, > Hao > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Mon Oct 25 15:02:38 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 25 Oct 2010 19:02:38 +0000 (UTC) Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix References: <4CC5C8F8.4050002@biostat.ucsf.edu> Message-ID: Mon, 25 Oct 2010 11:14:16 -0700, Hao Xiong wrote: [clip] >> You can try playing with setting the maxiter parameter to allow ARPACK >> to spend more iterations on the problem. > > I tried maxiter=100000 and still got zero vectors. I must be missing > something. Can you perhaps write a small routine that generates such a fake matrix, or save it to a file and put one online. This might help in finding out why the routine doesn't behave as expected. -- Pauli Virtanen From lutz.maibaum at gmail.com Mon Oct 25 16:35:21 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Mon, 25 Oct 2010 13:35:21 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC5C8F8.4050002@biostat.ucsf.edu> Message-ID: <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> On Oct 25, 2010, at 12:02 PM, Pauli Virtanen wrote: > Mon, 25 Oct 2010 11:14:16 -0700, Hao Xiong wrote: >>> You can try playing with setting the maxiter parameter to allow ARPACK >>> to spend more iterations on the problem. >> >> I tried maxiter=100000 and still got zero vectors. I must be missing >> something. > > Can you perhaps write a small routine that generates such a fake matrix, > or save it to a file and put one online. This might help in finding out > why the routine doesn't behave as expected. I think I am seeing the same problem as Hao. I have uploaded a sample matrix exhibiting this problem: http://stanford.edu/~maibaum/matrix.mtx.gz A simple test to illustate the problem is the following: In [1]: from scipy.io import mmread In [2]: import scipy.sparse.linalg.eigen In [3]: import numpy as np In [4]: a = mmread("matrix.mtx.gz") In [5]: scipy.sparse.linalg.eigen(a.astype(np.float32))[0] Out[5]: array([ 1.00000119 +0.00000000e+00j, 0.99999952 +9.27230701e-07j, 0.99999952 -9.27230701e-07j, 1.00000107 +0.00000000e+00j, 1.00000715 +0.00000000e+00j, 1.00001323 +0.00000000e+00j]) In [6]: scipy.sparse.linalg.eigen(a.astype(np.float64))[0] Out[6]: array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]) Note that the 64-bit version takes about 10 times longer than the 32 bit version, and returns only zeros for the eigenvalues. This happens on NumPy 1.5.0, SciPy 0.8.0, Python 2.6.6 on 64-bit OS-X, where float64 is the default type of floats. I think this is essentially a convergence issue. Calling the eigensolver with an extra argument tol=1e-4 will give reasonable results. While debugging this, I came up with a few questions that the docs don't quite answer, and it would be great if someone could help me out: 1. The sparse eigensolvers take a "maxiter" argument, which defaults to None. I would have expected this to mean that there is no cap on the amount of iterations. 
Looking at the code, it seems like it is set to 10 times the size of the matrix by default. Is this correct? 2. The eigensolver also takes an optional tolerance argument, which defaults to 0. This seems like an odd choice for an iterative numerical algorithm. The ARPACK manual mentions that the default tolerance is set to the machine precision, is this what a tolerance of 0 means? This might explain why I don't get reasonable results using 64 bit floats, because the precision is just too high. 3. Is there any way for the calling function to know whether convergence was achieved, or whether the eigensolver returned because the maxiter iterations were performed? Something like a "NotConverged" exception, maybe? Hao, could you try to call the eigensolver with a generous tolerance argument, and see if this fixes your problem? If your issue is unrelated, I apologize for hijacking this thread. Thanks, Lutz From pav at iki.fi Mon Oct 25 18:27:20 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 25 Oct 2010 22:27:20 +0000 (UTC) Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> Message-ID: Mon, 25 Oct 2010 13:35:21 -0700, Lutz Maibaum wrote: [clip] > I think I am seeing the same problem as Hao. I have uploaded a sample > matrix exhibiting this problem: > http://stanford.edu/~maibaum/matrix.mtx.gz Thanks! Not sure if I can reproduce the issue yet (it's been running an hour, and the end is not in sight :) [clip] > I think this is essentially a convergence issue. Calling the eigensolver > with an extra argument tol=1e-4 will give reasonable results. The wrappers are currently written to print a warning if the iteration does not converge, and I do not see yet why that doesn't happen. [clip] > 1. The sparse eigensolvers take a "maxiter" argument, which defaults to > None. I would have expected this to mean that there is no cap on the > amount of iterations. Looking at the code, it seems like it is set to 10 > times the size of the matrix by default. Is this correct? Correct. ARPACK wants some limit, so something has to be given. I don't know if 10*n is the correct choice, though. > 2. The eigensolver also takes an optional tolerance argument, which > defaults to 0. This seems like an odd choice for an iterative numerical > algorithm. The ARPACK manual mentions that the default tolerance is set > to the machine precision, is this what a tolerance of 0 means? This > might explain why I don't get reasonable results using 64 bit floats, > because the precision is just too high. Yes, the default 0 means machine epsilon, same as in ARPACK (where it's also a 'default'). Documentation is lacking, though. > 3. Is there any way for the calling function to know whether convergence > was achieved, or whether the eigensolver returned because the maxiter > iterations were performed? Something like a "NotConverged" exception, > maybe? It will be changed to raise an exception in Scipy 0.9 on non-convergence, with the obtained partial result stuffed in the exception. As it is now, it only raises a warning if it does not convergence. But no, there is currently no good programmatic way of catching this. 
-- Pauli Virtanen From lpc at cmu.edu Mon Oct 25 18:27:05 2010 From: lpc at cmu.edu (Luis Pedro Coelho) Date: Mon, 25 Oct 2010 18:27:05 -0400 Subject: [SciPy-User] FreeImage <-> numpy IO wrappers In-Reply-To: <43F8ECE9-079C-45F2-B25B-2D6D573D75A8@yale.edu> References: <201005141650.17597.lpc@cmu.edu> <201010250720.15897.lpc@cmu.edu> <43F8ECE9-079C-45F2-B25B-2D6D573D75A8@yale.edu> Message-ID: <201010251828.46622.lpc@cmu.edu> Hi Zach, That's a good catch! I'll fix the code (both on my distribution and on scikits.image [I'll fix my branch and send a pull request upstream]). I prefer to have the code be a bit more complex with the extra if than slow it down (the difference is easily measurable in case someone is worried about premature optimisation). Thank you, Luis On Monday, October 25, 2010 08:56:13 am Zachary Pincus wrote: > Hi Luis, > > >> Anyway, the only serious change I've made has to do with properly > >> saving 2D uint8 images, which need to have a palette defined for > >> certain formats (PNG maybe? I don't recall...). Anyway, the > >> scikits.image version has diverged in structure somewhat from the one > > > >> I use internally in my code, but here's the core of the change: > > Another change I made was to get rid of some copies and make the > > Python > > wrapping much faster. This is probably what you're seeing here. > > I think that this commit: > http://github.com/luispedro/mahotas/commit/f04f224079f2ed9fc70bd8b25d3fab2c > 0b2df622 breaks reading of images with an alpha channel, which in the new > version will get silently discarded. The old version modified the > array in-place so that the optional alpha doesn't get touched. Your > version will need to have different code-paths for > BGR and BGRA images, which is of course easy enough to do. > > Also, certain image formats require palettes with 8-bit greyscale > images (this is what I was talking about above), and so images saved > with FreeImage won't necessarily open correctly without a palette > defined. This commit to the scikits version (which I sketched out > before) does that: > http://github.com/stefanv/scikits.image/commit/325b59c59ade2a09f2b788f26801 > bb3f8d8a0d5e > > Zach > > On Oct 25, 2010, at 7:20 AM, Luis Pedro Coelho wrote: > > Hi, > > > > On Thursday, October 21, 2010 11:49:21 am Zachary Pincus wrote: > >>> Whatever fixes you make, please remember to contribute them back to > >>> scikits.image! The plugin is being maintained there as well. > >> > >> Oh gosh, I didn't realize that -- great! (Did you ever diagnose that > >> segfault?) > > > > I made some of the changes to the scikits.image plugin when I > > "borrowed" this > > for mahotas (my own image processing library [1]). > > > > I did fix one or two bugs in dealing with memory so those might > > explain the > > random segfaults. > > > > [1] : http://luispedro.org/software/mahotas. > > > >> Anyway, the only serious change I've made has to do with properly > >> saving 2D uint8 images, which need to have a palette defined for > >> certain formats (PNG maybe? I don't recall...). Anyway, the > >> scikits.image version has diverged in structure somewhat from the one > > > >> I use internally in my code, but here's the core of the change: > > Another change I made was to get rid of some copies and make the > > Python > > wrapping much faster. This is probably what you're seeing here. 
> > > > HTH > > Luis > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From tonightwedrink at hotmail.com Mon Oct 25 20:08:44 2010 From: tonightwedrink at hotmail.com (ben h) Date: Tue, 26 Oct 2010 00:08:44 +0000 (UTC) Subject: [SciPy-User] [newbie] how to compare two datasets References: <1279028224.3792.20.camel@falconeer> Message-ID: Th?ger Emil Juul Thorsen fys.ku.dk> writes: > > # Now say depth is the first column, and value is second: > > tck = scipy.interpolate.splrep(realdata[:, 0], realdata[:, 1]) I am unable to perform this when values in realdata are negative. I can't simply make them positive, as there are mixed +ve and -ve values. Any clues to a solution? Ben. From lutz.maibaum at gmail.com Mon Oct 25 20:39:50 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Mon, 25 Oct 2010 17:39:50 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> Message-ID: On Oct 25, 2010, at 3:27 PM, Pauli Virtanen wrote: > Mon, 25 Oct 2010 13:35:21 -0700, Lutz Maibaum wrote: > [clip] >> I think I am seeing the same problem as Hao. I have uploaded a sample >> matrix exhibiting this problem: >> http://stanford.edu/~maibaum/matrix.mtx.gz > > Thanks! Not sure if I can reproduce the issue yet (it's been running an > hour, and the end is not in sight :) That's odd. On my machine the 32 bit computation takes about a minute, whereas the 64 bit one takes about 15 minutes before it terminates. >> I think this is essentially a convergence issue. Calling the eigensolver >> with an extra argument tol=1e-4 will give reasonable results. > > The wrappers are currently written to print a warning if the iteration > does not converge, and I do not see yet why that doesn't happen. Interesting, I don't get a warning. I see that there is a convergence check in _SymmetricArpackParams.iterate(), but not in _UnsymmetricArpackParams.iterate(). Could that be the reason? >> 1. The sparse eigensolvers take a "maxiter" argument, which defaults to >> None. I would have expected this to mean that there is no cap on the >> amount of iterations. Looking at the code, it seems like it is set to 10 >> times the size of the matrix by default. Is this correct? > > Correct. ARPACK wants some limit, so something has to be given. I don't > know if 10*n is the correct choice, though. Thanks for confirming this. It would be great if that could be added to the docs. >> 2. The eigensolver also takes an optional tolerance argument, which >> defaults to 0. This seems like an odd choice for an iterative numerical >> algorithm. The ARPACK manual mentions that the default tolerance is set >> to the machine precision, is this what a tolerance of 0 means? This >> might explain why I don't get reasonable results using 64 bit floats, >> because the precision is just too high. > > Yes, the default 0 means machine epsilon, same as in ARPACK (where it's > also a 'default'). Documentation is lacking, though. > >> 3. 
Is there any way for the calling function to know whether convergence >> was achieved, or whether the eigensolver returned because the maxiter >> iterations were performed? Something like a "NotConverged" exception, >> maybe? > > It will be changed to raise an exception in Scipy 0.9 on non-convergence, > with the obtained partial result stuffed in the exception. That would very useful. Thanks for all your help! Lutz From lutz.maibaum at gmail.com Mon Oct 25 21:31:08 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Mon, 25 Oct 2010 18:31:08 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> Message-ID: <0F6FED9F-0EB4-468C-A417-18031C20304F@gmail.com> On Oct 25, 2010, at 5:39 PM, Lutz Maibaum wrote: > On Oct 25, 2010, at 3:27 PM, Pauli Virtanen wrote: >> The wrappers are currently written to print a warning if the iteration >> does not converge, and I do not see yet why that doesn't happen. > > Interesting, I don't get a warning. I see that there is a convergence check in _SymmetricArpackParams.iterate(), but not in _UnsymmetricArpackParams.iterate(). Could that be the reason? The test is actually there, the symmetric case just has an additional test. However, I noticed that the warning is triggered when the returned "info" status is -1. Is it possible that it should be compared to 1? From http://www.caam.rice.edu/software/ARPACK/UG/node137.html: c INFO Integer. (INPUT/OUTPUT) c If INFO .EQ. 0, a randomly initial residual vector is used. c If INFO .NE. 0, RESID contains the initial residual vector, c possibly from a previous run. c Error flag on output. c = 0: Normal exit. c = 1: Maximum number of iterations taken. c All possible eigenvalues of OP has been found. IPARAM(5) c returns the number of wanted converged Ritz values. c = 2: No longer an informational error. Deprecated starting c with release 2 of ARPACK. c = 3: No shifts could be applied during a cycle of the c Implicitly restarted Arnoldi iteration. One possibility c is to increase the size of NCV relative to NEV. c See remark 4 below. c = -1: N must be positive. c = -2: NEV must be positive. c = -3: NCV-NEV >= 2 and less than or equal to N. c = -4: The maximum number of Arnoldi update iteration c must be greater than zero. c = -5: WHICH must be one of 'LM', 'SM', 'LR', 'SR', 'LI', 'SI' c = -6: BMAT must be one of 'I' or 'G'. c = -7: Length of private work array is not sufficient. c = -8: Error return from LAPACK eigenvalue calculation; c = -9: Starting vector is zero. c = -10: IPARAM(7) must be 1,2,3,4. c = -11: IPARAM(7) = 1 and BMAT = 'G' are incompatable. c = -12: IPARAM(1) must be equal to 0 or 1. c = -9999: Could not build an Arnoldi factorization. c IPARAM(5) returns the size of the current Arnoldi c factorization. Thanks, Lutz From dvnganga at yahoo.fr Mon Oct 25 21:32:22 2010 From: dvnganga at yahoo.fr (dvnganga at yahoo.fr) Date: Tue, 26 Oct 2010 01:32:22 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 52 Message-ID: <1264105765-1288056749-cardhu_decombobulator_blackberry.rim.net-1995834976-@bda346.bisx.produk.on.blackberry> I listened to your email using DriveCarefully and will respond as soon as I can. Download DriveCarefully for free at www.drivecarefully.com Sent via my BlackBerry from Vodacom - let your email find you! 
From aarchiba at physics.mcgill.ca Mon Oct 25 21:42:09 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 25 Oct 2010 21:42:09 -0400 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC4E076.9030609@biostat.ucsf.edu> Message-ID: On 25 October 2010 06:03, Pauli Virtanen wrote: > Sun, 24 Oct 2010 18:42:14 -0700, Hao Xiong wrote: >> I am trying to compute the eigenvectors corresponding to the d+1 >> smallest eigenvalues of A=W.T*W. I started with W as a dense matrix and >> then >> W = sparse.csr_matrix(W) >> A = W.dot(W) # W.T * W > > That is W*W and not (W.T)*W Just quickly: is your matrix really symmetric? This could lead to all sorts of bizarre failures to converge. Anne From yliu73 at wisc.edu Mon Oct 25 23:46:26 2010 From: yliu73 at wisc.edu (22Nick22 Yanpei Liu) Date: Mon, 25 Oct 2010 22:46:26 -0500 Subject: [SciPy-User] How to select multiple columns at the same time? Message-ID: <73209f37c0c5.4cc608c2@wiscmail.wisc.edu> Hi: Suppose A is an n * m matrix. I want to extract its first b columns just like Matlab command A(1:n, 1:b). What's the best way to do that? Thanks Nick -- --------------------------------------------------- "Nick" Yanpei Liu Ph.D Student Electrical & Computer Engineering University of Wisconsin - Madison http://homepages.cae.wisc.edu/~yanpei/ --------------------------------------------------- From lutz.maibaum at gmail.com Tue Oct 26 00:02:05 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Mon, 25 Oct 2010 21:02:05 -0700 Subject: [SciPy-User] How to select multiple columns at the same time? In-Reply-To: <73209f37c0c5.4cc608c2@wiscmail.wisc.edu> References: <73209f37c0c5.4cc608c2@wiscmail.wisc.edu> Message-ID: <259E0490-2BC2-4602-A9A9-D4D51E5376BB@gmail.com> On Oct 25, 2010, at 8:46 PM, 22Nick22 Yanpei Liu wrote: > Suppose A is an n * m matrix. I want to extract its first b columns just like Matlab command A(1:n, 1:b). What's the best way to do that? Try A[:,:b] Hope this helps, Lutz From kwgoodman at gmail.com Tue Oct 26 00:02:31 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 25 Oct 2010 21:02:31 -0700 Subject: [SciPy-User] How to select multiple columns at the same time? In-Reply-To: <73209f37c0c5.4cc608c2@wiscmail.wisc.edu> References: <73209f37c0c5.4cc608c2@wiscmail.wisc.edu> Message-ID: On Mon, Oct 25, 2010 at 8:46 PM, 22Nick22 Yanpei Liu wrote: > Suppose A is an n * m matrix. I want to extract its first b columns just like Matlab command A(1:n, 1:b). What's the best way to do that? Make an array: >> import numpy as np >> a = np.arange(12).reshape(3,4) >> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) Extract first b columns: >> b = 2 >> a[:,:b] array([[0, 1], [4, 5], [8, 9]]) Extract first b columns and first n rows: >> n = 2 >> a[:n,:b] array([[0, 1], [4, 5]]) From hao at biostat.ucsf.edu Tue Oct 26 00:26:17 2010 From: hao at biostat.ucsf.edu (Hao Xiong) Date: Mon, 25 Oct 2010 21:26:17 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> Message-ID: <4CC65869.2080301@biostat.ucsf.edu> Hi Lutz, I think our problems are related but with different symptoms. First, changing to float32 did not help; I got the same errors. Second, changing tolerance to 1e-4, 1e-5, 1e-15, all quash warning but do not solve the problem: all zero eigenvalues and eigenvectors. 
I have uploaded a 100x100 symmetric matrix at http://www.epibiostat.ucsf.edu/biostat/XiongH/sym_sp.zip On my machine, eigen_symmetric(A,3, which='SM', maxiter=10000) results in all zero eigenvalues and eigenvectors, if A is the uploaded matrix read using mmread. However, converting it to dense and calling np.linalg.eig results in correct answer. One way to check this is that the eigenvector corresponding to the least eigenvalue, which should be 0, is a multiple of a vector of 1's. I am wondering if a sensible starting value is needed. Iterative methods often need a reasonable starting value. Best, Hao On 10/25/2010 01:35 PM, Lutz Maibaum wrote: > On Oct 25, 2010, at 12:02 PM, Pauli Virtanen wrote: > >> Mon, 25 Oct 2010 11:14:16 -0700, Hao Xiong wrote: >> >>>> You can try playing with setting the maxiter parameter to allow ARPACK >>>> to spend more iterations on the problem. >>>> >>> I tried maxiter=100000 and still got zero vectors. I must be missing >>> something. >>> >> Can you perhaps write a small routine that generates such a fake matrix, >> or save it to a file and put one online. This might help in finding out >> why the routine doesn't behave as expected. >> > I think I am seeing the same problem as Hao. I have uploaded a sample matrix exhibiting this problem: > http://stanford.edu/~maibaum/matrix.mtx.gz > > A simple test to illustate the problem is the following: > > In [1]: from scipy.io import mmread > > In [2]: import scipy.sparse.linalg.eigen > > In [3]: import numpy as np > > In [4]: a = mmread("matrix.mtx.gz") > > In [5]: scipy.sparse.linalg.eigen(a.astype(np.float32))[0] > Out[5]: > array([ 1.00000119 +0.00000000e+00j, 0.99999952 +9.27230701e-07j, > 0.99999952 -9.27230701e-07j, 1.00000107 +0.00000000e+00j, > 1.00000715 +0.00000000e+00j, 1.00001323 +0.00000000e+00j]) > > In [6]: scipy.sparse.linalg.eigen(a.astype(np.float64))[0] > Out[6]: > array([ 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]) > > > Note that the 64-bit version takes about 10 times longer than the 32 bit version, and returns only zeros for the eigenvalues. This happens on NumPy 1.5.0, SciPy 0.8.0, Python 2.6.6 on 64-bit OS-X, where float64 is the default type of floats. > > I think this is essentially a convergence issue. Calling the eigensolver with an extra argument tol=1e-4 will give reasonable results. While debugging this, I came up with a few questions that the docs don't quite answer, and it would be great if someone could help me out: > > 1. The sparse eigensolvers take a "maxiter" argument, which defaults to None. I would have expected this to mean that there is no cap on the amount of iterations. Looking at the code, it seems like it is set to 10 times the size of the matrix by default. Is this correct? > > 2. The eigensolver also takes an optional tolerance argument, which defaults to 0. This seems like an odd choice for an iterative numerical algorithm. The ARPACK manual mentions that the default tolerance is set to the machine precision, is this what a tolerance of 0 means? This might explain why I don't get reasonable results using 64 bit floats, because the precision is just too high. > > 3. Is there any way for the calling function to know whether convergence was achieved, or whether the eigensolver returned because the maxiter iterations were performed? Something like a "NotConverged" exception, maybe? > > Hao, could you try to call the eigensolver with a generous tolerance argument, and see if this fixes your problem? If your issue is unrelated, I apologize for hijacking this thread. 
> > Thanks, > > Lutz > > From hao at biostat.ucsf.edu Tue Oct 26 01:32:02 2010 From: hao at biostat.ucsf.edu (Hao Xiong) Date: Mon, 25 Oct 2010 22:32:02 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> Message-ID: <4CC667D2.1000107@biostat.ucsf.edu> I should report the version of various packages I use. I have a Debian machine running testing version but with a custom compiled Python-2.7 and NumPy-1.5.1RC1 and SciPy-0.8.0. I specified MKL when I compiled NumPy and Scipy and I passed -O3 to GCC. NumPy's basic testing finishes successively, but SciPy's crashes with a segfault. On a Gentoo machine with Python-2.6 and NumPy-1.5 and SciPy-0.8.0 linked against Atlas library, I can get NumPy's basic test finish successively, but SciPy's fails with one failure. Hope this helps. Thanks, Hao From lutz.maibaum at gmail.com Tue Oct 26 01:48:13 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Mon, 25 Oct 2010 22:48:13 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: <4CC65869.2080301@biostat.ucsf.edu> References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <4CC65869.2080301@biostat.ucsf.edu> Message-ID: On Mon, Oct 25, 2010 at 9:26 PM, Hao Xiong wrote: > Second, changing tolerance to 1e-4, 1e-5, 1e-15, all quash warning but > do not solve the problem: all zero eigenvalues and eigenvectors. Interesting. Did you get a warning about non-convergence before? I'm not sure what's going on with the 1e-15 tolerance, but the other ones are probably too large because the smallest eigenvalues seem to be very close to zero. 
For your matrix, I get In [28]: scipy.sparse.linalg.eigen_symmetric(a,3, which='SM', tol=1e-12, maxiter=10000000) Out[28]: (array([ 2.13492457e-17, 9.54397558e-07, 1.77823892e-06]), array([[-0.1 , 0.05414805, -0.03697394], [-0.1 , 0.04864488, 0.12934357], [-0.1 , 0.09785515, -0.02710373], [-0.1 , 0.12079696, -0.095882 ], [-0.1 , -0.0923547 , 0.06131896], [-0.1 , 0.08566995, 0.02677637], [-0.1 , -0.00999687, -0.03477074], [-0.1 , 0.10420394, 0.0904605 ], [-0.1 , -0.01824658, 0.12656016], [-0.1 , 0.03850215, -0.16266434], [-0.1 , 0.10411247, 0.12564386], [-0.1 , 0.0847864 , -0.08006353], [-0.1 , 0.06881941, -0.03651171], [-0.1 , 0.02100945, 0.12596802], [-0.1 , 0.00698142, -0.14227155], [-0.1 , -0.0994901 , -0.01865521], [-0.1 , 0.05150979, -0.13437869], [-0.1 , 0.12983085, 0.10247783], [-0.1 , 0.20811634, 0.04037155], [-0.1 , 0.14284242, 0.07440172], [-0.1 , 0.08759283, 0.00897286], [-0.1 , 0.11652933, 0.11583934], [-0.1 , 0.08273911, -0.12928089], [-0.1 , 0.15103551, 0.08544608], [-0.1 , -0.10887856, -0.03683742], [-0.1 , 0.08946787, 0.01810116], [-0.1 , -0.21466925, 0.08808048], [-0.1 , 0.01112506, 0.11875543], [-0.1 , 0.03862264, -0.03816272], [-0.1 , -0.08819346, 0.0469191 ], [-0.1 , -0.08715582, -0.10397484], [-0.1 , 0.09957673, 0.12540574], [-0.1 , -0.10165562, 0.10154619], [-0.1 , -0.02138075, 0.06997714], [-0.1 , -0.02087899, -0.04523328], [-0.1 , 0.07205966, 0.00801408], [-0.1 , 0.06474043, 0.00830429], [-0.1 , 0.08648864, -0.00438077], [-0.1 , 0.09298343, 0.04886763], [-0.1 , 0.07158097, 0.0782138 ], [-0.1 , 0.01239778, -0.15765419], [-0.1 , -0.05888361, 0.03320853], [-0.1 , 0.08010641, 0.08588525], [-0.1 , 0.03127534, -0.15888655], [-0.1 , 0.15375382, -0.00072328], [-0.1 , 0.1309185 , 0.01948518], [-0.1 , -0.21072633, 0.05625481], [-0.1 , 0.00123581, -0.19868411], [-0.1 , -0.04948594, -0.1179604 ], [-0.1 , 0.03724257, -0.18880828], [-0.1 , -0.05376647, 0.11361879], [-0.1 , 0.05143578, -0.11411724], [-0.1 , -0.04570302, 0.13384669], [-0.1 , -0.05617232, -0.09347502], [-0.1 , -0.20512585, 0.0484587 ], [-0.1 , -0.027912 , -0.1848302 ], [-0.1 , 0.14621125, -0.00988872], [-0.1 , -0.10030626, -0.09077817], [-0.1 , -0.0363287 , 0.02784762], [-0.1 , -0.21623947, 0.06780762], [-0.1 , -0.06138235, -0.1349 ], [-0.1 , -0.09814152, -0.04398249], [-0.1 , 0.12720599, 0.00705402], [-0.1 , -0.01507454, -0.18508998], [-0.1 , -0.00798772, -0.23027451], [-0.1 , 0.0084914 , -0.14105232], [-0.1 , -0.00326878, 0.1905542 ], [-0.1 , -0.11332749, -0.01003244], [-0.1 , 0.16373491, 0.07366324], [-0.1 , -0.15722344, 0.05073253], [-0.1 , 0.04282908, 0.05747035], [-0.1 , -0.11459224, 0.1258188 ], [-0.1 , -0.03079556, 0.12889243], [-0.1 , -0.06469642, 0.15025778], [-0.1 , 0.18106343, 0.04211254], [-0.1 , -0.11284705, 0.05143415], [-0.1 , -0.14384552, -0.01344659], [-0.1 , 0.00723068, -0.19844225], [-0.1 , -0.05921825, -0.12152038], [-0.1 , -0.20116698, 0.08917197], [-0.1 , -0.17052863, 0.05183343], [-0.1 , -0.01618908, -0.05137175], [-0.1 , -0.04433511, -0.06585839], [-0.1 , 0.03211274, 0.1278789 ], [-0.1 , 0.12588347, -0.08004173], [-0.1 , -0.08788273, -0.10250587], [-0.1 , 0.01156012, -0.00283793], [-0.1 , -0.05788733, -0.08254978], [-0.1 , 0.08076025, -0.03895826], [-0.1 , 0.02232925, 0.1221555 ], [-0.1 , -0.11745394, -0.02053012], [-0.1 , 0.01355673, 0.12304368], [-0.1 , -0.17090312, 0.03684983], [-0.1 , 0.21020815, 0.00314479], [-0.1 , -0.05367038, -0.13541344], [-0.1 , -0.11600589, -0.07075897], [-0.1 , -0.02728704, 0.09827715], [-0.1 , -0.09539752, 0.04939529], [-0.1 , 0.0058363 , 0.18280319], [-0.1 , 
-0.04509233, -0.0022041 ]])) which seems in good agreement with the dense solution. Best, Lutz From pav at iki.fi Tue Oct 26 05:05:19 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 26 Oct 2010 09:05:19 +0000 (UTC) Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <0F6FED9F-0EB4-468C-A417-18031C20304F@gmail.com> Message-ID: Mon, 25 Oct 2010 18:31:08 -0700, Lutz Maibaum wrote: > On Oct 25, 2010, at 5:39 PM, Lutz Maibaum wrote: >> On Oct 25, 2010, at 3:27 PM, Pauli Virtanen wrote: >>> The wrappers are currently written to print a warning if the iteration >>> does not converge, and I do not see yet why that doesn't happen. >> >> Interesting, I don't get a warning. I see that there is a convergence >> check in _SymmetricArpackParams.iterate(), but not in >> _UnsymmetricArpackParams.iterate(). Could that be the reason? > > The test is actually there, the symmetric case just has an additional > test. > > However, I noticed that the warning is triggered when the returned > "info" status is -1. Is it possible that it should be compared to 1? > From http://www.caam.rice.edu/software/ARPACK/UG/node137.html: [clip] > c INFO Integer. (INPUT/OUTPUT) > c If INFO .EQ. 0, a randomly initial residual vector is used. > c If INFO .NE. 0, RESID contains the initial residual vector, > c possibly from a previous run. > c Error flag on output. > c = 0: Normal exit. > c = 1: Maximum number of iterations taken. [clip] Seems indeed that the check is wrong. Needs fixing. -- Pauli Virtanen From dvnganga at yahoo.fr Tue Oct 26 05:07:40 2010 From: dvnganga at yahoo.fr (dvnganga at yahoo.fr) Date: Tue, 26 Oct 2010 09:07:40 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 53 Message-ID: <1263002801-1288084061-cardhu_decombobulator_blackberry.rim.net-1265857965-@bda346.bisx.produk.on.blackberry> I listened to your email using DriveCarefully and will respond as soon as I can. Download DriveCarefully for free at www.drivecarefully.com Sent via my BlackBerry from Vodacom - let your email find you! From almar.klein at gmail.com Tue Oct 26 05:41:39 2010 From: almar.klein at gmail.com (Almar Klein) Date: Tue, 26 Oct 2010 11:41:39 +0200 Subject: [SciPy-User] ANN: Simple Structured Data Format (SSDF) Message-ID: Hi all, I have had this little project lying around for a while, and recently got some positive remarks from a few people, so I thought it might be useful to other people too. http://code.google.com/p/ssdf/ Description ======== The Simple Structured Data Format (SSDF), is a format for storing structured (scientific) data. The goal of the format is to be easily readable by humans as well as computers. SSDF has similar goals as JSON, but also enables (relatively) efficient storing of binary data and is much easier to read. SSDF iswell suited for storing scientific data, and provides a way to create a datastore that can be opened and edited with any text editor. SSDF is also very suitable for configuration files or ini-like files. Cheers, Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonasmo287 at yahoo.de Tue Oct 26 06:17:19 2010 From: jonasmo287 at yahoo.de (Jonas Mo) Date: Tue, 26 Oct 2010 11:17:19 +0100 (BST) Subject: [SciPy-User] seg fault when loading any subpackage Message-ID: <685703.76580.qm@web27306.mail.ukl.yahoo.com> Hello scipy community, I hope you can help me. 
I have openSuse 11.2 on a x86_64 machine and have installed scipy via Yast from the download.opensuse.org/repositories/science/ repo. When I run scipy.test() an immediate segmentation fault is produced. Also whenever I want to load a subpackage, a segmentation fault occurs. Has anyone an idea how I can isolate this problem, or more informations? gdb gives also nothing, that would help me. (see http://pastebin.com/XcAt4NMQ for output) Thanks for your ideas and suggestions in advance, ?Jonas > python -c "import scipy;import scipy.io" Segmentation fault > python -c "import scipy;scipy.test()" Running unit tests for scipy NumPy version 1.3.0 NumPy is installed in /usr/lib64/python2.6/site-packages/numpy SciPy version 0.8.0 SciPy is installed in /usr/lib64/python2.6/site-packages/scipy Python version 2.6.2 (r262:71600, Jun 17 2010, 13:37:45) [GCC 4.4.1 [gcc-4_4-branch revision 150839]] nose version 0.11.4 Segmentation fault -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.holbrook at gmail.com Tue Oct 26 11:23:02 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Tue, 26 Oct 2010 07:23:02 -0800 Subject: [SciPy-User] ANN: Simple Structured Data Format (SSDF) In-Reply-To: References: Message-ID: Interesting. Reminds me a little of YAML. Speaking of: How do you think this compares to YAML? --Josh On Tue, Oct 26, 2010 at 1:41 AM, Almar Klein wrote: > Hi all, > > I have had this little project lying around for a while, and recently got > some positive remarks from a few people, so I thought it might be useful to > other people too. > > http://code.google.com/p/ssdf/ > > Description > ======== > > The Simple Structured Data Format (SSDF), is a format for storing structured > (scientific) data. The goal of the format is to be easily readable by humans > as well as computers. > > SSDF has similar goals as JSON, but also enables (relatively) efficient > storing of binary data and is much easier to read. > > SSDF iswell suited for storing scientific data, and provides a way to create > a datastore that can be opened and edited with any text editor. SSDF is also > very suitable for configuration files or ini-like files. > > > Cheers, > ? Almar > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From almar.klein at gmail.com Tue Oct 26 11:47:57 2010 From: almar.klein at gmail.com (Almar Klein) Date: Tue, 26 Oct 2010 17:47:57 +0200 Subject: [SciPy-User] ANN: Simple Structured Data Format (SSDF) In-Reply-To: References: Message-ID: On 26 October 2010 17:23, Joshua Holbrook wrote: > Interesting. Reminds me a little of YAML. Speaking of: How do you > think this compares to YAML? > Well, I've never really used YAML, but looking at some example, here's a few observations: * I think SSDF looks just a bit more natural. For example strings are just as you'd define them in most programming languages (using quotes), while in YAML it's a string if it can't be interpreted as anything else. * SSDF supports binary arrays (of any dimension and of most common types), which are always compressed (using ZLIB). * SSDF is line based: there's one data element per line. This also means that you cannot write multi line strings, which you can in YAML. Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Tue Oct 26 12:33:41 2010 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 26 Oct 2010 16:33:41 +0000 (UTC) Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <0F6FED9F-0EB4-468C-A417-18031C20304F@gmail.com> Message-ID: Tue, 26 Oct 2010 09:05:19 +0000, Pauli Virtanen wrote: > Seems indeed that the check is wrong. Needs fixing. If you have time, please test: http://github.com/pv/scipy-work/tree/bug/1313-arpack Note that the function is there renamed from `eigen` to `eigs`, as the name shadowed with the package name (plus obvious reasons). -- Pauli Virtanen From hao at biostat.ucsf.edu Tue Oct 26 12:47:00 2010 From: hao at biostat.ucsf.edu (Hao Xiong) Date: Tue, 26 Oct 2010 09:47:00 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <4CC65869.2080301@biostat.ucsf.edu> Message-ID: <4CC70604.8080109@biostat.ucsf.edu> Thanks, Lutz, for checking this for me. I am beginning to suspect my problem is a compilation issue. On my Gentoo machine after installing arpack package separately, scipy.sparse.linalg.test() gets Running unit tests for scipy.sparse.linalg NumPy version 1.5.0 NumPy is installed in /usr/lib64/python2.6/site-packages/numpy SciPy version 0.8.0 SciPy is installed in /usr/lib64/python2.6/site-packages/scipy Python version 2.6.5 (release26-maint, Oct 24 2010, 22:13:01) [GCC 4.4.4] nose version 0.11.4 ..../usr/lib64/python2.6/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:259: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) ../usr/lib64/python2.6/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:75: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) ....................K...Segmentation fault So a few questions: 1. Should I install arpack package separately? I think SciPy source comes with arpack, so it may not be necessary; but could it hurt? 2. What is the recommended way of supplying flags to SciPy's build scripts? I have set my CFLAGS to "-march=native -mtune=native -pipe -O3 -msse." But if I compile SciPy directly with this, build fails. Gentoo filters flags so its build system succeeds. Thanks, Hao On 10/25/2010 10:48 PM, Lutz Maibaum wrote: > On Mon, Oct 25, 2010 at 9:26 PM, Hao Xiong wrote: >> Second, changing tolerance to 1e-4, 1e-5, 1e-15, all quash warning but >> do not solve the problem: all zero eigenvalues and eigenvectors. > Interesting. Did you get a warning about non-convergence before? I'm > not sure what's going on with the 1e-15 tolerance, but the other ones > are probably too large because the smallest eigenvalues seem to be > very close to zero. 
For your matrix, I get > > In [28]: scipy.sparse.linalg.eigen_symmetric(a,3, which='SM', > tol=1e-12, maxiter=10000000) > Out[28]: > (array([ 2.13492457e-17, 9.54397558e-07, 1.77823892e-06]), > array([[-0.1 , 0.05414805, -0.03697394], > [-0.1 , 0.04864488, 0.12934357], > [-0.1 , 0.09785515, -0.02710373], > [-0.1 , 0.12079696, -0.095882 ], > [-0.1 , -0.0923547 , 0.06131896], > [-0.1 , 0.08566995, 0.02677637], > [-0.1 , -0.00999687, -0.03477074], > [-0.1 , 0.10420394, 0.0904605 ], > [-0.1 , -0.01824658, 0.12656016], > [-0.1 , 0.03850215, -0.16266434], > [-0.1 , 0.10411247, 0.12564386], > [-0.1 , 0.0847864 , -0.08006353], > [-0.1 , 0.06881941, -0.03651171], > [-0.1 , 0.02100945, 0.12596802], > [-0.1 , 0.00698142, -0.14227155], > [-0.1 , -0.0994901 , -0.01865521], > [-0.1 , 0.05150979, -0.13437869], > [-0.1 , 0.12983085, 0.10247783], > [-0.1 , 0.20811634, 0.04037155], > [-0.1 , 0.14284242, 0.07440172], > [-0.1 , 0.08759283, 0.00897286], > [-0.1 , 0.11652933, 0.11583934], > [-0.1 , 0.08273911, -0.12928089], > [-0.1 , 0.15103551, 0.08544608], > [-0.1 , -0.10887856, -0.03683742], > [-0.1 , 0.08946787, 0.01810116], > [-0.1 , -0.21466925, 0.08808048], > [-0.1 , 0.01112506, 0.11875543], > [-0.1 , 0.03862264, -0.03816272], > [-0.1 , -0.08819346, 0.0469191 ], > [-0.1 , -0.08715582, -0.10397484], > [-0.1 , 0.09957673, 0.12540574], > [-0.1 , -0.10165562, 0.10154619], > [-0.1 , -0.02138075, 0.06997714], > [-0.1 , -0.02087899, -0.04523328], > [-0.1 , 0.07205966, 0.00801408], > [-0.1 , 0.06474043, 0.00830429], > [-0.1 , 0.08648864, -0.00438077], > [-0.1 , 0.09298343, 0.04886763], > [-0.1 , 0.07158097, 0.0782138 ], > [-0.1 , 0.01239778, -0.15765419], > [-0.1 , -0.05888361, 0.03320853], > [-0.1 , 0.08010641, 0.08588525], > [-0.1 , 0.03127534, -0.15888655], > [-0.1 , 0.15375382, -0.00072328], > [-0.1 , 0.1309185 , 0.01948518], > [-0.1 , -0.21072633, 0.05625481], > [-0.1 , 0.00123581, -0.19868411], > [-0.1 , -0.04948594, -0.1179604 ], > [-0.1 , 0.03724257, -0.18880828], > [-0.1 , -0.05376647, 0.11361879], > [-0.1 , 0.05143578, -0.11411724], > [-0.1 , -0.04570302, 0.13384669], > [-0.1 , -0.05617232, -0.09347502], > [-0.1 , -0.20512585, 0.0484587 ], > [-0.1 , -0.027912 , -0.1848302 ], > [-0.1 , 0.14621125, -0.00988872], > [-0.1 , -0.10030626, -0.09077817], > [-0.1 , -0.0363287 , 0.02784762], > [-0.1 , -0.21623947, 0.06780762], > [-0.1 , -0.06138235, -0.1349 ], > [-0.1 , -0.09814152, -0.04398249], > [-0.1 , 0.12720599, 0.00705402], > [-0.1 , -0.01507454, -0.18508998], > [-0.1 , -0.00798772, -0.23027451], > [-0.1 , 0.0084914 , -0.14105232], > [-0.1 , -0.00326878, 0.1905542 ], > [-0.1 , -0.11332749, -0.01003244], > [-0.1 , 0.16373491, 0.07366324], > [-0.1 , -0.15722344, 0.05073253], > [-0.1 , 0.04282908, 0.05747035], > [-0.1 , -0.11459224, 0.1258188 ], > [-0.1 , -0.03079556, 0.12889243], > [-0.1 , -0.06469642, 0.15025778], > [-0.1 , 0.18106343, 0.04211254], > [-0.1 , -0.11284705, 0.05143415], > [-0.1 , -0.14384552, -0.01344659], > [-0.1 , 0.00723068, -0.19844225], > [-0.1 , -0.05921825, -0.12152038], > [-0.1 , -0.20116698, 0.08917197], > [-0.1 , -0.17052863, 0.05183343], > [-0.1 , -0.01618908, -0.05137175], > [-0.1 , -0.04433511, -0.06585839], > [-0.1 , 0.03211274, 0.1278789 ], > [-0.1 , 0.12588347, -0.08004173], > [-0.1 , -0.08788273, -0.10250587], > [-0.1 , 0.01156012, -0.00283793], > [-0.1 , -0.05788733, -0.08254978], > [-0.1 , 0.08076025, -0.03895826], > [-0.1 , 0.02232925, 0.1221555 ], > [-0.1 , -0.11745394, -0.02053012], > [-0.1 , 0.01355673, 0.12304368], > [-0.1 , -0.17090312, 0.03684983], > [-0.1 , 
0.21020815, 0.00314479], > [-0.1 , -0.05367038, -0.13541344], > [-0.1 , -0.11600589, -0.07075897], > [-0.1 , -0.02728704, 0.09827715], > [-0.1 , -0.09539752, 0.04939529], > [-0.1 , 0.0058363 , 0.18280319], > [-0.1 , -0.04509233, -0.0022041 ]])) > > which seems in good agreement with the dense solution. > > Best, > > Lutz From dvnganga at yahoo.fr Tue Oct 26 13:01:03 2010 From: dvnganga at yahoo.fr (dvnganga at yahoo.fr) Date: Tue, 26 Oct 2010 17:01:03 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 54 Message-ID: <910932645-1288112464-cardhu_decombobulator_blackberry.rim.net-313880070-@bda346.bisx.produk.on.blackberry> I listened to your email using DriveCarefully and will respond as soon as I can. Download DriveCarefully for free at www.drivecarefully.com Sent via my BlackBerry from Vodacom - let your email find you! From castillohair at gmail.com Tue Oct 26 13:38:23 2010 From: castillohair at gmail.com (Sebastian Castillo) Date: Tue, 26 Oct 2010 17:38:23 +0000 (UTC) Subject: [SciPy-User] SciPy ODE integrator References: <838DE651-0C9F-4713-99D7-4997A234AEC9@gmail.com> <269E30BC-CF34-4971-B306-9A7327B126A4@gmail.com> Message-ID: David Pine gmail.com> writes: > > Anne, > > Thanks. Actually I finally figured this (the VODE option) out but I agree that scipy's ODE solvers need a > makeover. The routines under the hood seem to be quite nice but the interface to Python is clumsy at best and > the documentation on how to use it is pretty awful. I'll take a look at pydstool. Thanks. > > David > > On Jul 28, 2010, at 10:45 AM, Anne Archibald wrote: > > > On 26 July 2010 12:46, David Pine gmail.com> wrote: > >> Is there a SciPy ODE integrator that does adaptive stepsize integration AND produces output with the > adaptive time steps intact? > > > > It is not obvious, but the object-oriented integrator, based on VODE, > > can be run in this mode. You normally tell it how much to advance on > > each call and it does as many adaptive steps as it takes to get there, > > but there is an optional argument you can pass it that will make it > > take just one step of the underlying integrator. You can then write a > > python loop to produce the solution you want. > > > > If this seems messy, I have to agree. scipy's ODE integrators are in > > desperate need of an API redesign (they've had one already, which is > > why there are two completely different interfaces, but they need > > another). You could try pydstool, which is designed for the study of > > dynamical systems and has many more tools for working with ODEs and > > their solutions. > > > > Anne > > > >> The standard SciPy ODE integrator seems to be scipy.integrate.odeint and its simpler cousin > scipy.integrate.ode. These work just fine but both take a user-specified time series and returns the > solution at those points only. Often, I prefer to have a more classic adaptive stepsize integrator that > returns the solution at time steps determined by the integrator (and the degree of desired precision > input by the user). This is often the most useful kind of solution because it tends to produce more points > where the solution is varying rapidly and fewer where it is not varying much. A classic Runge-Kugga > adaptive stepsize ODE solver does this as to many others, but I can't find a nice implementation in SciPy or > NumPy. Please advise. Thanks. 
> >> > >> David > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > Hello. I am trying to use scipy.integrate.ode with time steps determined by the integrator. I can see that you have acomplished this already. I understand that there is an option in the VODE object, and I have found a "step" method in it, but I still don't get how to use it. Can you please post some example code of this? Thank you very much! From warren.weckesser at enthought.com Tue Oct 26 14:35:56 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 26 Oct 2010 13:35:56 -0500 Subject: [SciPy-User] SciPy ODE integrator In-Reply-To: References: <838DE651-0C9F-4713-99D7-4997A234AEC9@gmail.com> <269E30BC-CF34-4971-B306-9A7327B126A4@gmail.com> Message-ID: On Tue, Oct 26, 2010 at 12:38 PM, Sebastian Castillo wrote: > David Pine gmail.com> writes: > > > > > Anne, > > > > Thanks. Actually I finally figured this (the VODE option) out but I > agree > that scipy's ODE solvers need a > > makeover. The routines under the hood seem to be quite nice but the > interface > to Python is clumsy at best and > > the documentation on how to use it is pretty awful. I'll take a look at > pydstool. Thanks. > > > > David > > > > On Jul 28, 2010, at 10:45 AM, Anne Archibald wrote: > > > > > On 26 July 2010 12:46, David Pine gmail.com> wrote: > > >> Is there a SciPy ODE integrator that does adaptive stepsize > integration AND > produces output with the > > adaptive time steps intact? > > > > > > It is not obvious, but the object-oriented integrator, based on VODE, > > > can be run in this mode. You normally tell it how much to advance on > > > each call and it does as many adaptive steps as it takes to get there, > > > but there is an optional argument you can pass it that will make it > > > take just one step of the underlying integrator. You can then write a > > > python loop to produce the solution you want. > > > > > > If this seems messy, I have to agree. scipy's ODE integrators are in > > > desperate need of an API redesign (they've had one already, which is > > > why there are two completely different interfaces, but they need > > > another). You could try pydstool, which is designed for the study of > > > dynamical systems and has many more tools for working with ODEs and > > > their solutions. > > > > > > Anne > > > > > >> The standard SciPy ODE integrator seems to be scipy.integrate.odeint > and > its simpler cousin > > scipy.integrate.ode. These work just fine but both take a > user-specified > time series and returns the > > solution at those points only. Often, I prefer to have a more classic > adaptive stepsize integrator that > > returns the solution at time steps determined by the integrator (and the > degree of desired precision > > input by the user). This is often the most useful kind of solution > because it > tends to produce more points > > where the solution is varying rapidly and fewer where it is not varying > much. > A classic Runge-Kugga > > adaptive stepsize ODE solver does this as to many others, but I can't > find a > nice implementation in SciPy or > > NumPy. Please advise. Thanks. 
> > >> > > >> David > > >> _______________________________________________ > > >> SciPy-User mailing list > > >> SciPy-User scipy.org > > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > >> > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Hello. I am trying to use scipy.integrate.ode with time steps determined by > the > integrator. I can see that you have acomplished this already. I understand > that > there is an option in the VODE object, and I have found a "step" method in > it, > but I still don't get how to use it. Can you please post some example code > of > this? Thank you very much! > > Sebastian, Here's an example that uses the 'step=True' option of the integrate() method: ----- from numpy import array from scipy.integrate import ode from pylab import figure, show, xlabel, ylabel from mpl_toolkits.mplot3d import Axes3D def lorenz_sys(t, q, sigma, rho, beta): x = q[0] y = q[1] z = q[2] f = [sigma * (y - x), rho*x - y - x*z, x*y - beta*z] return f ic = [1.0, 2.0, 1.0] t0 = 0.0 t1 = 100.0 #dt = 0.1 sigma = 10.0 rho = 28.0 beta = 10.0/3 solver = ode(lorenz_sys) t = [] sol = [] solver.set_initial_value(ic, t0) solver.set_integrator('vode') solver.set_f_params(sigma, rho, beta) while solver.successful() and solver.t < t1: solver.integrate(t1, step=True) t.append(solver.t) sol.append(solver.y) t = array(t) sol = array(sol) fig = figure() ax = Axes3D(fig) ax.plot(sol[:,0], sol[:,1], sol[:,2]) xlabel('x') ylabel('y') show() ----- Warren > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From di_zou at yahoo.com Tue Oct 26 14:52:42 2010 From: di_zou at yahoo.com (Di Zou) Date: Tue, 26 Oct 2010 11:52:42 -0700 (PDT) Subject: [SciPy-User] Error installing scipy Message-ID: <553314.52025.qm@web110812.mail.gq1.yahoo.com> Hi, I am trying to install scipy 0.8 on Red Hat 5. I have numpy 1.5.1 and python 2.6. When I do 'python setup.py install --prefix=/share/dizou/local/scipy-0.8' I get this error: File "setup.py", line 160, in setup_package() File "setup.py", line 127, in setup_package from numpy.distutils.core import setup File "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", line 7, in import ccompiler File "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", line 7, in from distutils.ccompiler import * File "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", line 7, in import ccompiler File "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", line 8, in from distutils import ccompiler ImportError: cannot import name ccompiler When I do 'which gcc' at the terminal window I get: /usr/bin/gcc How do I fix this? Thanks, Di -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rmay31 at gmail.com Tue Oct 26 15:06:06 2010 From: rmay31 at gmail.com (Ryan May) Date: Tue, 26 Oct 2010 14:06:06 -0500 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 54 In-Reply-To: <910932645-1288112464-cardhu_decombobulator_blackberry.rim.net-313880070-@bda346.bisx.produk.on.blackberry> References: <910932645-1288112464-cardhu_decombobulator_blackberry.rim.net-313880070-@bda346.bisx.produk.on.blackberry> Message-ID: Can, uh, you *not* send these out to the entire mailing list? (Or can the list admins make this crap die in a filter somewhere?) On Tue, Oct 26, 2010 at 12:01 PM, wrote: > I listened to your email using DriveCarefully and will respond as soon as I can. > ?Download DriveCarefully for free at www.drivecarefully.com > Sent via my BlackBerry from Vodacom - let your email find you! > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From robert.kern at gmail.com Tue Oct 26 15:20:28 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Oct 2010 14:20:28 -0500 Subject: [SciPy-User] SciPy-User Digest, Vol 86, Issue 54 In-Reply-To: References: <910932645-1288112464-cardhu_decombobulator_blackberry.rim.net-313880070-@bda346.bisx.produk.on.blackberry> Message-ID: On Tue, Oct 26, 2010 at 14:06, Ryan May wrote: > Can, uh, you *not* send these out to the entire mailing list? > > (Or can the list admins make this crap die in a filter somewhere?) I have asked him twice off-list without a (non-automated) reply. If this persists, I will simply unsubscribe him. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From castillohair at gmail.com Tue Oct 26 18:29:59 2010 From: castillohair at gmail.com (Sebastian Castillo) Date: Tue, 26 Oct 2010 22:29:59 +0000 (UTC) Subject: [SciPy-User] SciPy ODE integrator References: <838DE651-0C9F-4713-99D7-4997A234AEC9@gmail.com> <269E30BC-CF34-4971-B306-9A7327B126A4@gmail.com> Message-ID: Warren Weckesser enthought.com> writes: > Sebastian,Here's an example that uses the 'step=True' option of the integrate() method:-----from numpy import array > > from scipy.integrate import odefrom pylab import figure, show, xlabel, ylabelfrom mpl_toolkits.mplot3d import Axes3Ddef lorenz_sys(t, q, sigma, rho, beta):??? x = q[0]??? y = q[1]??? z = q[2] > > ??? f = [sigma * (y - x),???????? rho*x - y - x*z,???????? x*y - beta*z]??? return fic = [1.0, 2.0, 1.0]t0 = 0.0t1 = 100.0#dt = 0.1sigma = 10.0rho = 28.0beta = 10.0/3solver = ode(lorenz_sys)t = []sol = []solver.set_initial_value(ic, t0)solver.set_integrator('vode')solver.set_f_params(sigma, rho, beta)while solver.successful() and solver.t < t1: > > ??? solver.integrate(t1, step=True)??? t.append(solver.t)??? sol.append(solver.y)t = array(t)sol = array(sol)fig = figure()ax = Axes3D(fig)ax.plot(sol[:,0], sol[:,1], sol[:,2])xlabel('x') > > ylabel('y')show()-----Warren? > > > > > > _______________________________________________ > SciPy-User mailing listSciPy-User scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user Warren: Thank you for your quick answer! Now it works with regular nonstiff-problems, but I'm still having problems with the stiff ones. 
I am trying to solve a stiff van-der-pol oscillator as described in matlab's ode15s reference (finish time tf=3000). The problem is that it takes too long, so long that I have to kill the process (in fact, if I change the finish time to tf=10, it takes approximately 1 minute to solve it whereas matlab takes 0.2s to solve it for tf=3000, and odeint took even less but with specified time points, which I don't want). Here is my code: def f_test(t,y): dy=numpy.zeros(2) dy[0]=y[1] dy[1]=1000*(1 - y[0]**2)*y[1] - y[0]; return dy import numpy import scipy import scipy.integrate import matplotlib.pyplot as plt # Define initial conditions t0=numpy.array(0.0) tf=numpy.array(3000.0) y0=numpy.array([2,0]) y=y0 t=t0 # Solve r=scipy.integrate.ode(f_test).set_integrator('vode', method='bdf', order=15) r.set_initial_value(y0,t0) while r.successful() and r.t References: <553314.52025.qm@web110812.mail.gq1.yahoo.com> Message-ID: <4CC77BB1.7070702@silveregg.co.jp> On 10/27/2010 03:52 AM, Di Zou wrote: > Hi, > > I am trying to install scipy 0.8 on Red Hat 5. I have numpy 1.5.1 and > python 2.6. > When I do 'python setup.py install > --prefix=/share/dizou/local/scipy-0.8' I get this error: > File "setup.py", line 160, in > setup_package() > File "setup.py", line 127, in setup_package > from numpy.distutils.core import setup > File > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", > line 7, in > import ccompiler > File > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", > line 7, in > from distutils.ccompiler import * > File > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", > line 7, in > import ccompiler > File > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", > line 8, in > from distutils import ccompiler > ImportError: cannot import name ccompiler > When I do 'which gcc' at the terminal window I get: /usr/bin/gcc > How do I fix this? This error is really strange. Could you quickly try this from your terminal ? python -c "from distutils import ccompiler" If this does not work, there is something wrong with your setup (what is your PYTHONPATH ?), or your python install is seriously broken (but you would not have been able to install numpy that way, so I suspect the configuration problem to be more likely), cheers, David From lutz.maibaum at gmail.com Wed Oct 27 02:19:48 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Tue, 26 Oct 2010 23:19:48 -0700 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <0F6FED9F-0EB4-468C-A417-18031C20304F@gmail.com> Message-ID: <8D61F076-FD11-487A-8CE6-2176F7716021@gmail.com> On Oct 26, 2010, at 9:33 AM, Pauli Virtanen wrote: > If you have time, please test: > > http://github.com/pv/scipy-work/tree/bug/1313-arpack I am not sure how to use github, so I just replaced the arpack.py with the new version. The 32 bit version still works, and the 64 bit version now correctly tries to raise an exception. 
It seems to fail in doing so, however: Traceback (most recent call last): File "check.py", line 38, in compute_eigenvalues(a.astype(np.float64)) File "check.py", line 22, in compute_eigenvalues print scipy.sparse.linalg.eigs(a)[0] File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 521, in eigs params.iterate() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 332, in iterate self._raise_no_convergence() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 201, in _raise_no_convergence ev, vec = self.extract(True) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 358, in extract raise ArpackError(ierr, infodict=_NEUPD_ERRORS) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 121, in __init__ RuntimeError.__init__("ARPACK error %d: %s" % (info, msg)) TypeError: descriptor '__init__' requires a 'exceptions.RuntimeError' object but received a 'str' Again, this might be because I simply replaced the arpack.py file. Would it be possible to add to the docstrings that maxiter=None is interpreted as maxiter=10*n (size of matrix)? > Note that the function is there renamed from `eigen` to `eigs`, as the > name shadowed with the package name (plus obvious reasons). That's good to know. A while ago there was some discussion about this (http://mail.scipy.org/pipermail/scipy-user/2010-August/026401.html). In SciPy 0.7, I used scipy.sparse.linalg.eigen.arpack.eigen() which no longer worked in SciPy 0.8, where I now use scipy.sparse.linalg.eigen() Does your change mean that for SciPy 0.9, I will have to use scipy.sparse.linalg.eigen.arpack.eigs() ? I help maintain a software package that is supposed to work with all 3 versions of SciPy. What would be a good way to handle this? To support 0.7 and 0.8, I am currently using something like try: import scipy.sparse.linalg.eigen.arpack as arpack # this works for 0.7 but fails for 0.8 scipy.sparse.linalg.eigen = arpack.eigen except: import scipy.sparse.linalg.eigen # this works for 0.8 Any suggestions would be much appreciated. Thanks for all your help, Lutz From david at silveregg.co.jp Wed Oct 27 02:40:36 2010 From: david at silveregg.co.jp (David) Date: Wed, 27 Oct 2010 15:40:36 +0900 Subject: [SciPy-User] Eigenvectors of sparse symmetric matrix In-Reply-To: <8D61F076-FD11-487A-8CE6-2176F7716021@gmail.com> References: <4CC5C8F8.4050002@biostat.ucsf.edu> <14D8B1B6-932F-43A5-A3CA-A7A2797CDE11@gmail.com> <0F6FED9F-0EB4-468C-A417-18031C20304F@gmail.com> <8D61F076-FD11-487A-8CE6-2176F7716021@gmail.com> Message-ID: <4CC7C964.5080104@silveregg.co.jp> On 10/27/2010 03:19 PM, Lutz Maibaum wrote: > On Oct 26, 2010, at 9:33 AM, Pauli Virtanen wrote: >> If you have time, please test: >> >> http://github.com/pv/scipy-work/tree/bug/1313-arpack > > I am not sure how to use github There is a big download button so that you can download the working tree corresponding to that revision :) http://github.com/pv/scipy-work/archives/bug/1313-arpack > I help maintain a software package that is supposed to work with all 3 versions of SciPy. What would be a good way to handle this? 
To support 0.7 and 0.8, I am currently using something like > > try: > import scipy.sparse.linalg.eigen.arpack as arpack # this works for 0.7 but fails for 0.8 > scipy.sparse.linalg.eigen = arpack.eigen > except: > import scipy.sparse.linalg.eigen # this works for 0.8 > > Any suggestions would be much appreciated. I think this is a fined method, except that you should use ImportError in the except clause to avoid hiding arbitrary exception (if you make a typo in the try section, for example). An alternative is to explicitly handle the versions (using scipy.__version__). This has the advantage of being more explicit, but potentially more fragile (making errors in version handling is suprisingly easy in my experience), cheers, David From di_zou at yahoo.com Wed Oct 27 10:58:07 2010 From: di_zou at yahoo.com (Di Zou) Date: Wed, 27 Oct 2010 07:58:07 -0700 (PDT) Subject: [SciPy-User] Error installing scipy In-Reply-To: <4CC77BB1.7070702@silveregg.co.jp> References: <553314.52025.qm@web110812.mail.gq1.yahoo.com> <4CC77BB1.7070702@silveregg.co.jp> Message-ID: <72490.24985.qm@web110806.mail.gq1.yahoo.com> Well I got the latest svn scipy and scipy-0.8 installed today. I don't know what I changed. Maybe I did something different today that I didn't do yesterday with my PYTHONPATH. Now I am getting an error in my code. This is my code: import numpy as np import scipy as sp from scipy import stats I get a segfault with 'from scipy import stats'. I went through with the debugger and this is the traceback I get: 68 area321-n09.arl.army.mil> python -m pdb test.py > /share/dizou/Network_Analysis_Tools/test.py(1)() -> import numpy as np (Pdb) n > /share/dizou/Network_Analysis_Tools/test.py(2)() -> import scipy as sp (Pdb) n > /share/dizou/Network_Analysis_Tools/test.py(3)() -> from scipy import stats (Pdb) s --Call-- >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(5)() >) -> from info import __doc__ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(5)() >) -> from info import __doc__ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(7)() >) -> from stats import * (Pdb) s --Call-- >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(131)() >) -> """ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(131)() >) -> """ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(194)() >) -> import warnings (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(195)() >) -> import math (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(198)() >) -> pysum = sum # save it before it gets overwritten (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(201)() >) -> from numpy import array, asarray, dot, ma, zeros, sum (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(202)() >) -> import scipy.special as special (Pdb) s --Call-- >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(5)() >) -> from info import __doc__, __docformat__ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(5)() >) -> from info import __doc__, __docformat__ (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(8)() >) -> from basic import * (Pdb) s --Call-- 
>/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(5)() >) -> from numpy import * (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(5)() >) -> from numpy import * (Pdb) s >/share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(6)() >) -> from _cephes import * (Pdb) s Segmentation fault (core dumped) I do have a '_cephes.so' library in my 'scipy/lib/python2.6/site-packages/scipy/special' directory. ________________________________ From: David To: SciPy Users List Sent: Tue, October 26, 2010 9:09:05 PM Subject: Re: [SciPy-User] Error installing scipy On 10/27/2010 03:52 AM, Di Zou wrote: > Hi, > > I am trying to install scipy 0.8 on Red Hat 5. I have numpy 1.5.1 and > python 2.6. > When I do 'python setup.py install > --prefix=/share/dizou/local/scipy-0.8' I get this error: > File "setup.py", line 160, in > setup_package() > File "setup.py", line 127, in setup_package > from numpy.distutils.core import setup > File >"/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", >, > line 7, in > import ccompiler > File >"/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", >, > line 7, in > from distutils.ccompiler import * > File >"/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", >, > line 7, in > import ccompiler > File >"/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", >, > line 8, in > from distutils import ccompiler > ImportError: cannot import name ccompiler > When I do 'which gcc' at the terminal window I get: /usr/bin/gcc > How do I fix this? This error is really strange. Could you quickly try this from your terminal ? python -c "from distutils import ccompiler" If this does not work, there is something wrong with your setup (what is your PYTHONPATH ?), or your python install is seriously broken (but you would not have been able to install numpy that way, so I suspect the configuration problem to be more likely), cheers, David _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Wed Oct 27 14:24:06 2010 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 27 Oct 2010 14:24:06 -0400 Subject: [SciPy-User] SciPy ODE integrator In-Reply-To: References: <838DE651-0C9F-4713-99D7-4997A234AEC9@gmail.com> <269E30BC-CF34-4971-B306-9A7327B126A4@gmail.com> Message-ID: Sebastian, > Thank you for your quick answer! Now it works with regular nonstiff-problems, > but I'm still having problems with the stiff ones. I am trying to solve a stiff > van-der-pol oscillator as described in matlab's ode15s reference (finish time > tf=3000). The problem is that it takes too long, so long that I have to kill > the process (in fact, if I change the finish time to tf=10, it takes > approximately 1 minute to solve it whereas matlab takes 0.2s to solve it for > tf=3000, and odeint took even less but with specified time points, which I don't > want). Here is my code: In case it helps, PyDSTool's Radau stiff integrator will solve this system much faster than Matlab (with adaptive time step, of course), and there is a van der pol example (actually involving a bifurcation analysis) provided in the /tests directory, as PyCont_vanDerPol.py. Let me know if you need help setting up. 
The full wiki documentation pages are offline right now (the old server crashed) but some of the core pages are being mirrored at my web site, linked from pydstool.sourceforge.net. These do include installation information for the Dopri and Radau integrators. -Rob From athanastasiou at gmail.com Wed Oct 27 15:19:21 2010 From: athanastasiou at gmail.com (Athanasios Anastasiou) Date: Wed, 27 Oct 2010 20:19:21 +0100 Subject: [SciPy-User] Loading a CSV with loadtxt Message-ID: Hello everyone I have a CSV file with the following format: Unicode<100, Unicode<100, float, float, uint and i am trying to read it into a numpy array using: theData = numpy.loadtxt('primData.csv',delimiter=',',skiprows=1,dtype={'names':('NodeS','NodeT','flowF','flowB','ID'),'formats':('U100','U100','f','f','uint')}) This fails with "expected a readable buffer object". I have also tried first opening the file through codecs as a utf-8 file and passing the file object into loadtxt but i still get the same error. And i have also tried defining the dtype outside of loadtxt and making sure that it works before using it any further. However, when i tried the code that is available at the cookbook: (with some minor modifications) http://www.scipy.org/Cookbook/InputOutput#head-b0366eac0be19c3d7c32fc81c47a7c02508b6f52 It works exactly as expected. I don't mind using the latter but i am just wondering what it is that i am doing wrong with loadtxt(?) Any ideas why loadtxt fails in this way? All the best Athanasios From jsseabold at gmail.com Wed Oct 27 15:31:59 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 Oct 2010 15:31:59 -0400 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: References: Message-ID: On Wed, Oct 27, 2010 at 3:19 PM, Athanasios Anastasiou wrote: > Hello everyone > > I have a CSV file with the following format: > Unicode<100, Unicode<100, float, float, uint > > and i am trying to read it into a numpy array using: > > theData = numpy.loadtxt('primData.csv',delimiter=',',skiprows=1,dtype={'names':('NodeS','NodeT','flowF','flowB','ID'),'formats':('U100','U100','f','f','uint')}) > > This fails with "expected a readable buffer object". I have also tried > first opening the file through codecs as a utf-8 file and passing the > file object into loadtxt but i still get the same error. And i have > also tried defining the dtype outside of loadtxt and making sure that > it works before using it any further. > > However, when i tried the code that is available at the cookbook: > (with some minor modifications) > http://www.scipy.org/Cookbook/InputOutput#head-b0366eac0be19c3d7c32fc81c47a7c02508b6f52 > > It works exactly as expected. > > I don't mind using the latter but i am just wondering what it is that > i am doing wrong with loadtxt(?) > > Any ideas why loadtxt fails in this way? > Hmm, I don't see what could be wrong. What numpy version? Can you post some data as an attachment or using StringIO.StringIO that replicates the error? You might also try theData = numpy.loadtxt('primData.csv', delimiter=',', skiprows=1,dtype=None) to narrow down where the error is coming from. 
Skipper From dtlussier at gmail.com Wed Oct 27 16:52:52 2010 From: dtlussier at gmail.com (Dan Lussier) Date: Wed, 27 Oct 2010 15:52:52 -0500 Subject: [SciPy-User] small error in numpy docs Message-ID: I found a slight error in the numpy docs, and wasn't sure how to best report it: It is at: http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html#unmasking-an-entry The second code block under that heading should be: ####################################### >>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True) >>> x masked_array(data = [1 2 --], mask = [False False True], fill_value = 999999) >>> x[-1] = 5 >>> x masked_array(data = [1 2 --], mask = [False False True], fill_value = 999999) >>> x.soften_mask() >>> x[-1] = 5 >>> x masked_array(data = [1 2 5], mask = [False False False], fill_value = 999999) >>> x.harden_mask() ####################################### The error was after the second attempt to assign to x[-1]. Cheers, Dan From robert.kern at gmail.com Wed Oct 27 16:57:24 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 Oct 2010 15:57:24 -0500 Subject: [SciPy-User] small error in numpy docs In-Reply-To: References: Message-ID: On Wed, Oct 27, 2010 at 15:52, Dan Lussier wrote: > I found a slight error in the numpy docs, and wasn't sure how to best report it: > > It is at: http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html#unmasking-an-entry You can fix it yourself, if you like: http://docs.scipy.org/numpy/docs/numpy-docs/reference/maskedarray.generic.rst/ You will need to create an account, then post here with your username in order to get it activated: http://docs.scipy.org/numpy/accounts/register/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From dtlussier at gmail.com Wed Oct 27 18:43:27 2010 From: dtlussier at gmail.com (Dan Lussier) Date: Wed, 27 Oct 2010 17:43:27 -0500 Subject: [SciPy-User] account to edit docs Message-ID: Hi, I came across a small bug in the online docs/userguide and wanted to edit them. I have registered on the system with userid dtlussier, so if that can be activated for editing that would be great! Thanks, Dan From pav at iki.fi Wed Oct 27 18:58:19 2010 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 27 Oct 2010 22:58:19 +0000 (UTC) Subject: [SciPy-User] account to edit docs References: Message-ID: Wed, 27 Oct 2010 17:43:27 -0500, Dan Lussier wrote: > I came across a small bug in the online docs/userguide and wanted to > edit them. I have registered on the system with userid dtlussier, so if > that can be activated for editing that would be great! Done, thanks for contributing! -- Pauli Virtanen From dtlussier at gmail.com Wed Oct 27 19:10:55 2010 From: dtlussier at gmail.com (Dan Lussier) Date: Wed, 27 Oct 2010 18:10:55 -0500 Subject: [SciPy-User] account to edit docs In-Reply-To: References: Message-ID: Great - thanks! On Wed, Oct 27, 2010 at 5:58 PM, Pauli Virtanen wrote: > Wed, 27 Oct 2010 17:43:27 -0500, Dan Lussier wrote: >> I came across a small bug in the online docs/userguide and wanted to >> edit them. ?I have registered on the system with userid dtlussier, so if >> that can be activated for editing that would be great! > > Done, thanks for contributing! 
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nberg at atmos.ucla.edu Wed Oct 27 20:58:48 2010 From: nberg at atmos.ucla.edu (Neil Berg) Date: Wed, 27 Oct 2010 17:58:48 -0700 Subject: [SciPy-User] netCDF output error Message-ID: Hi all, The piece of code below is used to linearly interpolate 80-meter wind speeds at each grid point per hour using previously determined wind speeds at heights above and below 80m. The interpolated values are correct, however, when those values get outputted to a netCDF file, there is an issue. The issue is that for all "ntim" greater than 0, the variable ws80 is not being recorded correctly for longitude 0 and 1, leading to a 2-longitude shift between the calculated value and the outputted value, as seen below. Calculated ws80 values for ntim = 1(these values are correct): time = 1 lat = 0 lon = 0 ws80 = 3.53356 time = 1 lat = 0 lon = 1 ws80 = 3.44833 time = 1 lat = 0 lon = 2 ws80 = 3.35811 time = 1 lat = 0 lon = 3 ws80 = 3.26005 time = 1 lat = 0 lon = 4 ws80 = 3.15432 time = 1 lat = 0 lon = 5 ws80 = 3.04073 time = 1 lat = 0 lon = 6 ws80 = 2.91952 time = 1 lat = 0 lon = 7 ws80 = 2.7913 time = 1 lat = 0 lon = 8 ws80 = 2.65674 time = 1 lat = 0 lon = 9 ws80 = 2.51895 time = 1 lat = 0 lon = 10 ws80 = 2.37965 And here is what the netCDF output looks like for the same sample: time[1] latitude[0] longitude[0] ws80[5184]=1.4013e-45 m s-1 time[1] latitude[0] longitude[1] ws80[5185]=0 m s-1 time[1] latitude[0] longitude[2] ws80[5186]=3.53356 m s-1 time[1] latitude[0] longitude[3] ws80[5187]=3.44833 m s-1 time[1] latitude[0] longitude[4] ws80[5188]=3.35811 m s-1 time[1] latitude[0] longitude[5] ws80[5189]=3.26005 m s-1 time[1] latitude[0] longitude[6] ws80[5190]=3.15432 m s-1 time[1] latitude[0] longitude[7] ws80[5191]=3.04073 m s-1 time[1] latitude[0] longitude[8] ws80[5192]=2.91952 m s-1 time[1] latitude[0] longitude[9] ws80[5193]=2.7913 m s-1 time[1] latitude[0] longitude[10] ws80[5194]=2.65674 m s-1 -------------------------------------------------------------------------------------------- Mac OS X 10.6.4 Python 2.6.1 -------------------------------------------------------------------------------------------- from scipy.io.netcdf import netcdf_file as Dataset from scipy.interpolate import interp1d ..... ws80 = zeros([ntim,nlat,nlon],'float32') for tim_idx in range(0,ntim): # loops from hour 0 to hour 744, which covers entire month of January z_arr = (za1[tim_idx],za2[tim_idx]) # nearest heights below 80 meters (za1) and above 80 meter (za2) ws_arr = (wspd1[tim_idx],wspd2[tim_idx]) # wind speeds at za1 and za2 f = interp1d(z_arr,ws_arr, kind='linear') ws80[tim_idx,lat_idx,lon_idx] = f(80) # interpolated wind speed at 80 meters print "time =",tim_idx,"lat =",lat_idx,"lon =",lon_idx,"ws80 =",ws80[tim_idx,lat_idx,lon_idx] .... ncfile_out = Dataset('output_file','w') data = ncfile_out.createVariable('ws80',dtype('float32').char,('time','latitude','longitude',)) data.units = "m s-1" data[:,:,:] = ws80[:,:,:] ncfile_out.close() _______________________________________________________ I thank you in advance for any suggestions on why the first two longitudes are not being outputted in the netCDF correctly, leading to the rest of the output being incorrectly shifted. 
Neil Berg From david at silveregg.co.jp Wed Oct 27 21:05:59 2010 From: david at silveregg.co.jp (David) Date: Thu, 28 Oct 2010 10:05:59 +0900 Subject: [SciPy-User] Error installing scipy In-Reply-To: <72490.24985.qm@web110806.mail.gq1.yahoo.com> References: <553314.52025.qm@web110812.mail.gq1.yahoo.com> <4CC77BB1.7070702@silveregg.co.jp> <72490.24985.qm@web110806.mail.gq1.yahoo.com> Message-ID: <4CC8CC77.8000707@silveregg.co.jp> On 10/27/2010 11:58 PM, Di Zou wrote: > Well I got the latest svn scipy and scipy-0.8 installed today. I don't > know what I changed. Maybe I did something different today that I didn't > do yesterday with my PYTHONPATH. Now I am getting an error in my code. > This is my code: > > import numpy as np > import scipy as sp > from scipy import stats > > I get a segfault with 'from scipy import stats'. I went through with the > debugger and this is the traceback I get: Using the python debugger is not very useful for segfaults (which happen in C code, not python code). You may want to try running the code under gdb instead. But I suspect an issue with numpy/scipy imports. You need to use the same numpy as the one used when building scipy - recent versions of numpy has a safe guard to warn against versions mismatches, but older (< 1.4) don't have it. To know for sure which version of numpy you import, you can use: python -c "import numpy; print numpy.__version__, numpy.__file__" Make sure this is the same when you build scipy and when you import it later. cheers, David From rmay31 at gmail.com Wed Oct 27 21:47:35 2010 From: rmay31 at gmail.com (Ryan May) Date: Wed, 27 Oct 2010 20:47:35 -0500 Subject: [SciPy-User] netCDF output error In-Reply-To: References: Message-ID: On Wed, Oct 27, 2010 at 7:58 PM, Neil Berg wrote: > from scipy.io.netcdf import netcdf_file as Dataset > from scipy.interpolate import interp1d > ..... > ws80 = zeros([ntim,nlat,nlon],'float32') > for tim_idx in range(0,ntim): # loops from hour 0 to hour 744, which covers entire month of January > ? ? ? ? ? ?z_arr = (za1[tim_idx],za2[tim_idx]) ? ? # nearest heights below 80 meters (za1) and above 80 meter (za2) > ? ? ? ? ? ?ws_arr = (wspd1[tim_idx],wspd2[tim_idx]) # wind speeds at za1 and za2 > ? ? ? ? ? ?f = interp1d(z_arr,ws_arr, kind='linear') > ? ? ? ? ? ?ws80[tim_idx,lat_idx,lon_idx] = f(80) # interpolated wind speed at 80 meters > ? ? ? ? ? ?print "time =",tim_idx,"lat =",lat_idx,"lon =",lon_idx,"ws80 =",ws80[tim_idx,lat_idx,lon_idx] > .... > ncfile_out = Dataset('output_file','w') > data = ncfile_out.createVariable('ws80',dtype('float32').char,('time','latitude','longitude',)) > data.units = "m s-1" > data[:,:,:] = ws80[:,:,:] > ncfile_out.close() > _______________________________________________________ Where's the code where you create the dimensions ('time','latitude','longitude',)? My first gut guess is that the shape of the variable as written doesn't match the array. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From nberg at atmos.ucla.edu Wed Oct 27 22:00:26 2010 From: nberg at atmos.ucla.edu (Neil Berg) Date: Wed, 27 Oct 2010 19:00:26 -0700 Subject: [SciPy-User] netCDF output error In-Reply-To: References: Message-ID: <4ACEB2C5-DE13-442B-A294-877536F0CFE0@atmos.ucla.edu> Ryan, Your gut was right, as my line to create the 'time' dimension mismatched with ntim. I greatly appreciate your help! 
Cheers, Neil On Oct 27, 2010, at 6:47 PM, Ryan May wrote: > On Wed, Oct 27, 2010 at 7:58 PM, Neil Berg wrote: >> from scipy.io.netcdf import netcdf_file as Dataset >> from scipy.interpolate import interp1d >> ..... >> ws80 = zeros([ntim,nlat,nlon],'float32') >> for tim_idx in range(0,ntim): # loops from hour 0 to hour 744, which covers entire month of January >> z_arr = (za1[tim_idx],za2[tim_idx]) # nearest heights below 80 meters (za1) and above 80 meter (za2) >> ws_arr = (wspd1[tim_idx],wspd2[tim_idx]) # wind speeds at za1 and za2 >> f = interp1d(z_arr,ws_arr, kind='linear') >> ws80[tim_idx,lat_idx,lon_idx] = f(80) # interpolated wind speed at 80 meters >> print "time =",tim_idx,"lat =",lat_idx,"lon =",lon_idx,"ws80 =",ws80[tim_idx,lat_idx,lon_idx] >> .... >> ncfile_out = Dataset('output_file','w') >> data = ncfile_out.createVariable('ws80',dtype('float32').char,('time','latitude','longitude',)) >> data.units = "m s-1" >> data[:,:,:] = ws80[:,:,:] >> ncfile_out.close() >> _______________________________________________________ > > Where's the code where you create the dimensions > ('time','latitude','longitude',)? My first gut guess is that the shape > of the variable as written doesn't match the array. > > Ryan > > -- > Ryan May > Graduate Research Assistant > School of Meteorology > University of Oklahoma > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From noreply at badoo.com Thu Oct 28 01:54:31 2010 From: noreply at badoo.com (Badoo) Date: Thu, 28 Oct 2010 05:54:31 +0000 Subject: [SciPy-User] =?utf-8?q?=C2=A1Luis_Cota_te_ha_dejado_un_mensaje_en?= =?utf-8?q?_Badoo!?= Message-ID: ?Tienes un nuevo mensaje en Badoo! Luis Cota te dej? un mensaje. Haz click en este enlace para verlo: http://us1.badoo.com/lcota/in/gAt0hVXrtWE/?lang_id=7 M?s gente que tambi?n te est? esperando: Lorenzo (London, Reino Unido) Helen (New York, Estados Unidos) Jenifer Robertson (New York, Estados Unidos) http://us1.badoo.com/lcota/in/gAt0hVXrtWE/?lang_id=7 Si al hacer click sobre el enlace, no funciona, copia y pega la direcci?n en tu barra del navegador. Este email es parte del procedimiento para que leas los mensajes de Luis Cota. Si has recibido este email por equivocaci?n, por favor, ign?ralo. Tras un corto periodo de tiempo el mensaje sera eliminado del sistema. ?Divi?rtete! El Equipo de Badoo Has recibido este email porque un usuario de Badoo te ha dejado un mensaje en Badoo. Este mensaje es autom?tico. Las respuestas a este mensaje no estan controladas y no ser?n contestadas. Si no quieres recibir m?s mensajes de Badoo, h?znoslo saber: http://us1.badoo.com/impersonation.phtml?lang_id=7&mail_code=63&email=scipy-user%40scipy.org&secret=&invite_id=435248&user_id=1093354112 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cw at pariserplatz.eu Thu Oct 28 05:15:19 2010 From: cw at pariserplatz.eu (Cornelius Wefelscheid) Date: Thu, 28 Oct 2010 11:15:19 +0200 Subject: [SciPy-User] bug import scipy.stats Message-ID: Hello, i'm having the following problem when importing scipy.stats: /usr/local/lib/python2.6/dist-packages/scipy/integrate/vode.so: undefined symbol: _gfortran_st_write_done I'm running scipy 0.8 with numpy 1.5.1rc1 on ubuntu 10.10 I already recompiled numpy und scipy but it did not helped at all. Any help or ideas are appreciated... Best Regards, Cornelius -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From di_zou at yahoo.com Thu Oct 28 08:51:21 2010 From: di_zou at yahoo.com (Di Zou) Date: Thu, 28 Oct 2010 05:51:21 -0700 (PDT) Subject: [SciPy-User] Error installing scipy In-Reply-To: <4CC8CC77.8000707@silveregg.co.jp> References: <553314.52025.qm@web110812.mail.gq1.yahoo.com> <4CC77BB1.7070702@silveregg.co.jp> <72490.24985.qm@web110806.mail.gq1.yahoo.com> <4CC8CC77.8000707@silveregg.co.jp> Message-ID: <967613.16066.qm@web110805.mail.gq1.yahoo.com> That was the issue. I wasn't running the code with the same numpy as the numpy I used to build scipy with. Thanks for all your help! Di ________________________________ From: David To: SciPy Users List Sent: Wed, October 27, 2010 9:05:59 PM Subject: Re: [SciPy-User] Error installing scipy On 10/27/2010 11:58 PM, Di Zou wrote: > Well I got the latest svn scipy and scipy-0.8 installed today. I don't > know what I changed. Maybe I did something different today that I didn't > do yesterday with my PYTHONPATH. Now I am getting an error in my code. > This is my code: > > import numpy as np > import scipy as sp > from scipy import stats > > I get a segfault with 'from scipy import stats'. I went through with the > debugger and this is the traceback I get: Using the python debugger is not very useful for segfaults (which happen in C code, not python code). You may want to try running the code under gdb instead. But I suspect an issue with numpy/scipy imports. You need to use the same numpy as the one used when building scipy - recent versions of numpy has a safe guard to warn against versions mismatches, but older (< 1.4) don't have it. To know for sure which version of numpy you import, you can use: python -c "import numpy; print numpy.__version__, numpy.__file__" Make sure this is the same when you build scipy and when you import it later. cheers, David _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgdenno at gmail.com Thu Oct 28 16:13:14 2010 From: mgdenno at gmail.com (MDenno) Date: Thu, 28 Oct 2010 13:13:14 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures Message-ID: <30079638.post@talk.nabble.com> Hello: I have been trying to get numpy/scipy installed on my Mac (OSX 10.6.4) for a few weeks now and just can't seem to make it all work. I have tried 2 general approaches, building from source and using Chris Fonnesbeck's Scipy Superpack, both of which have been giving me problems. After much installing and uninstalling of various programs/libraries I just did a fresh install of OSX and gave the Superpack (10-22-2010 version) another try, but I am still having problems. I don't think it is working correctly based on running the tests. First, some information about my setup: $ gcc --version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) $ gfortran --version GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5659) + GF 4.2.4 $ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] FFTW 3.2.2 built as follows: ./configure CC="gcc -arch x86_64" CXX="g++ -arch x86_64" CPP="gcc -E" CXXCPP="g++ -E" make sudo make install XCode installed from xcode322_2148_developerdvd.dmg Are there any other dependencies? UMFPACK? ARPACK? After installing and running the tests is it normal to have lots of warnings and some errors and failures for scipy? 
I ran the following: >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 2.0.0.dev NumPy is installed in /Library/Python/2.6/site-packages/numpy-2.0.0.dev_20101022-py2.6-macosx-10.6-universal.egg/numpy SciPy version 0.9.0.dev SciPy is installed in /Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy Python version 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] nose version 0.11.0 [lots of warnings omitted - I would be happy to provide them if they are needed, but they are quite long] ====================================================================== ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 269, in test_complex_nonsymmetric_modes self.eval_evec(m,typ,k,which) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 250, in eval_evec eval,evec=eigen(a,k,which=which) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 206, in test_nonsymmetric_modes self.eval_evec(m,typ,k,which) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 188, in eval_evec eval,evec=eigen(a,k,which=which,**kwds) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 216, in test_starting_vector self.eval_evec(self.symmetric[0],typ,k,which='LM',v0=v0) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 188, in eval_evec eval,evec=eigen(a,k,which=which,**kwds) File 
"/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== FAIL: gaussian filter 3 ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/nose-0.11.0-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/ndimage/tests/test_ndimage.py", line 468, in test_gauss03 assert_almost_equal(output.sum(), input.sum()) File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_20101022-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 463, in assert_almost_equal raise AssertionError(msg) AssertionError: Arrays are not almost equal ACTUAL: 49993304.0 DESIRED: 49992896.0 ====================================================================== FAIL: line-search Newton conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/optimize/tests/test_optimize.py", line 177, in test_ncg assert_(self.gradcalls == 18, self.gradcalls) # 0.8.0 File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_20101022-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: 16 ====================================================================== FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 159, in test_complex_symmetric_modes self.eval_evec(self.symmetric[0],typ,k,which) File "/Library/Python/2.6/site-packages/scipy-0.9.0.dev_20101022-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 148, in eval_evec assert_array_almost_equal(eval,exact_eval,decimal=_ndigits[typ]) File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_20101022-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 774, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_20101022-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.07188725 +6.23436023e-08j, 4.91291142 -3.25412906e-08j], dtype=complex64) y: array([ 5.+0.j, 6.+0.j], dtype=complex64) ---------------------------------------------------------------------- Ran 4771 tests in 78.922s FAILED (KNOWNFAIL=13, SKIP=41, errors=3, failures=3) Thank you in advance for your help. Matt -- View this message in context: http://old.nabble.com/Scipy-Superpack-Errors-and-Failures-tp30079638p30079638.html Sent from the Scipy-User mailing list archive at Nabble.com. 
From tgrav at mac.com Thu Oct 28 16:17:57 2010 From: tgrav at mac.com (Tommy Grav) Date: Thu, 28 Oct 2010 16:17:57 -0400 Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: <30079638.post@talk.nabble.com> References: <30079638.post@talk.nabble.com> Message-ID: On Oct 28, 2010, at 4:13 PM, MDenno wrote: > > Hello: > > I have been trying to get numpy/scipy installed on my Mac (OSX 10.6.4) for a > few weeks now and just can't seem to make it all work. I have tried 2 > general approaches, building from source and using Chris Fonnesbeck's Scipy > Superpack, both of which have been giving me problems. After much > installing and uninstalling of various programs/libraries I just did a fresh > install of OSX and gave the Superpack (10-22-2010 version) another try, but > I am still having problems. I don't think it is working correctly based on > running the tests. > > First, some information about my setup: > > $ gcc --version > i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) > > $ gfortran --version > GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5659) + GF 4.2.4 > > $ python > Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) > [GCC 4.2.1 (Apple Inc. build 5646)] > > FFTW 3.2.2 built as follows: > ./configure CC="gcc -arch x86_64" CXX="g++ -arch x86_64" CPP="gcc -E" > CXXCPP="g++ -E" > make > sudo make install As far as I can tell the Superpack is 32bit only installation. Since you have installed FFTW as a 64bit library, I assume that python is also a 64bit version and then you are going to have to install numpy/scipy from scratch (something that is not a very easy thing to get right on the mac in 64bit). Cheers Tommy From cournape at gmail.com Thu Oct 28 18:46:00 2010 From: cournape at gmail.com (David Cournapeau) Date: Fri, 29 Oct 2010 07:46:00 +0900 Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: References: <30079638.post@talk.nabble.com> Message-ID: On Fri, Oct 29, 2010 at 5:17 AM, Tommy Grav wrote: > As far as I can tell the Superpack is 32bit only installation. Since you have installed > FFTW as a 64bit library, I assume that python is also a 64bit version and then you > are going to have to install numpy/scipy from scratch (something that is not a > very easy thing to get right on the mac in 64bit). FFTW is not used by scipy since 0.7.0, also, David From mgdenno at gmail.com Thu Oct 28 20:54:45 2010 From: mgdenno at gmail.com (MDenno) Date: Thu, 28 Oct 2010 17:54:45 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: References: <30079638.post@talk.nabble.com> Message-ID: <30081915.post@talk.nabble.com> Tommy Grav wrote: > > > As far as I can tell the Superpack is 32bit only installation. Since you > have installed > FFTW as a 64bit library, I assume that python is also a 64bit version and > then you > are going to have to install numpy/scipy from scratch (something that is > not a > very easy thing to get right on the mac in 64bit). > > Cheers > Tommy > > Tommy, Thanks for your reply. I am not sure how to tell if the Superpack is build for 64 bits for sure but the website says it is. http://stronginference.com/scipy-superpack/ I also tried building FFTW with 32 and 64 bit flags and it didn't help. However, I guess based on Dave's reply below FFTW is not used anymore. Maybe I will try from source again or macports or something. 
Matt -- View this message in context: http://old.nabble.com/Scipy-Superpack-Errors-and-Failures-tp30079638p30081915.html Sent from the Scipy-User mailing list archive at Nabble.com. From mgdenno at gmail.com Thu Oct 28 21:08:19 2010 From: mgdenno at gmail.com (MDenno) Date: Thu, 28 Oct 2010 18:08:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: References: <30079638.post@talk.nabble.com> Message-ID: <30081981.post@talk.nabble.com> > FFTW is not used by scipy since 0.7.0, also, David, Thanks for the clarification. I think I will wait and see if anyone else has any suggestions, etc regarding the superpack. If not maybe I will try from source again. I will start a separate thread for that if/when I have problems. Thanks again, Matt -- View this message in context: http://old.nabble.com/Scipy-Superpack-Errors-and-Failures-tp30079638p30081981.html Sent from the Scipy-User mailing list archive at Nabble.com. From tgrav at mac.com Thu Oct 28 21:15:33 2010 From: tgrav at mac.com (Tommy Grav) Date: Thu, 28 Oct 2010 21:15:33 -0400 Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: <30081981.post@talk.nabble.com> References: <30079638.post@talk.nabble.com> <30081981.post@talk.nabble.com> Message-ID: Also I think you can do file python (or other programs and libraries) to find out wether it is 32bit, 64bit or both [balder: tgrav ~]> file /usr/local/bin/python /usr/local/bin/python: Mach-O universal binary with 2 architectures /usr/local/bin/python (for architecture i386): Mach-O executable i386 /usr/local/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64 [balder: tgrav ~/Downloads/matplotlib-1.0.0/build/lib.macosx-10.6-intel-2.7/matplotlib]> file _delaunay.so _delaunay.so: Mach-O universal binary with 2 architectures _delaunay.so (for architecture i386): Mach-O bundle i386 _delaunay.so (for architecture x86_64): Mach-O 64-bit bundle x86_64 Cheers Tommy On Oct 28, 2010, at 9:08 PM, MDenno wrote: > > > >> FFTW is not used by scipy since 0.7.0, also, > > David, > > Thanks for the clarification. I think I will wait and see if anyone else > has any suggestions, etc regarding the superpack. If not maybe I will try > from source again. I will start a separate thread for that if/when I have > problems. > > Thanks again, > > Matt > -- > View this message in context: http://old.nabble.com/Scipy-Superpack-Errors-and-Failures-tp30079638p30081981.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From mgdenno at gmail.com Thu Oct 28 21:39:09 2010 From: mgdenno at gmail.com (MDenno) Date: Thu, 28 Oct 2010 18:39:09 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Scipy Superpack Errors and Failures In-Reply-To: References: <30079638.post@talk.nabble.com> <30081981.post@talk.nabble.com> Message-ID: <30082129.post@talk.nabble.com> > Also I think you can do > file python Tommy, Thanks for the tip. I have checked a number of the libraries in the matplotlib, numpy and scipy directories and they all appear to be x86_64 at a minimum. Many of them are built for multiple arch. Can I ask you what method you used to get scipy/numpy installed? Matt -- View this message in context: http://old.nabble.com/Scipy-Superpack-Errors-and-Failures-tp30079638p30082129.html Sent from the Scipy-User mailing list archive at Nabble.com. 
From mgdenno at gmail.com Thu Oct 28 22:31:19 2010 From: mgdenno at gmail.com (MDenno) Date: Thu, 28 Oct 2010 19:31:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] trouble installing scipy on Mac10.6.4 with Python2.7 In-Reply-To: References: <3B13DA19-A94B-4B78-A4FB-C7BB12B18229@uci.edu> Message-ID: <30082360.post@talk.nabble.com> Hello: Sorry to piggy-back on an existing thread but I am wondering the current status of scipy and python 2.7 is as discussed below. Thanks, Matt Ralf Gommers-2 wrote: > > On Thu, Sep 16, 2010 at 8:52 AM, Eric Schow wrote: > >> Hi all, >> >> I am new to Python and NumPy/SciPy, and I'm having trouble getting scipy >> installed on my machine. >> >> I am working on an Intel Mac running OS 10.6.4, and I've recently updated >> the python installation to 2.7. I have installed NumPy 1.5.0, and I am >> trying to install scipy 0.8.0, which I downloaded as a tarball from >> sourceforge. > > > 0.8.0 was not fully working with 2.7, some fixes went into trunk after the > release. please use a recent svn checkout. > > I have gcc 4.2.1 installed (through Xcode), although the build process > seems >> to be calling gcc4.0. I have gfortran, which I just re-installed from >> http://r.research.att.com/tools/, as suggested in the installation notes >> that came with scipy. The problem seems to be with gfortran, although I >> admit I'm a little out of my league as I try to troubleshoot this one. >> The >> log file is attached. Finally, I initiated the build with this command: >> >> LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" python setup.py build >> > > This doesn't extend but overrides the build flags, that's how distutils > works. With numscons this should work as advertised (although it looks > like > numscons has some issue with 2.7 ATM). I've just added a warning that > describes this better at http://www.scipy.org/Installing_SciPy/Mac_OS_X, > because people keep running into this issue. > > >> Any help on this issue will be appreciated. BTW, I would prefer to go >> with >> Python 2.7 (as opposed to 2.6, for which there is a .dmg available) >> because >> of its purported increased compatibility with Python 3.x. >> > > I'd go with 2.6 for now, that's properly tested and binaries are > available. > Any 2.7 issues should be worked out soon, but at the moment YMMV. > > Sorry for the very slow reply. > > Cheers, > Ralf > > >> Thanks a mole, >> >> Eric >> >> >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/trouble-installing-scipy-on-Mac10.6.4-with-Python2.7-tp29807543p30082360.html Sent from the Scipy-User mailing list archive at Nabble.com. From oliphant at enthought.com Fri Oct 29 00:38:13 2010 From: oliphant at enthought.com (Travis Oliphant) Date: Fri, 29 Oct 2010 00:38:13 -0400 Subject: [SciPy-User] Error installing scipy In-Reply-To: <72490.24985.qm@web110806.mail.gq1.yahoo.com> References: <553314.52025.qm@web110812.mail.gq1.yahoo.com> <4CC77BB1.7070702@silveregg.co.jp> <72490.24985.qm@web110806.mail.gq1.yahoo.com> Message-ID: I'm not sure if you got an answer to this question: But, this error can happen if you have a different version of NumPy than scipy-0.8 was compiled with. 
-Travis On Oct 27, 2010, at 10:58 AM, Di Zou wrote: > Well I got the latest svn scipy and scipy-0.8 installed today. I don't know what I changed. Maybe I did something different today that I didn't do yesterday with my PYTHONPATH. Now I am getting an error in my code. This is my code: > > import numpy as np > import scipy as sp > from scipy import stats > > I get a segfault with 'from scipy import stats'. I went through with the debugger and this is the traceback I get: > > 68 area321-n09.arl.army.mil> python -m pdb test.py > > /share/dizou/Network_Analysis_Tools/test.py(1)() > -> import numpy as np > (Pdb) n > > /share/dizou/Network_Analysis_Tools/test.py(2)() > -> import scipy as sp > (Pdb) n > > /share/dizou/Network_Analysis_Tools/test.py(3)() > -> from scipy import stats > (Pdb) s > --Call-- > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(5)() > -> from info import __doc__ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(5)() > -> from info import __doc__ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/__init__.py(7)() > -> from stats import * > (Pdb) s > --Call-- > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(131)() > -> """ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(131)() > -> """ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(194)() > -> import warnings > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(195)() > -> import math > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(198)() > -> pysum = sum # save it before it gets overwritten > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(201)() > -> from numpy import array, asarray, dot, ma, zeros, sum > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/stats/stats.py(202)() > -> import scipy.special as special > (Pdb) s > --Call-- > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(5)() > -> from info import __doc__, __docformat__ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(5)() > -> from info import __doc__, __docformat__ > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/__init__.py(8)() > -> from basic import * > (Pdb) s > --Call-- > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(5)() > -> from numpy import * > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(5)() > -> from numpy import * > (Pdb) s > > /share/dizou/local/scipy-0.8/lib/python2.6/site-packages/scipy/special/basic.py(6)() > -> from _cephes import * > (Pdb) s > Segmentation fault (core dumped) > > I do have a '_cephes.so' library in my 'scipy/lib/python2.6/site-packages/scipy/special' directory. > From: David > To: SciPy Users List > Sent: Tue, October 26, 2010 9:09:05 PM > Subject: Re: [SciPy-User] Error installing scipy > > On 10/27/2010 03:52 AM, Di Zou wrote: > > Hi, > > > > I am trying to install scipy 0.8 on Red Hat 5. I have numpy 1.5.1 and > > python 2.6. 
> > When I do 'python setup.py install > > --prefix=/share/dizou/local/scipy-0.8' I get this error: > > File "setup.py", line 160, in > > setup_package() > > File "setup.py", line 127, in setup_package > > from numpy.distutils.core import setup > > File > > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", > > line 7, in > > import ccompiler > > File > > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", > > line 7, in > > from distutils.ccompiler import * > > File > > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/__init__.py", > > line 7, in > > import ccompiler > > File > > "/share/dizou/local/numpy-1.5.1/lib/python2.6/site-packages/numpy/distutils/ccompiler.py", > > line 8, in > > from distutils import ccompiler > > ImportError: cannot import name ccompiler > > When I do 'which gcc' at the terminal window I get: /usr/bin/gcc > > How do I fix this? > > This error is really strange. Could you quickly try this from your > terminal ? > > python -c "from distutils import ccompiler" > > If this does not work, there is something wrong with your setup (what is > your PYTHONPATH ?), or your python install is seriously broken (but you > would not have been able to install numpy that way, so I suspect the > configuration problem to be more likely), > > cheers, > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user --- Travis Oliphant Enthought, Inc. oliphant at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From esko.lehtonen at iki.fi Fri Oct 29 07:28:12 2010 From: esko.lehtonen at iki.fi (Esko Lehtonen) Date: Fri, 29 Oct 2010 14:28:12 +0300 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: References: Message-ID: <4CCAAFCC.8070606@iki.fi> 27.10.2010 22:19, Athanasios Anastasiou kirjoitti: > Hello everyone > > I have a CSV file with the following format: > Unicode<100, Unicode<100, float, float, uint > > and i am trying to read it into a numpy array using: > > theData = numpy.loadtxt('primData.csv',delimiter=',',skiprows=1,dtype={'names':('NodeS','NodeT','flowF','flowB','ID'),'formats':('U100','U100','f','f','uint')}) > > This fails with "expected a readable buffer object". I have also tried > first opening the file through codecs as a utf-8 file and passing the > file object into loadtxt but i still get the same error. And i have > also tried defining the dtype outside of loadtxt and making sure that > it works before using it any further. > > However, when i tried the code that is available at the cookbook: > (with some minor modifications) > http://www.scipy.org/Cookbook/InputOutput#head-b0366eac0be19c3d7c32fc81c47a7c02508b6f52 > > It works exactly as expected. > > I don't mind using the latter but i am just wondering what it is that > i am doing wrong with loadtxt(?) > > Any ideas why loadtxt fails in this way? Hi I have encountered that error many times. Every time there was something wrong with the text file (e.g. a missing or extra comma somewhere, especially in the end of line, or maybe an empty line). loadtxt and genfromtxt are very picky with the format of files and the error message is not very helpful. 
- Esko From Nikolaus at rath.org Fri Oct 29 08:51:37 2010 From: Nikolaus at rath.org (Nikolaus Rath) Date: Fri, 29 Oct 2010 08:51:37 -0400 Subject: [SciPy-User] Field Liner Tracing Message-ID: <87sjzpf9gm.fsf@inspiron.ap.columbia.edu> Hello, I have a vector field defined on a 3D grid and would like to trace field lines. What's the best method to do this? I suppose I could use scipy.integrate.odeint, but then I would have to define my own dy/dt method that interpolates the values on the grid. It seems to me that this is a task common enough for having a standard solution somewhere in scipy already. Also, I am bothered by the cost of having to define dy/dt as a Python function. It seems to me that this function is probably called quite a lot by the integrator and that Python overhead may easily become a bottleneck... It would be wonderful to have a method like trace_field_lines((Vx, Vy, Vz), X, Y, Z) where (Vx, Vy, Vz) are 3d numpy arrays containing the components of the vector field and X, Y, Z 1d grids containing the grid coordinates. Is there anything like that around already? Thanks, -Nikolaus -- ?Time flies like an arrow, fruit flies like a Banana.? PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C From gael.varoquaux at normalesup.org Fri Oct 29 10:47:18 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 29 Oct 2010 16:47:18 +0200 Subject: [SciPy-User] Field Liner Tracing In-Reply-To: <87sjzpf9gm.fsf@inspiron.ap.columbia.edu> References: <87sjzpf9gm.fsf@inspiron.ap.columbia.edu> Message-ID: <20101029144718.GA32646@phare.normalesup.org> On Fri, Oct 29, 2010 at 08:51:37AM -0400, Nikolaus Rath wrote: > It would be wonderful to have a method like trace_field_lines((Vx, Vy, Vz), X, > Y, Z) where (Vx, Vy, Vz) are 3d numpy arrays containing the components of > the vector field and X, Y, Z 1d grids containing the grid coordinates. > Is there anything like that around already? If you just want plotting, there is: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/auto/mlab_helper_functions.html#enthought.mayavi.mlab.flow Gael From denis-bz-gg at t-online.de Fri Oct 29 12:15:50 2010 From: denis-bz-gg at t-online.de (denis) Date: Fri, 29 Oct 2010 09:15:50 -0700 (PDT) Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: Message-ID: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> > On Sun, Oct 24, 2010 at 10:43 AM, O wrote: > > I'm a recent convert from MatLab. > > One thing I miss is the Central File Exchange.? Are there any plans ... O, can you describe a bit which parts of Matlab Central you want most ? As Josef says, Scipy uses mail forums, ask.scipy.org, stackoverflow ... to answer questions pretty fast. Do you / do other users want - package reviews / comments / advertising -- for new users, for experts ? Examples from Matlab ? - overviews of major areas, along the lines of Wikipedia articles with links to detailed doc and recipes ? I believe that Scipy experts get more points for new stuff and for answering questions than they would for either of these. Matlab has a different reward system: "I believe in Art, but my manager believes in money, and who am I to argue with such a baboon ?" 
-- Groucho Marx cheers -- denis From Nikolaus at rath.org Fri Oct 29 13:43:11 2010 From: Nikolaus at rath.org (Nikolaus Rath) Date: Fri, 29 Oct 2010 13:43:11 -0400 Subject: [SciPy-User] Field Liner Tracing In-Reply-To: <20101029144718.GA32646@phare.normalesup.org> (Gael Varoquaux's message of "Fri, 29 Oct 2010 16:47:18 +0200") References: <87sjzpf9gm.fsf@inspiron.ap.columbia.edu> <20101029144718.GA32646@phare.normalesup.org> Message-ID: <87mxpwgaj4.fsf@inspiron.ap.columbia.edu> Gael Varoquaux writes: > On Fri, Oct 29, 2010 at 08:51:37AM -0400, Nikolaus Rath wrote: >> It would be wonderful to have a method like trace_field_lines((Vx, Vy, Vz), X, >> Y, Z) where (Vx, Vy, Vz) are 3d numpy arrays containing the components of >> the vector field and X, Y, Z 1d grids containing the grid coordinates. >> Is there anything like that around already? > > If you just want plotting, there is: > > http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/auto/mlab_helper_functions.html#enthought.mayavi.mlab.flow Really interesting, thanks for the link. Unfortunately I can't use for my specific problem. My field lines go around in a circle, and I need to draw a Poincare plot (i.e., plot a point at the r, z coordinates whenever the trajectory crosses a fixed angle Phi). And in addition to that, I need to do some computations on the trajectories, so a purely graphical output would solve only part of the problem. Is there anything else I could try? Best, -Nikolaus -- ?Time flies like an arrow, fruit flies like a Banana.? PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C From jake.biesinger at gmail.com Fri Oct 29 16:15:45 2010 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Fri, 29 Oct 2010 13:15:45 -0700 Subject: [SciPy-User] Scipy views and slicing: Can I get a view-slice from only certain elements of an array? Message-ID: Hi! My question is on slicing and views. I'd like to be able to create a view of an array from some subset of indices. This can *almost* be done using array slices as follows: scores = scipy.ones((10,1)) subset = scores[:5] # changes to subset will reflect in scores-- reference to the same object subset[0] = 3 subset /= subset.sum() # renormalize subset, updating scores as well I can also do fancy slicing and the reference ("view") to the original array is intact... scores = scipy.ones((10,1)) subset = scores[:6:2] # elements 0,2,6 subset[0] = 3 subset /= subset.sum() # both subset and scores are updated, though subset is # not a contiguous slice of scores What I can't do is create a view with arbitrary indices: scores = scipy.ones((10,1)) subset = scores[[1,5,7]] # not a reference! subset[0] = 3 subset /= subset.sum() # does not update scores! Is there a way to do this? I've also tried: subset = [scores[1:2], scores[5:6], scores[6:7]] # these are references, but the container is a list, not an array # and the syntax is annoying subset = scipy.array([scores[1:2], scores[5:6], scores[6:7]) # no longer a reference... subset = scipy.array([scores[1:2], scores[5:6], scores[6:7], copy=True) # also not a reference... Any thoughts? The data I'm working on is millions of short high-throughput sequencing reads, each of which may have 2-100+ possible genomic alignments. Each alignment falls within a particular genomic bin (~150 bases) but also has a probability associated with the alignment (so the sum over all alignments for each read will be 1). 
I need to update all the alignments in a particular bin (from many different reads) and then (once all bins are updated) renormalize all the alignments for each read. My current strategy is to have a single 1D array with all the probabilities, then two lists with the indexes into the large array-- one list stores the indices that fall within a genomic bin, whereas the other list stores the indices associated with a particular alignment. This is working fine, but the memory requirements are a bit high (1.5GB) and it's a bit slow since there are millions of reads, meaning lots and lots of slices from that large array. I wonder if I could replace the indices in each list with a view of the original array-- it seems that would save me a bit of memory and would make the slicing faster. Thanks for your help! -- Jake Biesinger Graduate Student Xie Lab, UC Irvine (949) 231-7587 From robert.kern at gmail.com Fri Oct 29 16:30:51 2010 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 29 Oct 2010 15:30:51 -0500 Subject: [SciPy-User] Scipy views and slicing: Can I get a view-slice from only certain elements of an array? In-Reply-To: References: Message-ID: On Fri, Oct 29, 2010 at 15:15, Jacob Biesinger wrote: > What I can't do is create a view with arbitrary indices: > > scores = scipy.ones((10,1)) > subset = scores[[1,5,7]] ? # not a reference! > subset[0] = 3 > subset /= subset.sum() ?# does not update scores! > > Is there a way to do this? Not automatically, no. But if you keep that index list around, it's pretty straightforward to do the update manually. indices = [1, 5, 7] subset = scores[indices] subset[0] = 3 subset /= subset.sum() scores[indices] = subset It doesn't save you any memory, though. numpy arrays must be described by a starting memory location and uniform strides. It can't jump around arbitrarily in memory. Well, it could, but it would be slow for typical cases. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From jake.biesinger at gmail.com Fri Oct 29 16:51:39 2010 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Fri, 29 Oct 2010 13:51:39 -0700 Subject: [SciPy-User] Scipy views and slicing: Can I get a view-slice from only certain elements of an array? In-Reply-To: References: Message-ID: >> Is there a way to do this? > > Not automatically, no. But if you keep that index list around, it's > pretty straightforward to do the update manually. > > indices = [1, 5, 7] > subset = scores[indices] > subset[0] = 3 > subset /= subset.sum() > scores[indices] = subset > > It doesn't save you any memory, though. Right-- as I mentioned, this is my current strategy. It's just a bit slow since there are millions of sets of indices. > numpy arrays must be described by a starting memory location and > uniform strides. It can't jump around arbitrarily in memory. Well, it > could, but it would be slow for typical cases. I thought I was close with the list of array slices. For some operations (like taking the summation), I think this would work okay, but it would likely take more memory than a list of integer indices. One last thought-- It would be nice to be able to use array.array's as indices to scipy.array's. 
In other words: indices = array('l', [1,5,7]) scores[indices] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) This works fine but is a bit too much overhead for the memory savings I get from array's: scores[list(indices)] From almar.klein at gmail.com Fri Oct 29 16:53:26 2010 From: almar.klein at gmail.com (Almar Klein) Date: Fri, 29 Oct 2010 22:53:26 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> Message-ID: I agree with the OP. The Matlab file exchange is a great tool for developers to easily publish small (but also larger) pieces of code, and for people to search for particular code. As to what features I think such a system should have: It should be a repository capable of storing thousands of entries, which should be indexed, and categorized so that users can easily find code they are looking for. A review system and comments would also be nice. One `problem? is that many Python developers that have a great tool, publish it as an open source project on googlecode for example. Maybe such projects could be entered in the database as well, with a reference to the googlecode website for the code itself. I think it is a great idea, as it would help unite the Python scientific community. One (maybe the only) disadvantage I found when converting from Matlab to the Holy Language, is that the Python world seems a bit fragmented; you have to download Python from www.python.org, then numpy and scipy from scipy.org, etc. A repository of the likes of the Matlab file exchange would put at least all non standard code in a single place, which would be a big advantage. But who's going to set up and maintain such a big project, and whos going to pay for the server? Cheers, Almar On 29 October 2010 18:15, denis wrote: > > On Sun, Oct 24, 2010 at 10:43 AM, O wrote: > > > I'm a recent convert from MatLab. > > > One thing I miss is the Central File Exchange. Are there any plans > ... > > O, > can you describe a bit which parts of Matlab Central you want most ? > As Josef says, Scipy uses mail forums, ask.scipy.org, > stackoverflow ... > to answer questions pretty fast. Do you / do other users want > > - package reviews / comments / advertising -- > for new users, for experts ? Examples from Matlab ? > > - overviews of major areas, along the lines of Wikipedia articles > with links to detailed doc and recipes ? > > I believe that Scipy experts get more points for new stuff > and for answering questions than they would for either of these. > Matlab has a different reward system: > > "I believe in Art, but my manager believes in money, > and who am I to argue with such a baboon ?" > -- Groucho Marx > > cheers > -- denis > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From athanastasiou at gmail.com Fri Oct 29 17:19:24 2010 From: athanastasiou at gmail.com (Athanasios Anastasiou) Date: Fri, 29 Oct 2010 22:19:24 +0100 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: <4CCAAFCC.8070606@iki.fi> References: <4CCAAFCC.8070606@iki.fi> Message-ID: Hello everyone Thank you for your answers Esko and Skipper. 
It keeps giving me the same error with StringIO (at least its consistent :-) ). Setting dtype=None equals to passing no dtype at all in which case, loadtxt assumes that the file is full of floats which clearly will not work in my case. The file is a well formed CSV one. It gets loaded into gnumeric without any problems. My version of numpy is 1.3.0 I have also tried with a simple one line file containing dummy entries like: "????", "????", 1.0, 0.5, 1200 It still finds that buffer object unreadable :-) The traceback (or some of it) is: 501 # Then, view the array using the specified dtype. --> 502 X = np.array(X, dtype=np.dtype([('', t) for t in dtype_types])) 503 X = X.view(dtype) 504 else: So presumably, the file is read alright, but it's when it comes to doing that [('', t) for t in dtype_types] that the actual error is generated. So i am wondering if i am using the correct names or forms of those types...Then again...i have also tried with numpy.float and numpy.uint and it still fails with the same reason :-/ All the best Athanasios On Fri, Oct 29, 2010 at 12:28 PM, Esko Lehtonen wrote: > 27.10.2010 22:19, Athanasios Anastasiou kirjoitti: >> Hello everyone >> >> I have a CSV file with the following format: >> Unicode<100, Unicode<100, float, float, uint >> >> and i am trying to read it into a numpy array using: >> >> theData = numpy.loadtxt('primData.csv',delimiter=',',skiprows=1,dtype={'names':('NodeS','NodeT','flowF','flowB','ID'),'formats':('U100','U100','f','f','uint')}) >> >> This fails with "expected a readable buffer object". I have also tried >> first opening the file through codecs as a utf-8 file and passing the >> file object into loadtxt but i still get the same error. And i have >> also tried defining the dtype outside of loadtxt and making sure that >> it works before using it any further. >> >> However, when i tried the code that is available at the cookbook: >> (with some minor modifications) >> http://www.scipy.org/Cookbook/InputOutput#head-b0366eac0be19c3d7c32fc81c47a7c02508b6f52 >> >> It works exactly as expected. >> >> I don't mind using the latter but i am just wondering what it is that >> i am doing wrong with loadtxt(?) >> >> Any ideas why loadtxt fails in this way? > > Hi > > I have encountered that error many times. Every time there was something > wrong with the text file (e.g. a missing or extra comma somewhere, > especially in the end of line, or maybe an empty line). loadtxt and > genfromtxt are very picky with the format of files and the error message > is not very helpful. 
> > - Esko > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From alan.isaac at gmail.com Fri Oct 29 18:07:26 2010 From: alan.isaac at gmail.com (Alan G Isaac) Date: Fri, 29 Oct 2010 18:07:26 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> Message-ID: <4CCB459E.4060104@gmail.com> On 10/29/2010 4:53 PM, Almar Klein wrote: > The Matlab file exchange is a great tool for developers to easily publish small (but also larger) pieces of code The Cookbook holds small pieces of code: http://www.scipy.org/Cookbook fwiw, Alan Isaac From josh.holbrook at gmail.com Fri Oct 29 18:10:28 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Fri, 29 Oct 2010 14:10:28 -0800 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCB459E.4060104@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: I'm surprised nobody's mentioned pypi yet: http://pypi.python.org/pypi --Josh On Fri, Oct 29, 2010 at 2:07 PM, Alan G Isaac wrote: > On 10/29/2010 4:53 PM, Almar Klein wrote: >> The Matlab file exchange is a great tool for developers to easily publish small (but also larger) pieces of code > > The Cookbook holds small pieces of code: > http://www.scipy.org/Cookbook > > fwiw, > Alan Isaac > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Sat Oct 30 03:54:07 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 30 Oct 2010 09:54:07 +0200 Subject: [SciPy-User] Field Liner Tracing In-Reply-To: <87mxpwgaj4.fsf@inspiron.ap.columbia.edu> References: <87sjzpf9gm.fsf@inspiron.ap.columbia.edu> <20101029144718.GA32646@phare.normalesup.org> <87mxpwgaj4.fsf@inspiron.ap.columbia.edu> Message-ID: <20101030075407.GA3238@phare.normalesup.org> On Fri, Oct 29, 2010 at 01:43:11PM -0400, Nikolaus Rath wrote: > Unfortunately I can't use for my specific problem. My field lines go > around in a circle, and I need to draw a Poincare plot (i.e., plot a > point at the r, z coordinates whenever the trajectory crosses a fixed > angle Phi). And in addition to that, I need to do some computations on > the trajectories, so a purely graphical output would solve only part of > the problem. > Is there anything else I could try? I would personnally go for the ODE integrator in scipy, unless I want something really efficient and that does the interpolation of the vector field right, in which case I would dig in to reuse the Runge-Kutta engine behind the Mayavi function that I linked to. Unfortunately, it is in VTK, so using it requires quite a lot of work. Ga?l From faltet at pytables.org Sat Oct 30 06:00:05 2010 From: faltet at pytables.org (Francesc Alted) Date: Sat, 30 Oct 2010 12:00:05 +0200 Subject: [SciPy-User] Scipy views and slicing: Can I get a view-slice from only certain elements of an array? In-Reply-To: References: Message-ID: <201010301200.05508.faltet@pytables.org> A Friday 29 October 2010 22:51:39 Jacob Biesinger escrigu?: > One last thought-- It would be nice to be able to use array.array's > as indices to scipy.array's. 
In other words: > > indices = array('l', [1,5,7]) > scores[indices] > --------------------------------------------------------------------- > ------ IndexError Traceback (most > recent call last) > > This works fine but is a bit too much overhead for the memory savings > I get from array's: > scores[list(indices)] Why not using a numpy.array object instead of array.array? Indexing with them is much faster than using plain lists: >>> a = np.arange(1000) >>> b = np.arange(1e8) >>> timeit b[a] 100000 loops, best of 3: 10.4 ?s per loop >>> l = a.tolist() >>> timeit b[l] 10000 loops, best of 3: 66.5 ?s per loop NumPy arrays helps saving space too: >>> sys.getsizeof(l) 8072 >>> a.size*a.itemsize 8000 # 72 bytes less, not a lot but better than nothing And, if you don't need to address arrays larger than 2**31, then you can save more space yet if you choose the int32 type for indexing: >>> a4 = a.astype('i4') >>> a4.size*a4.itemsize 4000 >>> timeit b[a4] 100000 loops, best of 3: 10.2 ?s per loop # similar performance than i8 Hope this helps, -- Francesc Alted From faltet at pytables.org Sat Oct 30 06:04:36 2010 From: faltet at pytables.org (Francesc Alted) Date: Sat, 30 Oct 2010 12:04:36 +0200 Subject: [SciPy-User] Scipy views and slicing: Can I get a view-slice from only certain elements of an array? In-Reply-To: <201010301200.05508.faltet@pytables.org> References: <201010301200.05508.faltet@pytables.org> Message-ID: <201010301204.36335.faltet@pytables.org> A Saturday 30 October 2010 12:00:05 Francesc Alted escrigu?: > NumPy arrays helps saving space too: > >>> sys.getsizeof(l) > > 8072 > > >>> a.size*a.itemsize > > 8000 # 72 bytes less, not a lot but better than nothing Ooops. I forgot to include the numpy headers to this. So, probably a NumPy container is not more efficient (space-wise) than a plain list (unless you use shorter integers ;-) -- Francesc Alted From gerrit.holl at ltu.se Sat Oct 30 06:43:18 2010 From: gerrit.holl at ltu.se (Gerrit Holl) Date: Sat, 30 Oct 2010 12:43:18 +0200 Subject: [SciPy-User] Artificial Neural Network thoughts Message-ID: Hi, Summary of e-mail: "There are so many ANN packages and none seem very good; which one should I choose?" I am in the process of a transition from using primarily Matlab? to using Python and additional packages. I've been getting increasingly annoyed with Matlab? for a long time for reasons I probably don't need to explain here. My colleagues often accuse OSS of being too instable; not in the sense that it crashes, but that API's change too often and code just breaks upon increasing versions. Indeed, to get a several years old Python package written by a former co-worker working, I spend some hours digging through the code, because scipy now requires explicit imports (see http://projects.scipy.org/scipy/changeset/5206/trunk/scipy/__init__.py ), and this package relied on "from scipy import *'. Not nice, I don't see a good reason why this should break. But as Matlab? in its latest upgrade from 2010A to 2010B started to do the same (changing error identifiers, changing behaviour in neural network toolbox, both undocumented), I decided to go for Python. Of course, Python is not enough; for my work, I need at least numpy, scipy, matplotlib, ScientificPython, pytables (or another HDF5 interface), and a neural network toolkit. Many dependencies for my code, but alas, it works. I found more or less actively maintained packages that do what I need for all but the last toolkit. 
I need to train a regression, with neural networks or maybe other machine learning systems (support vector machines, bayesian monte carlo integration, ...). And frankly, I'm a bit worried about what's "out there". I like OSS, but here it appears considerably less strong than the propriety ANN package from Mathworks?, particularly when it comes to documentation. I am looking for advice. Is anyone using ANNs in production code, if so, what are the experiences? I have found: - Fast Artificial Neural Network Library (FANN), written in C but with Python bindings. Looks good, but appears to lack active maintenance. Version 2.1.0beta was published 2007-02-01 and the CVS repository appears unchanged for years. That doesn't sound good. Do I really want to rely on that? - bpnn.py, a 171 LOC lone pure Python file implementing back-propagation NN that "could use numpy to speed this up". Looks really simple and probably quite good to understand the concept, but is it any good for production code? Looks like I would need to do a lot of coding to make it fast (at least numpy-based) and tweak the settings. A Numeric-based implementation exists at http://www.dacya.ucm.es/jam/download.htm. It would probably take some time porting it to numpy. - ffnet, last (beta...) version published 2009-10-27 and last activity in svn tree 9 months ago. Seems allright, but is it stable? Is it maintained? Can I use it? - Monte Python appears to do neural networks. No idea how good it is. Last published version in June this year, verison 0.2.0, "introduces incompatibilities with previous versions" and was published more than 2 years after the version before. - pyann, http://sourceforge.net/projects/pyann/ "version 0.1.0" published in 2007 and last svn activity 4 months ago. Another one that seems really fragile. There are others. That is in fact a disadvantage. There are >5 packages and as far as I can see none sticks out. Do I really have the time to try them all out? Different needs, different qualities. But active maintenance is something anyone will be interested in, I suppose. Hence my question: What experiences do people have in using Artificial Neural Networks in Python+friends? What packages are suitable for production code and can be more or less relied on without having to understand all the details? Is any package much more widely used than the others? Thoughts are welcome! regards, Gerrit. -- Gerrit Holl PhD student at Department of Space Science, Lule? University of Technology, Kiruna, Sweden http://www.sat.ltu.se/members/gerrit/ From matthieu.brucher at gmail.com Sat Oct 30 06:50:09 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 30 Oct 2010 12:50:09 +0200 Subject: [SciPy-User] Artificial Neural Network thoughts In-Reply-To: References: Message-ID: Hi, For machine learning stuff, you may want to check : - scikits.learn - PyMPVA - ... Matthieu 2010/10/30 Gerrit Holl : > Hi, > > Summary of e-mail: "There are so many ANN packages and none seem very > good; which one should I choose?" > > I am in the process of a transition from using primarily Matlab? to > using Python and additional packages. I've been getting increasingly > annoyed with Matlab? for a long time for reasons I probably don't need > to explain here. My colleagues often accuse OSS of being too instable; > not in the sense that it crashes, but that API's change too often and > code just breaks upon increasing versions. 
Indeed, to get a several > years old Python package written by a former co-worker working, I > spend some hours digging through the code, because scipy now requires > explicit imports (see > http://projects.scipy.org/scipy/changeset/5206/trunk/scipy/__init__.py > ), and this package relied on "from scipy import *'. Not nice, I don't > see a good reason why this should break. But as Matlab? in its latest > upgrade from 2010A to 2010B started to do the same (changing error > identifiers, changing behaviour in neural network toolbox, both > undocumented), I decided to go for Python. Of course, Python is not > enough; for my work, I need at least numpy, scipy, matplotlib, > ScientificPython, pytables (or another HDF5 interface), and a neural > network toolkit. > > Many dependencies for my code, but alas, it works. I found more or > less actively maintained packages that do what I need for all but the > last toolkit. I need to train a regression, with neural networks or > maybe other machine learning systems (support vector machines, > bayesian monte carlo integration, ...). And frankly, I'm a bit worried > about what's "out there". I like OSS, but here it appears considerably > less strong than the propriety ANN package from Mathworks?, > particularly when it comes to documentation. I am looking for advice. > Is anyone using ANNs in production code, if so, what are the > experiences? I have found: > > - Fast Artificial Neural Network Library (FANN), written in C but with > Python bindings. Looks good, but appears to lack active maintenance. > Version 2.1.0beta was published 2007-02-01 and the CVS repository > appears unchanged for years. That doesn't sound good. Do I really want > to rely on that? > - bpnn.py, a 171 LOC lone pure Python file implementing > back-propagation NN that "could use numpy to speed this up". Looks > really simple and probably quite good to understand the concept, but > is it any good for production code? Looks like I would need to do a > lot of coding to make it fast (at least numpy-based) and tweak the > settings. A Numeric-based implementation exists at > http://www.dacya.ucm.es/jam/download.htm. It would probably take some > time porting it to numpy. > - ffnet, last (beta...) version published 2009-10-27 and last activity > in svn tree 9 months ago. Seems allright, but is it stable? Is it > maintained? Can I use it? > - Monte Python appears to do neural networks. No idea how good it is. > Last published version in June this year, verison 0.2.0, "introduces > incompatibilities with previous versions" and was published more than > 2 years after the version before. > - pyann, http://sourceforge.net/projects/pyann/ "version 0.1.0" > published in 2007 and last svn activity 4 months ago. Another one that > seems really fragile. > > There are others. That is in fact a disadvantage. There are >5 > packages and as far as I can see none sticks out. Do I really have the > time to try them all out? Different needs, different qualities. But > active maintenance is something anyone will be interested in, I > suppose. Hence my question: > > What experiences do people have in using Artificial Neural Networks in > Python+friends? What packages are suitable for production code and can > be more or less relied on without having to understand all the > details? Is any package much more widely used than the others? > > Thoughts are welcome! > > regards, > Gerrit. > > -- > Gerrit Holl > PhD student at Department of Space Science, Lule? 
University of > Technology, Kiruna, Sweden > http://www.sat.ltu.se/members/gerrit/ > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher From gael.varoquaux at normalesup.org Sat Oct 30 07:01:56 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 30 Oct 2010 13:01:56 +0200 Subject: [SciPy-User] Artificial Neural Network thoughts In-Reply-To: References: Message-ID: <20101030110156.GF3238@phare.normalesup.org> On Sat, Oct 30, 2010 at 12:43:18PM +0200, Gerrit Holl wrote: > There are others. That is in fact a disadvantage. Welcome to a world of choice and meritocracy. Choice is hard, for the user, but also for the developers. Different packages reflect different choices from the developers. Who can tell which choice is best without time for things to settle down? In my opinion, finding the right approach to neural-network-related studies is still very much an open research question. > What experiences do people have in using Artificial Neural Networks in > Python+friends? What packages are suitable for production code and can > be more or less relied on without having to understand all the > details? Is any package much more widely used than the others? I don't have experience. I don't do ANN (I tend to prefer other approaches to machine learning, and you have plenty of choice in Python for these too). But from what I hear I would suggest that you look at: * pybrain (http://pybrain.org/). It's a well-polished library with good documentation done by a research community that focuses on neural networks. My impression is that it will not scale on large datasets, but then some might say that neural networks won't scale on large datasets without relying on specificities of the learning problem :). * pylearn (http://deeplearning.net/software/pylearn/) is probably promising as these guys are leading the research field in modern approaches to ANN, and have shown very interesting approaches to programming (theano). It is probably a bit too early to use it as an end product, though. These guys however want to scale on large problems. HTH, Ga?l From almar.klein at gmail.com Sat Oct 30 07:44:45 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sat, 30 Oct 2010 13:44:45 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCB459E.4060104@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: On 30 October 2010 00:07, Alan G Isaac wrote: > On 10/29/2010 4:53 PM, Almar Klein wrote: > > The Matlab file exchange is a great tool for developers to easily publish > small (but also larger) pieces of code > > The Cookbook holds small pieces of code: > http://www.scipy.org/Cookbook > Yes, but it wouldn't really work if hundreds (or thousands) of people would submit pieces of code. Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gerrit.holl at gmail.com Sat Oct 30 08:02:46 2010 From: gerrit.holl at gmail.com (Gerrit Holl) Date: Sat, 30 Oct 2010 14:02:46 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: On 30 October 2010 13:44, Almar Klein wrote: > On 30 October 2010 00:07, Alan G Isaac wrote: >> On 10/29/2010 4:53 PM, Almar Klein wrote: >> > The Matlab file exchange is a great tool for developers to easily >> > publish small (but also larger) pieces of code >> >> The Cookbook holds small pieces of code: >> http://www.scipy.org/Cookbook > > Yes, but it wouldn't really work if hundreds (or thousands) of people would > submit pieces of code. Why not add it to scipy? Or if it doesn't fit put it somewhere and link it from http://www.scipy.org/Topical_Software Gerrit. From gael.varoquaux at normalesup.org Sat Oct 30 08:07:48 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 30 Oct 2010 14:07:48 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: <20101030120748.GA17768@phare.normalesup.org> On Sat, Oct 30, 2010 at 02:02:46PM +0200, Gerrit Holl wrote: > >> The Cookbook holds small pieces of code: > >> http://www.scipy.org/Cookbook > > Yes, but it wouldn't really work if hundreds (or thousands) of people would > > submit pieces of code. > Why not add it to scipy? Because code requires maintenance, releases, and quality assurance. If thousands of people start pushing code in scipy, they need to help doing both of these things. The goal of a repo with no garanties like Matlab Central is to lower the barrier to sharing code, but giving up on any garanties. > Or if it doesn't fit put it somewhere and > link it from http://www.scipy.org/Topical_Software That means buiding software packages, which is also more work than simply dumping code on a webpage. Ga?l From pav at iki.fi Sat Oct 30 08:14:41 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 30 Oct 2010 12:14:41 +0000 (UTC) Subject: [SciPy-User] Central File Exchange for SciPy References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: Sat, 30 Oct 2010 14:02:46 +0200, Gerrit Holl wrote: [clip] >> Yes, but it wouldn't really work if hundreds (or thousands) of people >> would submit pieces of code. > > Why not add it to scipy? Because Scipy should only receive general-purpose and good-quality code, and its release cycle is not that fast. On purpose: The scope of Scipy is mainly to contain "basic tools for numerical scientific computation". On quality: What you typically have at first is "research quality code" --- it works for your particular problem, but it might not do everything necessary, may actually be a poor way to solve the problem, you are not 100% sure it has no bugs, and you haven't tested it for other problems. Refining it from this point onwards takes quite a bit of effort. On speed: You typically would like to publish your code now and not wait for a year before it's out. 
-- Pauli Virtanen From matthew.brett at gmail.com Sat Oct 30 08:15:06 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 30 Oct 2010 13:15:06 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101030120748.GA17768@phare.normalesup.org> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: Hi, >> Or if it doesn't fit put it somewhere and >> link it from http://www.scipy.org/Topical_Software > > That means buiding software packages, which is also more work than simply > dumping code on a webpage. It's probably worth pointing out that most Matlab utilities are Matlab only (no extensions), and the code dumped is usually just an archive that you unpack somewhere and put on your Matlab path. That's the rough equivalent of a python package that is pure python, and for which the install method is copying or linking the directory into some directory on your python path. I can imagine something like a 'snippet' distribution format, which is just a README file, and the directory. Obviously if someone wanted to be more pypi about the whole thing, that would be easy too. See y'all, Matthew From washakie at gmail.com Sat Oct 30 08:15:28 2010 From: washakie at gmail.com (John) Date: Sat, 30 Oct 2010 14:15:28 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: +1 on pypi, but it needs some features added to it. Creating something between Matlab file exchange and vim scripts would be ideal. I've just checked out ask.scipy.org, but this doesn't seem what the OP is after. Furthermore, is there really no search feature here?? I think what pypi needs to fill this void is some features such as: 1) a separation of simple scripts versus packages (so really we need PyScript... also) 2) a good rating system 3) a good comment system --john On Sat, Oct 30, 2010 at 12:10 AM, Joshua Holbrook wrote: > I'm surprised nobody's mentioned pypi yet: ?http://pypi.python.org/pypi > > --Josh > > On Fri, Oct 29, 2010 at 2:07 PM, Alan G Isaac wrote: >> On 10/29/2010 4:53 PM, Almar Klein wrote: >>> The Matlab file exchange is a great tool for developers to easily publish small (but also larger) pieces of code >> >> The Cookbook holds small pieces of code: >> http://www.scipy.org/Cookbook >> >> fwiw, >> Alan Isaac >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Configuration `````````````````````````` Plone 2.5.3-final, CMF-1.6.4, Zope (Zope 2.9.7-final, python 2.4.4, linux2), Python 2.6 PIL 1.1.6 Mailman 2.1.9 Postfix 2.4.5 Procmail v3.22 2001/09/10 Basemap: 1.0 Matplotlib: 1.0.0 From washakie at gmail.com Sat Oct 30 08:21:01 2010 From: washakie at gmail.com (John) Date: Sat, 30 Oct 2010 14:21:01 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: Speaking of pypi, does anyone know if there is a way to see where the downloads came from as a package distributor. 
It would be interesting information. -john On Sat, Oct 30, 2010 at 2:15 PM, Matthew Brett wrote: > Hi, > >>> Or if it doesn't fit put it somewhere and >>> link it from http://www.scipy.org/Topical_Software >> >> That means buiding software packages, which is also more work than simply >> dumping code on a webpage. > > It's probably worth pointing out that most Matlab utilities are Matlab > only (no extensions), and the code dumped is usually just an archive > that you unpack somewhere and put on your Matlab path. > > That's the rough equivalent of a python package that is pure python, > and for which the install method is copying or linking the > directory into some directory on your python path. > > I can imagine something like a 'snippet' distribution format, which is > just a README file, and the directory. ?Obviously if > someone wanted to be more pypi about the whole thing, that would be > easy too. > > See y'all, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Sat Oct 30 08:21:33 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 30 Oct 2010 08:21:33 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101030120748.GA17768@phare.normalesup.org> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 8:07 AM, Gael Varoquaux wrote: > On Sat, Oct 30, 2010 at 02:02:46PM +0200, Gerrit Holl wrote: >> >> The Cookbook holds small pieces of code: >> >> http://www.scipy.org/Cookbook there is also the python cookbook (the interface looks closer to stackoverflow now) http://code.activestate.com/recipes/tags/numeric/ > >> > Yes, but it wouldn't really work if hundreds (or thousands) of people would >> > submit pieces of code. > >> Why not add it to scipy? > > Because code requires maintenance, releases, and quality assurance. If > thousands of people start pushing code in scipy, they need to help doing > both of these things. > > The goal of a repo with no garanties like Matlab Central is to lower the > barrier to sharing code, but giving up on any garanties. The big advantage in my view of the matlab file exchange is the ability to comment and rate an existing package and fork it if it looks like it can be improved with attribution link and has inspired links. And given that it is all (new code) clearly defined as BSD, it is safe to do so. This improves the quality control problem for the user quite a bit. The problem with pypi and "Topical Software", and as seen in the question on neural networks, is, for example, that dead and active projects are indistinguishable without finding the source repository and checking the updates. Without user contributed commenting it is a lot of work to maintain a list, see the (non)speed in cleaning up dead links on the Topical page. Josef > >> Or if it doesn't fit put it somewhere and >> link it from http://www.scipy.org/Topical_Software > > That means buiding software packages, which is also more work than simply > dumping code on a webpage. 
> > Ga?l > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From almar.klein at gmail.com Sat Oct 30 08:25:54 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sat, 30 Oct 2010 14:25:54 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> Message-ID: On 30 October 2010 00:10, Joshua Holbrook wrote: > I'm surprised nobody's mentioned pypi yet: http://pypi.python.org/pypi > You're right. Pypi already has quite a few of the required features. But still, for some reason I cannot put my finger on, Matlab central feels nicer. On 30 October 2010 14:15, John wrote: > > I think what pypi needs to fill this void is some features such as: > > 1) a separation of simple scripts versus packages (so really we need > PyScript... also) > > 2) a good rating system > > 3) a good comment system > These features might indeed be improved a bit. Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Oct 30 09:39:46 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 30 Oct 2010 09:39:46 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 8:15 AM, Matthew Brett wrote: > Hi, > >>> Or if it doesn't fit put it somewhere and >>> link it from http://www.scipy.org/Topical_Software >> >> That means buiding software packages, which is also more work than simply >> dumping code on a webpage. > > It's probably worth pointing out that most Matlab utilities are Matlab > only (no extensions), and the code dumped is usually just an archive > that you unpack somewhere and put on your Matlab path. > > That's the rough equivalent of a python package that is pure python, > and for which the install method is copying or linking the > directory into some directory on your python path. > > I can imagine something like a 'snippet' distribution format, which is > just a README file, and the directory. ?Obviously if > someone wanted to be more pypi about the whole thing, that would be > easy too. For plain python packages, "paster create" provides a full package structure with just filling out a few questions >paster create --list-templates Available templates: basic_package: A basic setuptools-enabled package complete: Complete, documentable, testable Python project template ... Just to be more pypi about it. (The only explanation a very short google search provides, is how it can be used for zope templates http://plone.org/documentation/kb/use-paster) Josef > > See y'all, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From craig.bulk at brechmos.org Sat Oct 30 10:02:57 2010 From: craig.bulk at brechmos.org (Craig Jones) Date: Sat, 30 Oct 2010 10:02:57 -0400 Subject: [SciPy-User] faster expm Message-ID: Hi All, I have been using scipy etc for just over a year now and am now exclusively using it. One application I have been doing requires a matrix exponential (in lieu of integration). 
In Matlab it was very fast but using scipy.linalg.expm it is quite slow (as the actual algorithm is implemented in python). In fact about 50% of the code is spent in expm. My matrix is typically 9x9 to 12x12 and is about 50% zeros (most nonzero are along the diagonal and just off diagonal). What I would really like is a faster expm. I am on a MacBook Pro and use Enthought. I know of things like pyrex, etc. I was thinking that gsl might be good as it is compiled but I was having problems with libraries and gsl. Really I just want a solution and don't want to spend time on all these things. The other option, I guess, is to use a Fortran solution and then merge that into python. Has anyone got any ideas faster solutions (both for expm and for me to get going :-) ). Craig -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Oct 30 10:13:09 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 30 Oct 2010 10:13:09 -0400 Subject: [SciPy-User] faster expm In-Reply-To: References: Message-ID: On Sat, Oct 30, 2010 at 10:02 AM, Craig Jones wrote: > Hi All, > > I have been using scipy etc for just over a year now and am now exclusively > using it.? One application I have been doing requires a matrix exponential > (in lieu of integration).? In Matlab it was very fast but using > scipy.linalg.expm it is quite slow (as the actual algorithm is implemented > in python).? In fact about 50% of the code is spent in expm.? My matrix is > typically 9x9 to 12x12 and is about 50% zeros (most nonzero are along the > diagonal and just off diagonal). > > What I would really like is a faster expm.? I am on a MacBook Pro and use > Enthought.? I know of things like pyrex, etc.? I was thinking that gsl might > be good as it is compiled but I was having problems with libraries and gsl. > Really I just want a solution and don't want to spend time on all these > things.? The other option, I guess, is to use a Fortran solution and then > merge that into python. > > Has anyone got any ideas faster solutions (both for expm and for me to get > going :-) ). Are the other two implementation also slow in your case ? scipy.linalg.expm2 scipy.linalg.expm3 Josef > > Craig > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From pav at iki.fi Sat Oct 30 10:24:01 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 30 Oct 2010 14:24:01 +0000 (UTC) Subject: [SciPy-User] faster expm References: Message-ID: Sat, 30 Oct 2010 10:02:57 -0400, Craig Jones wrote: > I have been using scipy etc for just over a year now and am now > exclusively using it. One application I have been doing requires a > matrix exponential (in lieu of integration). In Matlab it was very fast > but using scipy.linalg.expm it is quite slow (as the actual algorithm is > implemented in python). In fact about 50% of the code is spent in expm. > My matrix is typically 9x9 to 12x12 and is about 50% zeros (most > nonzero are along the diagonal and just off diagonal). Well, it's also quite possible the difference is just that the algorithm in Scipy is not optimal and does more matrix products than necessary, rather than the problem being in Python overhead. Would require benchmarking of the expm code. Moreover, Scipy's expm always uses a fixed degree for the Pade approximant, which might not be the optimal choice. 
-- Pauli Virtanen From william.ratcliff at gmail.com Sat Oct 30 11:00:47 2010 From: william.ratcliff at gmail.com (william ratcliff) Date: Sat, 30 Oct 2010 11:00:47 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: If we could automate it, how much do you think the bandwidth/hosting costs would be per month? Would it be restricted to just code (that is just text files, cutoff above a certain size)? No bug tracking and a simple rating system for packages? A section for comments about a given package. The submitter gives it up to 4 tags (for searching) and we start out with a given list of topics and let people additional ones later? People register for an account (to reduce spam) or do we just use Openid or Openauth? How do we deal with spam? Do we allow people to sort packages by date? Rating? Would people want to use Django? What would we call it? I'd be willing to purchase a domain name and pay for hosting on webfaction to try it out. If it gets too pricey then I may have to ask for help later. (I think we should avoid ads). What would you guys like to call it? PythonCentral (is that infringing?)? ScipyExchange? If anyone wants to help mock up a prototype in Django, I have some time next week. I have no design skills ;> Finally, licensing--I don't want to start a flame war or anything, but can we agree to make code on the site BSD, or should we allow the submitter to pick an open source license. If so, do we follow googlecode for the choice of license? One last question (sorry for so many), given how many people already have nice projects on github, sourceforge, googlecode, etc., should we provide an option for people to simply link to their repository rather than provide us with a direct copy of the code? Actually, one model could be that people host their code somewhere else and we merely provide an aggregation service so people can easily see what's out there in the scientific python universe and how the community has rated a given package. That way, developers can keep their existing codebases without changing their workflow.... William On Sat, Oct 30, 2010 at 9:39 AM, wrote: > On Sat, Oct 30, 2010 at 8:15 AM, Matthew Brett > wrote: > > Hi, > > > >>> Or if it doesn't fit put it somewhere and > >>> link it from http://www.scipy.org/Topical_Software > >> > >> That means buiding software packages, which is also more work than > simply > >> dumping code on a webpage. > > > > It's probably worth pointing out that most Matlab utilities are Matlab > > only (no extensions), and the code dumped is usually just an archive > > that you unpack somewhere and put on your Matlab path. > > > > That's the rough equivalent of a python package that is pure python, > > and for which the install method is copying or linking the > > directory into some directory on your python path. > > > > I can imagine something like a 'snippet' distribution format, which is > > just a README file, and the directory. Obviously if > > someone wanted to be more pypi about the whole thing, that would be > > easy too. > > For plain python packages, "paster create" provides a full package > structure with just filling out a few questions > > >paster create --list-templates > Available templates: > basic_package: A basic setuptools-enabled package > complete: Complete, documentable, testable Python project template > ... 
>
> Just to be more pypi about it.
> (The only explanation a very short google search provides is how it
> can be used for zope templates:
> http://plone.org/documentation/kb/use-paster)
>
> Josef
>
> > See y'all,
> >
> > Matthew

From craig.bulk at brechmos.org  Sat Oct 30 12:25:45 2010
From: craig.bulk at brechmos.org (Craig Jones)
Date: Sat, 30 Oct 2010 12:25:45 -0400
Subject: [SciPy-User] faster expm
In-Reply-To:
References:
Message-ID:

I tried expm, expm2 and expm3 and the total time was no different. Using q=2 on expm decreased the time by a bit (4m 20s to 3m 38s). Here is a profile when I use expm:

 ncalls   tottime  percall  cumtime  percall  filename:lineno(function)
 469200    58.505    0.000  158.295    0.000  matfuncs.py:26(expm)
4692000    23.989    0.000   23.989    0.000  {method 'any' of 'numpy.ndarray' objects}
2346000    23.508    0.000   52.313    0.000  function_base.py:523(asarray_chkfinite)
5664800    22.470    0.000   22.470    0.000  {numpy.core._dotblas.dot}
 938400    19.571    0.000   83.525    0.000  basic.py:23(solve)
 469200    18.089    0.000   76.861    0.000  bloch.py:41(updateA)
1878096    14.115    0.000   15.308    0.000  numerictypes.py:808(_can_coerce_all)
      1    13.843   13.843  290.918  290.918  bloch.py:90(solve)

In general my matrix looks something like:

   -13.59091     28.00000      8.00000  -6419.40476      0.00000      0.00000      0.00000      0.00000      0.00000
     0.05091    -58.30303      0.00000      0.00000  -9227.89435      0.00000      0.00000      0.00000      0.00000
     1.04000      0.00000 -100008.00000      0.00000      0.00000  -6419.40476      0.00000      0.00000      0.00000
  6419.40476      0.00000      0.00000    -13.59091     28.00000      8.00000      0.00000      0.00000      0.00000
     0.00000   9227.89435      0.00000      0.05091    -58.30303      0.00000      0.00000      0.00000      0.00000
     0.00000      0.00000   6419.40476      1.04000      0.00000 -100008.00000      0.00000      0.00000      0.00000
     0.00000      0.00000      0.00000      0.00000      0.00000      0.00000     -2.51948     28.00000      8.00000
     0.00000      0.00000      0.00000      0.00000      0.00000      0.00000      0.05091    -29.29870      0.00000
     0.00000      0.00000      0.00000      0.00000      0.00000      0.00000      1.04000      0.00000     -8.83333

And the call is:

y = dot(scipy.linalg.expm(A*pulse[m,2], q=2), (y+Ainvb)) - Ainvb

In all my experience python/numpy/scipy is very fast. I think that in this situation, because I am calling this so many times, it just feels bogged down. I do want to see if I can use this code with a fitting algorithm, so the faster I can make this the better off I am.

Any other ideas?

Craig

On Sat, Oct 30, 2010 at 10:24 AM, Pauli Virtanen wrote:
> Sat, 30 Oct 2010 10:02:57 -0400, Craig Jones wrote:
> > I have been using scipy etc for just over a year now and am now
> > exclusively using it. One application I have been doing requires a
> > matrix exponential (in lieu of integration). In Matlab it was very fast
> > but using scipy.linalg.expm it is quite slow (as the actual algorithm is
> > implemented in python). In fact about 50% of the code is spent in expm.
> > My matrix is typically 9x9 to 12x12 and is about 50% zeros (most
> > nonzero are along the diagonal and just off diagonal).
>
> Well, it's also quite possible the difference is just that the algorithm
> in Scipy is not optimal and does more matrix products than necessary,
> rather than the problem being in Python overhead. That would require
> benchmarking of the expm code.
> Moreover, Scipy's expm always uses a fixed
> degree for the Pade approximant, which might not be the optimal choice.
>
> --
> Pauli Virtanen

From apalomba at austin.rr.com  Sat Oct 30 12:30:51 2010
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Sat, 30 Oct 2010 11:30:51 -0500
Subject: [SciPy-User] searching SciPy-User email list
Message-ID:

Is there a way to search the SciPy-User email list?

-ap

From njs at pobox.com  Sat Oct 30 12:34:57 2010
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 30 Oct 2010 09:34:57 -0700
Subject: [SciPy-User] faster expm
In-Reply-To:
References:
Message-ID:

On Sat, Oct 30, 2010 at 7:02 AM, Craig Jones wrote:
> I have been using scipy etc for just over a year now and am now exclusively
> using it. One application I have been doing requires a matrix exponential
> (in lieu of integration). In Matlab it was very fast but using
> scipy.linalg.expm it is quite slow (as the actual algorithm is implemented
> in python). In fact about 50% of the code is spent in expm. My matrix is
> typically 9x9 to 12x12 and is about 50% zeros (most nonzero are along the
> diagonal and just off diagonal).

If Matlab can do it quickly, then there's no reason Scipy couldn't do it just as quickly... if someone wants to figure out how. There was actually a discussion of this a ~year and a half ago:

http://www.mail-archive.com/sage-devel at googlegroups.com/msg20281.html

It sounds like Matlab uses a better algorithm, which is described in a paper linked from that message. But I don't think anyone followed up at the time. Now that you're actually running into problems, maybe someone (maybe you?) will be more motivated :-).

-- Nathaniel

From josef.pktd at gmail.com  Sat Oct 30 12:36:32 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 30 Oct 2010 12:36:32 -0400
Subject: [SciPy-User] searching SciPy-User email list
In-Reply-To:
References:
Message-ID:

On Sat, Oct 30, 2010 at 12:30 PM, Anthony Palomba wrote:
> Is there a way to search the SciPy-User email list?

I'm using Google advanced search for the time when I wasn't subscribed; unfortunately it doesn't give a threaded view. E.g.:

kde site:http://mail.scipy.org/pipermail/scipy-user

Josef

From pav at iki.fi  Sat Oct 30 12:51:58 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 30 Oct 2010 16:51:58 +0000 (UTC)
Subject: [SciPy-User] Central File Exchange for SciPy
References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org>
Message-ID:

Sat, 30 Oct 2010 11:00:47 -0400, william ratcliff wrote:
> If we could automate it, how much do you think the bandwidth/hosting
> costs would be per month?

No idea. Probably the traffic wouldn't be too much, at least at first.

> No bug tracking and a simple rating system for packages?

Yes.

> A section for comments about a given package.

Perhaps with a possibility to up/downvote comments?
> The submitter gives it up to 4 tags (for searching) and we
> start out with a given list of topics and let people add additional ones
> later?

Yes. It might be useful to try to follow PyPi-style classifiers here, and extend them as needed.

> People register for an account (to reduce spam) or do we just use
> OpenID or OAuth? How do we deal with spam?

Email verification on registration + spam flagging by users + rel=nofollow in comments?

> Do we allow people to sort packages by date? Rating?

Yes and yes.

> Would people want to use Django?

Django will get the job done, and it's on the easier end of the spectrum of Python web frameworks. I'd pick it.

> I'd be willing to purchase a domain name and pay for hosting on
> webfaction to try it out. If it gets too pricey then I may have to
> ask for help later. (I think we should avoid ads.)

One possibility might be to ask if Enthought would be interested in sponsoring such a thing, and running it on the scipy.org servers. But that's for later, when there's actually something working to show.

> What would you guys like to call it?
> PythonCentral (is that infringing?)? ScipyExchange?

Well, it might be worth targeting it at the scientific audience, so the name choice should be in accord. Also, I'd avoid clone-ish names.

> If anyone wants to help mock up a prototype in Django, I have some time
> next week. I have no design skills ;>

I know some Django.

> Finally, licensing--I don't want to start a flame war or anything, but
> can we agree to make code on the site BSD, or should we allow the
> submitter to pick an open source license? If so, do we follow
> googlecode for the choice of license?

I believe allowing the submitter to pick an open-source license for bigger packages could be useful. However, for code snippets we might want to enforce BSD.

> One last question (sorry for so many), given how many people already
> have nice projects on github, sourceforge, googlecode, etc., should we
> provide an option for people to simply link to their repository rather
> than provide us with a direct copy of the code? Actually, one model
> could be that people host their code somewhere else and we merely
> provide an aggregation service so people can easily see what's out there
> in the scientific python universe and how the community has rated a
> given package. That way, developers can keep their existing codebases
> without changing their workflow....

Here, it would be best not to forget that we already have the scikits.* namespace packages, and scikits.appspot.com. How that web app works is that people just upload a package named scikits.something on PyPi, and the portal picks it up from there. The new system should be a "spiritual successor" to scikits, with more features etc., and a friendly hosting option for small snippets.

So yes, it should definitely allow externally hosted packages, especially PyPi. Perhaps it would even be useful to automatically import science packages (including scikits) from PyPi. The package entry should also be usable only as an "advertisement" for a package, with the package itself being hosted elsewhere. (Here, users should be able to flag broken links etc.)

Another thing that should be considered: the system should enforce that metadata is entered: package descriptions should be sufficiently detailed, a suitable number of tags should be entered, etc.
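For concreteness, here is a first sketch of what the core models could look like in Django (all names and fields here are provisional, not an existing schema):

from django.db import models
from django.contrib.auth.models import User

class Tag(models.Model):
    # Free-form tags; could be seeded from PyPI-style classifiers.
    name = models.CharField(max_length=50, unique=True)

class Package(models.Model):
    # One entry per snippet, package, or externally hosted project.
    name = models.CharField(max_length=100)
    description = models.TextField()        # the form would enforce a minimum length
    home_url = models.URLField(blank=True)  # external repository or PyPI page
    license = models.CharField(max_length=30, default='BSD')
    tags = models.ManyToManyField(Tag)
    submitted = models.DateTimeField(auto_now_add=True)

class Rating(models.Model):
    # One vote per user per package, averaged for display.
    package = models.ForeignKey(Package)
    user = models.ForeignKey(User)
    score = models.IntegerField()  # say, 1 to 5

    class Meta:
        unique_together = ('package', 'user')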
--
Pauli Virtanen

From apalomba at austin.rr.com  Sat Oct 30 12:59:28 2010
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Sat, 30 Oct 2010 11:59:28 -0500
Subject: [SciPy-User] searching SciPy-User email list
In-Reply-To:
References:
Message-ID:

How can I search this?

-ap

On Sat, Oct 30, 2010 at 11:36 AM, wrote:
> On Sat, Oct 30, 2010 at 12:30 PM, Anthony Palomba wrote:
> > Is there a way to search the SciPy-User email list?
>
> I'm using Google advanced search for the time when I wasn't
> subscribed; unfortunately it doesn't give a threaded view.
>
> e.g.
> kde site:http://mail.scipy.org/pipermail/scipy-user
>
> Josef

From almar.klein at gmail.com  Sat Oct 30 13:00:15 2010
From: almar.klein at gmail.com (Almar Klein)
Date: Sat, 30 Oct 2010 19:00:15 +0200
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To:
References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org>
Message-ID:

On 30 October 2010 17:00, william ratcliff wrote:
> If we could automate it, how much do you think the bandwidth/hosting costs
> would be per month? Would it be restricted to just code (that is just text
> files, cutoff above a certain size)? No bug tracking and a simple rating
> system for packages? A section for comments about a given package. The
> submitter gives it up to 4 tags (for searching) and we start out with a
> given list of topics and let people add additional ones later? People register
> for an account (to reduce spam) or do we just use OpenID or OAuth?
> How do we deal with spam? Do we allow people to sort packages by date?
> Rating? Would people want to use Django? What would we call it? I'd be
> willing to purchase a domain name and pay for hosting on webfaction to try
> it out. If it gets too pricey then I may have to ask for help later. (I
> think we should avoid ads.) What would you guys like to call it?

Woaw, I like your enthusiasm! However, let's first establish whether we should discard Pypi or if we can maybe make it suitable for our needs with a few changes (assuming that the rest of the Python community lets us make these changes). One maybe-downside is that Pypi is for Python in general. Is this a problem? Do we want something purely for science and engineering?

> PythonCentral (is that infringing?)? ScipyExchange?

If we're doing this, I guess it'd be science focused, so I suggest a name with a reference to science or scipy.

> If anyone wants to help mock up a prototype in Django, I have some time
> next week. I have no design skills ;>
>
> Finally, licensing--I don't want to start a flame war or anything, but can
> we agree to make code on the site BSD, or should we allow the submitter to
> pick an open source license. If so, do we follow googlecode for the choice
> of license?

Given that Python is mainly BSD oriented, I would vote for making all code hosted at the site BSD. Maybe larger projects that are only referenced (as you also suggested) may choose their own license. (I actually own two non-BSD projects because I did not fully understand the value/importance of the BSD license in the Python world. I was recently convinced by a wise man and will convert both my projects to BSD.)
> One last question (sorry for so many), given how many people already have
> nice projects on github, sourceforge, googlecode, etc., should we provide an
> option for people to simply link to their repository rather than provide us
> with a direct copy of the code? Actually, one model could be that people
> host their code somewhere else and we merely provide an aggregation service
> so people can easily see what's out there in the scientific python universe
> and how the community has rated a given package. That way, developers can
> keep their existing codebases without changing their workflow....

I definitely think this is a good idea. The site would then serve as the central place to search for scientific Python projects, without the need for people to host their projects at two locations.

Almar

PS: While writing this, Pauli also sent his response. I'm happy to see that we agree on most topics :)

From mondifero at gmail.com  Sat Oct 30 13:01:57 2010
From: mondifero at gmail.com (O)
Date: Sat, 30 Oct 2010 13:01:57 -0400
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To:
References:
Message-ID:

William,

Bravo! If you decide to follow through on this, I think it could be *huge*. One other question is whether to do it for just scipy or python generally. I think people really need a place to deposit snippets of useful code with a rating system and commentary.
If it takes off, I think it would attract lots more people to Python and scipy/numpy.

I agree with all of your suggestions. Maybe contact python.org to see if you can get a link to it. As for what to call it, "PythonExchange" is a third option.

With respect to licensing, I'd allow the submitters to choose, and supply guidelines on the site about how to do this properly in their code (if they want to). Allowing people to link to projects they have elsewhere is an excellent idea.

O
(phaustus)

From apalomba at austin.rr.com  Sat Oct 30 13:13:51 2010
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Sat, 30 Oct 2010 12:13:51 -0500
Subject: [SciPy-User] Scipy and random number algorithms...
Message-ID:

Does anyone know if scipy has a library of random number generators like: brownian, Gaussian, fractal, etc.? Alternatively, does anyone know of some other python package that does these things?

Thanks,
Anthony

From matthieu.brucher at gmail.com  Sat Oct 30 13:25:29 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 30 Oct 2010 19:25:29 +0200
Subject: [SciPy-User] Scipy and random number algorithms...
In-Reply-To:
References:
Message-ID:

Hi,

Numpy has the random generators ;)

Matthieu

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From josef.pktd at gmail.com  Sat Oct 30 13:26:19 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 30 Oct 2010 13:26:19 -0400
Subject: [SciPy-User] Scipy and random number algorithms...
In-Reply-To:
References:
Message-ID:

On Sat, Oct 30, 2010 at 1:13 PM, Anthony Palomba wrote:
> Does anyone know if scipy has a library of random
> number generators like: brownian, Gaussian, fractal, etc.
> Alternately does anyone know of some other python
> package that does these things?

I have no idea about fractals. numpy/scipy has random variable generators for many distributions, pymc has many for MCMC, statsmodels has generators for ARMA processes and some continuous-time stochastic processes, and I think there are others around.

Can you be a bit more specific? Brownian motion with discrete steps is in statsmodels, and I have seen one from Jonathan Taylor that also has interpolation.
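For the simplest cases plain numpy already goes a long way; as a quick sketch (not taken from any of the packages above), a Gaussian sample and a discrete Brownian path are a couple of lines:

import numpy as np

np.random.seed(0)                   # make the example reproducible
dt, n = 0.01, 1000
gauss = np.random.randn(n)          # Gaussian white noise, N(0, 1)
brownian = np.cumsum(np.sqrt(dt) * gauss)   # Brownian path with time step dt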
Josef

From denis-bz-gg at t-online.de  Sat Oct 30 13:32:56 2010
From: denis-bz-gg at t-online.de (denis)
Date: Sat, 30 Oct 2010 10:32:56 -0700 (PDT)
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To: <227d307f-edf1-4965-8a9e-e6979c20e639@j2g2000yqf.googlegroups.com>
Message-ID: <227d307f-edf1-4965-8a9e-e6979c20e639@j2g2000yqf.googlegroups.com>

Folks,

That's quite a few good comments in various directions. It looks to me as though we want a combination of two things:
- a database of numpy/scipy packages, searchable on tags, date ...
- a web front end for user comments on packages.

These exist to some extent, as concrete models: for the database part,
http://code.google.com/hosting/search?q=label%3Anumpy&projectsearch=Search+projects
looks reasonable, but is for code.google packages only. For commenting, I like the stackoverflow (== Solace ?) realtime html subset. There must be other rating/commenting systems to use as partial models?

On rating: book ratings that are mostly 5-star are worthless unless the reviewer can articulate why. So I'd leave number ratings out; people can say "no doc" or "excellent doc" etc. in the text.

On Pypi: imho way too big and old, improvements take years. Scipy is plenty -- start small.

So what's next, towards a version 0 of "Scipy Review":
- find an expert who's built a db + web interface or two -- perhaps the Ask.scipy.org people?
- 1-page spec (matter of taste, I like a paper spec first).

cheers
-- denis

From gael.varoquaux at normalesup.org  Sat Oct 30 13:42:18 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 30 Oct 2010 19:42:18 +0200
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To:
References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org>
Message-ID: <20101030174218.GB17768@phare.normalesup.org>

On Sat, Oct 30, 2010 at 07:00:15PM +0200, Almar Klein wrote:
> Woaw, I like your enthusiasm! However, let's first establish whether we
> should discard Pypi or if we can maybe make it suitable for our needs with
> a few changes (assuming that the rest of the Python community lets us make
> these changes).

Yes, I am impressed by the positive attitude that this thread is taking. Congratulations for that, and for offering time and energy.

I think the idea in general is an excellent one. In addition to what has already been said, here are a few remarks, in random order:

    * Don't disregard PyPI for well-maintained packages: we need the
      non-scientific Python community, let us not break apart from them.
      Besides, scientific users also need packages to read XML, talk over
      the network... On the other hand, for simply cookbook-like stuff,
      I believe that there might be some value in having a scipy-specific
      repository.

    * Searching is important, but can be done easily using a Google
      custom search. For example, I have been impressed by what Fabian
      did for the scikit-learn's website,
      http://scikit-learn.sourceforge.net/, on the top right of the
      webpage. Try it out, it is cooler than it seems.

    * License: let's at least force users to choose an OSI-compatible
      license. I would try to push them toward the BSD, as in my
      experience many people choose GPL by default, but I would not
      enforce this choice.

    * Self hosting, bandwidth... this should not stop anybody from
      starting wild ideas. http://docs.scipy.org started as a crazy idea
      hosted on my girlfriend's server (and coded by Stefan, Pauli and
      herself :P). It migrated to Enthought-hosted servers when it became
      more 'production-ready'.

By the way, speaking of migration of service, now that ask.scipy.org is in production, we should add a link to it on the sidebar of the scipy.org website, the docs frontpage, and the planet. I can do it for the planet, so if someone does it for the scipy.org wiki, I'll just copy the design.

In general, I think that it is important that all these websites be linked together.
As Eleftherios would say, 'Go, go team'

Gaël

From josef.pktd at gmail.com  Sat Oct 30 14:18:40 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 30 Oct 2010 14:18:40 -0400
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To: <20101030174218.GB17768@phare.normalesup.org>
References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org>
Message-ID:

On Sat, Oct 30, 2010 at 1:42 PM, Gael Varoquaux wrote:
> I think the idea in general is an excellent one. In addition to what has
> already been said, here are a few remarks, in random order:
>
>     * Don't disregard PyPI for well-maintained packages: we need the
>       non-scientific Python community, let us not break apart from them.
>       Besides, scientific users also need packages to read XML, talk over
>       the network... On the other hand, for simply cookbook-like stuff,
>       I believe that there might be some value in having a scipy-specific
>       repository.
>
>     * License: let's at least force users to choose an OSI-compatible
>       license. I would try to push them toward the BSD, as in my
>       experience many people choose GPL by default, but I would not
>       enforce this choice.

I would strongly recommend (to users) that all shorter code, snippets and recipes be BSD by default or made explicit by the user, and that the license be very easy to see on the web page. Given that we are writing BSD code, and to avoid any conflicts, I essentially ignore all non-BSD code, for example on the matlab file exchange.

Josef
From almar.klein at gmail.com  Sat Oct 30 15:00:23 2010
From: almar.klein at gmail.com (Almar Klein)
Date: Sat, 30 Oct 2010 21:00:23 +0200
Subject: [SciPy-User] Central File Exchange for SciPy
In-Reply-To:
References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org>
Message-ID:

> I would strongly recommend (to users) that all shorter code, snippets
> and recipes be BSD by default or made explicit by the user, and that
> the license be very easy to see on the web page.
> Given that we are writing BSD code, and to avoid any conflicts, I
> essentially ignore all non-BSD code, for example on the matlab file
> exchange.

Hear hear! Since most Python code is BSD licensed, a module/package using a non-BSD-compatible license (for example the GPL) would be incompatible with, well, almost all Python code. This may sound trivial, but I, for one, did not fully understand this until someone explained it.

I would even go so far as to force a BSD license for all code hosted on the site itself. Referenced code can then still choose a license. At the very least there should be a proper explanation that people should choose the BSD license in most cases, and *why*.

Almar
URL: From warren.weckesser at enthought.com Sat Oct 30 17:42:02 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 30 Oct 2010 16:42:02 -0500 Subject: [SciPy-User] Comments wanted for the API of an enhanced firwin function in scipy.signal Message-ID: In ticket #902 (http://projects.scipy.org/scipy/ticket/902), we're working on an enhancement to the firwin function in scipy.signal. The implementation is basically done, but I would like some additional input on the new API. If you are interested in FIR filter design, take a look at that ticket, and add comments to that ticket (or reply here) if you have any suggestions. Thanks, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From ocefpaf at gmail.com Sat Oct 30 17:43:32 2010 From: ocefpaf at gmail.com (Filipe Pires Alvarenga Fernandes) Date: Sat, 30 Oct 2010 17:43:32 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 15:00, Almar Klein wrote: > >> I would strongly recommend (to users) that all shorter code, snippets >> and recipes, are BSD by default or made explicit by the user, and that >> the license is very easy to see on the web page. >> Given that we are writing BSD code and to avoid any conflicts, I >> essentially ignore all non-BSD code, for example on the matlab file >> exchange. > > Hear hear! Since most Python code is BSD licensed, a module/package using > non-BSD compatible license (for example GLP) would be incompatible with, > well almost all Python code. This may sound trivial, but I, for one, did not > fully understand this until someone explained it. > > I would even go so far as to force a BSD license for all code hosted on the > site itself. Referenced code can then still choose a license. At the very > least there should be a proper explanation that people should chose the BSD > license in most cases, and *why*. Well even the "original" file exchange ended up forcing BSD licensing: http://www.mathworks.com/matlabcentral/FX_transition_faq.html To me that's the first limitation for considering Pypi similar to a possible file exchange. Don't get me wring, I'm a big fan of Pypi, but it is far more complex than a file exchange (That's the second limitation). > ? Almar From fperez.net at gmail.com Sat Oct 30 18:22:56 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 15:22:56 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 8:00 AM, william ratcliff wrote: > If we could automate it, how much do you think the bandwidth/hosting costs > would be per month? ? Would it be restricted to just code (that is just text > files, cutoff above a certain size)? ?No bug tracking and a simple rating > system for packages? A section for comments about a given package. ?The > submitter gives it up to 4 tags (for searching) and we start out with a > given list of topics and let people additional ones later? ?People register > for an account (to reduce spam) or do we just use Openid or Openauth? > How do we deal with spam? ?Do we allow people to sort packages by date? > ?Rating? 
?Would people want to use Django? ? What would we call it? ?I'd be > willing to purchase a domain name and pay for hosting on webfaction to try > it out. ?If it gets too pricey then I may have to ask for help later. (I > think we should avoid ads). ? ?What would you guys like to call it? Just a few comments from the sidelines... I think it would be really great if every snippet had an automatic version control history associated with it. For me, the gist model at github is perfect in this regard. Consider for example (random gist I found that had numpy in it): http://gist.github.com/364369 This very simple page has all the code, a download button, space for comments, revision history and a 'fork' button. The last two for me are very, very important: they plant the seed that allows a simple script to very easily grow into something larger. The author has an easy way to make improvements and track those (with near-zero setup overhead), and the 'fork' button makes it easy for others to contribute. For multi-file projects, the obvious counterpart is obviously a real repo (github or whatever). I know it may feel a little harsh to push a specific version control system, but to me the idea of binding revision history and forking support as an integral part of a 'file exchange' is actually important. I think that we should try not just to replicate matlab's file exchange website, but rather to do better. And I think that pervasive version control 'as a way of life' is actually one ingredient in the right direction. In any case, there's zero chance that I'll do any actual work on this, so consider this idle chat from the peanut gallery :) I'll be happy to use anything those actually putting in the real elbow grease can come up with. Regards, f From almar.klein at gmail.com Sat Oct 30 18:54:03 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sun, 31 Oct 2010 00:54:03 +0200 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> Message-ID: On 30 October 2010 22:38, william ratcliff wrote: > How about "scisnippets"? I'll try to start a prototype tomorrow so we have > something tangible. > That sounds a bit too narrow. From how I see it, we can distinguish three or four submit-categories (I'm not sure whether the first two should be combined): * snippets (few lines of code) * modules / scripts (single file) * packages (multiple files) * referenced project (linking to say googlecode or github) Thanks for taking this on, William. I really think this is a great project and I think this can become something big. Sadly, I'm not in a position to do any actual work, as I have no experience with Zope, and have not much free time to spare with my 5 month old son at home :) Cheers, Almar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cycomanic at gmail.com Sat Oct 30 18:59:34 2010 From: cycomanic at gmail.com (=?UTF-8?B?Sm9jaGVuIFNjaHLDtmRlcg==?=) Date: Sun, 31 Oct 2010 09:59:34 +1100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> Message-ID: <4CCCA356.8090000@gmail.com> On 31/10/10 06:00, Almar Klein wrote: > > I would strongly recommend (to users) that all shorter code, snippets > and recipes, are BSD by default or made explicit by the user, and that > the license is very easy to see on the web page. > Given that we are writing BSD code and to avoid any conflicts, I > essentially ignore all non-BSD code, for example on the matlab file > exchange. > > > Hear hear! Since most Python code is BSD licensed, a module/package > using non-BSD compatible license (for example GLP) would be incompatible > with, well almost all Python code. This may sound trivial, but I, for > one, did not fully understand this until someone explained it. > > I would even go so far as to force a BSD license for all code hosted on > the site itself. Referenced code can then still choose a license. At the > very least there should be a proper explanation that people should chose > the BSD license in most cases, and *why*. > > Almar Let me first say that I love your idea and the enthusiasm you've already created. However I really take issue with the above statement and the notion of forcing a specific OSS licence choice onto users. First your statement above is factually not correct: GPL is a BSD compatible licence (in the usual meaning of this phrase), i.e. you can include BSD code in a GPL project. You can also do otherwise, however then your project effectively becomes GPL. Secondly, the argument that most Python code is already BSD, one could just as well make the argument that most OSS code is GPL so use GPL. Furthermore your argument also ignores the fact that if you're using (ctypes, cython) wrappers around C-code you will probably be bound by the licence of the C-library so some code might not have a choice. Finally the biggest problem I have is with the notion that forcing a specific OSS choice onto developers is ok. If someone chooses a licence, they have a reason to do so and it is their choice. The funny thing is that the "free software crowd", often gets accused of this, however I've found that often the BSD crowd is a lot worse, and often quite hostile towards GPL licensing. Anyway I don't want to start a licence flamewar. Now a option to restrict search to common OSS licences I'm all for that! Cheers Jochen > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From matthew.brett at gmail.com Sat Oct 30 19:19:15 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 31 Oct 2010 00:19:15 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCCA356.8090000@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: Hi, > However I really take issue with the above statement and the notion of > forcing a specific OSS licence choice onto users. 
First your statement > above is factually not correct: GPL is a BSD compatible licence (in the > usual meaning of this phrase) BSD is GPL compatible (you can include BSD code in a GPL project and still stay GPL) but not the other other way round. You'll see that matlab file exchange does _enforce_ BSD - from the page cited above: http://www.mathworks.com/matlabcentral/FX_transition_faq.html "Can I ask you to consider [my favorite license]? No. For consistency, the BSD license will be the standard for the File Exchange. " Of course it's OK in general to have other licenses, but maybe as an exception for this site, and thus linked-to rather than hosted. Best, Matthew From william.ratcliff at gmail.com Sat Oct 30 19:36:13 2010 From: william.ratcliff at gmail.com (william ratcliff) Date: Sat, 30 Oct 2010 19:36:13 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: Let me think about how to implement the auto-repo part. In the mean time, what about something along the lines of: http://djangosnippets.org/ William On Sat, Oct 30, 2010 at 6:22 PM, Fernando Perez wrote: > On Sat, Oct 30, 2010 at 8:00 AM, william ratcliff > wrote: > > If we could automate it, how much do you think the bandwidth/hosting > costs > > would be per month? Would it be restricted to just code (that is just > text > > files, cutoff above a certain size)? No bug tracking and a simple rating > > system for packages? A section for comments about a given package. The > > submitter gives it up to 4 tags (for searching) and we start out with a > > given list of topics and let people additional ones later? People > register > > for an account (to reduce spam) or do we just use Openid or Openauth? > > How do we deal with spam? Do we allow people to sort packages by date? > > Rating? Would people want to use Django? What would we call it? I'd > be > > willing to purchase a domain name and pay for hosting on webfaction to > try > > it out. If it gets too pricey then I may have to ask for help later. (I > > think we should avoid ads). What would you guys like to call it? > > Just a few comments from the sidelines... I think it would be really > great if every snippet had an automatic version control history > associated with it. For me, the gist model at github is perfect in > this regard. Consider for example (random gist I found that had numpy > in it): > > http://gist.github.com/364369 > > This very simple page has all the code, a download button, space for > comments, revision history and a 'fork' button. The last two for me > are very, very important: they plant the seed that allows a simple > script to very easily grow into something larger. The author has an > easy way to make improvements and track those (with near-zero setup > overhead), and the 'fork' button makes it easy for others to > contribute. > > For multi-file projects, the obvious counterpart is obviously a real > repo (github or whatever). > > I know it may feel a little harsh to push a specific version control > system, but to me the idea of binding revision history and forking > support as an integral part of a 'file exchange' is actually > important. I think that we should try not just to replicate matlab's > file exchange website, but rather to do better. And I think that > pervasive version control 'as a way of life' is actually one > ingredient in the right direction. 
> > In any case, there's zero chance that I'll do any actual work on this, > so consider this idle chat from the peanut gallery :) I'll be happy > to use anything those actually putting in the real elbow grease can > come up with. > > Regards, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sat Oct 30 19:37:01 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 31 Oct 2010 08:37:01 +0900 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCCA356.8090000@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: 2010/10/31 Jochen Schr?der : > On 31/10/10 06:00, Almar Klein wrote: >> >> ? ? I would strongly recommend (to users) that all shorter code, snippets >> ? ? and recipes, are BSD by default or made explicit by the user, and that >> ? ? the license is very easy to see on the web page. >> ? ? Given that we are writing BSD code and to avoid any conflicts, I >> ? ? essentially ignore all non-BSD code, for example on the matlab file >> ? ? exchange. >> >> >> Hear hear! Since most Python code is BSD licensed, a module/package >> using non-BSD compatible license (for example GLP) would be incompatible >> with, well almost all Python code. This may sound trivial, but I, for >> one, did not fully understand this until someone explained it. >> >> I would even go so far as to force a BSD license for all code hosted on >> the site itself. Referenced code can then still choose a license. At the >> very least there should be a proper explanation that people should chose >> the BSD license in most cases, and *why*. >> >> ? ?Almar > > Let me first say that I love your idea and the enthusiasm you've already > created. > > However I really take issue with the above statement and the notion of > forcing a specific OSS licence choice onto users. First your statement > above is factually not correct: GPL is a BSD compatible licence (in the > usual meaning of this phrase), i.e. you can include BSD code in a GPL > project. You can also do otherwise, however then your project > effectively becomes GPL. > Secondly, the argument that most Python code is already BSD, one could > just as well make the argument that most OSS code is GPL so use GPL. > Furthermore your argument also ignores the fact that if you're using > (ctypes, cython) wrappers around C-code you will probably be bound by > the licence of the C-library so some code might not have a choice. > > Finally the biggest problem I have is with the notion that forcing a > specific OSS choice onto developers is ok This is for code snippets - if you want to choose a specific license, then nobody forces you not to use it. We just don't support it through code snippets. 
David From njs at pobox.com Sat Oct 30 20:43:39 2010 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 30 Oct 2010 17:43:39 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 3:22 PM, Fernando Perez wrote: > Just a few comments from the sidelines... I think it would be really > great if every snippet had an automatic version control history > associated with it. ?For me, the gist model at github is perfect in > this regard. ?Consider for example (random gist I found that had numpy > in it): > > http://gist.github.com/364369 > > This very simple page has all the code, a download button, space for > comments, revision history and a 'fork' button. ?The last two for me > are very, very important: they plant the seed that allows a simple > script to very easily grow into something larger. ?The author has an > easy way to make improvements and track those (with near-zero setup > overhead), and the 'fork' button makes it easy for others to > contribute. gist.github.com is *really* slick, but... I'm guessing it wouldn't be so easy to reimplement for someone who hasn't just implemented github? And it seems to me that the sort of people who use git (i.e., people with a substantial investment of time and mental energy in "real programming") are already pretty well supported by existing infrastructure. I'm not going to be working on this either, so this is also from the peanut gallery, but... if I *were* doing this project, my focus would be on achieving exactly two things as quickly as possible: 1) A minimum ceremony way for your average scientific programmer to get some useful code they wrote online. Maximum five steps (or fewer would be better!): a) log-in, b) type some text about what the snippet does, c) check the box saying yeah they understand what BSD means, d) paste in the code, e) hit submit. Maybe there should be some extra optional steps for richer metadata or whatever, but srsly, you cannot make "understand the GPL" or "know what git is" or "fill out this complicated form to specify tags in our obscure Trove ontology" prerequisites for scientific programmers to contribute. 2) Solid one-stop-shopping support for scientific code. (If you do this right, then everyone will use the site, and then it's what they'll think of when they have something useful to upload!) That means, a good search function for all the snippets that have been uploaded. It also means the search function needs to know about "proper" packages -- searching for "wavelets" should find pywt, etc. I'm not sure if that's best done by searching pypi directly, or by having people explicitly enter pointers to scientific software into the database -- I'd probably do the latter because it's both quicker to implement and would keep the search results much more focused. And for real one-stop-shopping, searches should be able to find functions embedded inside larger packages (so e.g. searching for matrix exponential should give you a hit on scipy.linalg.expm). I guess this means, index the documentation for at least numpy and scipy, and maybe the docs for other packages as they get added? Obviously there are lots of enhancements one can imagine -- tracking of multiple versions of the same snippet, discussions, syntax highlighting, finding related snippets, git support, etc. 
etc., and there are lots more ideas in this thread -- but I'd start by lasering in on those two features, work hard on making the fundamentals as useful as possible, and then build up from there. Hope that's useful, -- Nathaniel From fperez.net at gmail.com Sat Oct 30 21:00:14 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 18:00:14 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 5:43 PM, Nathaniel Smith wrote: > gist.github.com is *really* slick, but... I'm guessing it wouldn't be > so easy to reimplement for someone who hasn't just implemented github? > And it seems to me that the sort of people who use git (i.e., people > with a substantial investment of time and mental energy in "real > programming") are already pretty well supported by existing > infrastructure. Well, part of the beauty of gist is that you don't have to set up *any* version control client-side if you don't want to. gist is literally copy/paste, finished. They do the VC for you, and you can later use it if you want to, and people can fork it if they want to, but you don't have to. The reason I'd like that is that it would increase the likelihood that small snippets would actually get improved over time in a reusable way, rather than in people's personal collections. But you're absolutely right in that it's probably a lot of work unless you've already created something like github, and most certainly not the first priority at all. Just a wish :) Cheers, f From josh.holbrook at gmail.com Sat Oct 30 21:04:05 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Sat, 30 Oct 2010 17:04:05 -0800 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: Can you use gists directly? Just a thought. --Josh On Sat, Oct 30, 2010 at 5:00 PM, Fernando Perez wrote: > On Sat, Oct 30, 2010 at 5:43 PM, Nathaniel Smith wrote: >> gist.github.com is *really* slick, but... I'm guessing it wouldn't be >> so easy to reimplement for someone who hasn't just implemented github? >> And it seems to me that the sort of people who use git (i.e., people >> with a substantial investment of time and mental energy in "real >> programming") are already pretty well supported by existing >> infrastructure. > > Well, part of the beauty of gist is that you don't have to set up > *any* version control client-side if you don't want to. ?gist is > literally copy/paste, finished. ?They do the VC for you, and you can > later use it if you want to, and people can fork it if they want to, > but you don't have to. > > The reason I'd like that is that it would increase the likelihood that > small snippets would actually get improved over time in a reusable > way, rather than in people's personal collections. > > But you're absolutely right in that it's probably a lot of work unless > you've already created something like github, and most certainly not > the first priority at all. 
?Just a wish :) > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From esko.lehtonen at iki.fi Sat Oct 30 21:06:09 2010 From: esko.lehtonen at iki.fi (Esko Lehtonen) Date: Sun, 31 Oct 2010 03:06:09 +0200 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: References: <4CCAAFCC.8070606@iki.fi> Message-ID: <4CCCC101.9090606@iki.fi> 30.10.2010 0:19, Athanasios Anastasiou kirjoitti: > I have also tried with a simple one line file containing dummy entries like: > "????", "????", 1.0, 0.5, 1200 > > It still finds that buffer object unreadable :-) It seems to be something with greek letters: I made two files, one containing "????", "????", 1.0, 0.5, 1200 and another containing "abcd", "bcde", 1.0, 0.5, 1200 three times. Both files were encoded in UTF-8. Only the file containing greek letters gave the error. Does not sound like correct behavior. My numpy version was the same, 1.3.0. - Esko From sunqiang at gmail.com Sat Oct 30 21:07:32 2010 From: sunqiang at gmail.com (sunqiang) Date: Sun, 31 Oct 2010 09:07:32 +0800 Subject: [SciPy-User] searching SciPy-User email list In-Reply-To: References: Message-ID: I'm using this: http://dir.gmane.org/gmane.comp.python.scientific.user or http://news.gmane.org/gmane.comp.python.scientific.user On Sun, Oct 31, 2010 at 12:30 AM, Anthony Palomba wrote: > Is there a way to search the SciPy-User email list? > > > > > -ap > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From jason-sage at creativetrax.com Sat Oct 30 21:42:49 2010 From: jason-sage at creativetrax.com (Jason Grout) Date: Sat, 30 Oct 2010 20:42:49 -0500 Subject: [SciPy-User] Permutation convention for LU decomposition Message-ID: <4CCCC999.2000203@creativetrax.com> I notice that in Lapack, Matlab, and Mathematica, the LU decomposition routine for a matrix A returns a P, L, and U matrices so that: PA=LU In scipy, however, the LU decomposition routine gives three matrices so that: A=PLU (i.e., the P matrix is the inverse of the P matrix returned by the other software) I'm curious why this design decision was made. Was there precedent with other software to make it A=PLU instead of PA=LU? Is it more natural in applications? I realize it's just a convention. Thanks, Jason From fperez.net at gmail.com Sat Oct 30 22:01:30 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 19:01:30 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 6:04 PM, Joshua Holbrook wrote: > Can you use gists directly? Just a thought. > Well, sure, but gists are buried in all the code at github. One can search for numpy/scipy, but at that point we might as well just use google. I really like the ideas being discussed, so we have something with a specific scipy focus and other things suggested (description, a filename, perhaps the ability to upload a figure, tags, etc). I simply was saying that in addition to those --not instead of those-- gist-like functionality would be great to have. But a tool with a scipy focus is more important than something like gist. 
Cheers, f From josh.holbrook at gmail.com Sat Oct 30 22:11:06 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Sat, 30 Oct 2010 18:11:06 -0800 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: What I mean is more like, what if we could make a tool that basically tracks and organizes gists? Just an idea. I agree that it takes back seat to the essentials of a python file exchange kinda thing, so if this would just make things harder, disregard it. --Josh On Sat, Oct 30, 2010 at 6:01 PM, Fernando Perez wrote: > On Sat, Oct 30, 2010 at 6:04 PM, Joshua Holbrook > wrote: >> Can you use gists directly? Just a thought. >> > > Well, sure, but gists are buried in all the code at github. ?One can > search for numpy/scipy, but at that point we might as well just use > google. > > I really like the ideas being discussed, so we have something with a > specific scipy focus and other things suggested (description, a > filename, perhaps the ability to upload a figure, tags, etc). ?I > simply was saying that in addition to those --not instead of those-- > gist-like functionality would be great to have. > > But a tool with a scipy focus is more important than something like gist. > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Sat Oct 30 22:14:54 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 19:14:54 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 7:11 PM, Joshua Holbrook wrote: > What I mean is more like, what if we could make a tool that basically > tracks and organizes gists? > > Just an idea. I agree that it takes back seat to the essentials of a > python file exchange kinda thing, so if this would just make things > harder, disregard it. > Ah, I think I misunderstood you, sorry. Yes, using gist as the 'backend' could work... I guess if it's possible to use a cross-site authentication solution (OpenID or one of those things, I'm not very familiar with those tools), then one option would be to offer upon upload a checkbox 'Create gist for your contribution?'. If checked, the system could automatically create the relevant gist and display a prominent link to it on the snippet's page, fetch the code for display and download from gist (getting versioning), etc. I don't know how hard/practical it would be, but it's certainly an intriguing idea. Cheers, f From fperez.net at gmail.com Sat Oct 30 22:32:09 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 19:32:09 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCCA356.8090000@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: 2010/10/30 Jochen Schr?der : > Secondly, the argument that most Python code is already BSD, one could > just as well make the argument that most OSS code is GPL so use GPL. 
It's not an argument of majority, it's one of free flow of code across projects and of reciprocity and fairness. The GPL has an asymmetric relationship re. BSD code: gpl projects can incorporate all the bsd code they want, but bsd projects can't incorporate gpl code (without relicensing, which is often impossible when there are many copyright holders, and is in any case a major burden on a project). This asymmetry is at the heart of this discussion: numpy, scipy, matpotlib, mayavi, ipython and most of the open source projects around here are BSD-licensed and it means we can all freely share code across all of them (and we do, very often, freely copy pieces from one to the other as needed, this is not a hypothetical statement). In fact, I relicensed ipython from its early LGPL license (the one that I'm probably happiest with *personally*) to BSD precisely based on this argument of free flow of code across projects, made by John Hunter at the time. And I'm glad I did, as we've been able to copy code at various points in time across projects without any worries. When an author takes a piece of BSD code, modifies or builds upon it, and makes the new work available as GPL (something I've sadly seen done many times), he's most certainly *not* behaving in a spirit of reciprocity towards the author of the original BSD code. The BSD author can no longer benefit from the improvements to his code: despite the fact that those improvements remain open source, they are no longer available to him unless he relinquishes his original license terms and switches to the GPL. I find that practice actually worse than building proprietary extensions on open source code, because when this is done typically companies at least are doing some other business-related stuff that the open source developers are unlikely to engage in. > Furthermore your argument also ignores the fact that if you're using > (ctypes, cython) wrappers around C-code you will probably be bound by > the licence of the C-library so some code might not have a choice. In this case obviously there's no choice and no argument either, but I don't think anyone here is ignoring it, as it's the most basic ground truth of any licensing discussion. > Finally the biggest problem I have is with the notion that forcing a > specific OSS choice onto developers is ok. If someone chooses a licence, > they have a reason to do so and it is their choice. The funny thing is > that the "free software crowd", often gets accused of this, however I've > found that often the BSD crowd is a lot worse, and often quite hostile > towards GPL licensing. Anyway I don't want to start a licence flamewar. Nobody is *forcing* anything onto anyone. A community is free to say: if you want to use our tools, these are our terms. This is a community that shares code under the terms of the BSD license and sets up a website for that purpose. The rest of the whole internet is available to anyone who wishes to publish GPL improvements to Numpy and Scipy, just not on the Scipy servers :) My personal opinion is that in the long run, it would be beneficial to have this 'file exchange' have BSD-only code (or public domain, since employees of the US Federal government as far as I understand must publish their codes under public domain terms). The reason is simple: snippets put there, when good, are prime candidates for integration into numpy/scipy proper. 
It would be a shame, and frankly somewhat absurd, to have a bunch of great codes sitting on the scipy server that we couldn't integrate into scipy. At least it seems so to me... Cheers, f From josef.pktd at gmail.com Sat Oct 30 22:47:49 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 30 Oct 2010 22:47:49 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: On Sat, Oct 30, 2010 at 10:32 PM, Fernando Perez wrote: > 2010/10/30 Jochen Schr?der : >> Secondly, the argument that most Python code is already BSD, one could >> just as well make the argument that most OSS code is GPL so use GPL. > > It's not an argument of majority, it's one of free flow of code across > projects and of reciprocity and fairness. > > The GPL has an asymmetric relationship re. BSD code: gpl projects can > incorporate all the bsd code they want, but bsd projects can't > incorporate gpl code (without relicensing, which is often impossible > when there are many copyright holders, and is in any case a major > burden on a project). ?This asymmetry is at the heart of this > discussion: numpy, scipy, matpotlib, mayavi, ipython and most of the > open source projects around here are BSD-licensed and it means we can > all freely share code across all of them (and we do, very often, > freely copy pieces from one to the other as needed, this is not a > hypothetical statement). ?In fact, I relicensed ipython from its early > LGPL license (the one that I'm probably happiest with *personally*) to > BSD precisely based on this argument of free flow of code across > projects, made by John Hunter at the time. ?And I'm glad I did, as > we've been able to copy code at various points in time across projects > without any worries. > > When an author takes a piece of BSD code, modifies or builds upon it, > and makes the new work available as GPL (something I've sadly seen > done many times), he's most certainly *not* behaving in a spirit of > reciprocity towards the author of the original BSD code. ?The BSD > author can no longer benefit from the improvements to his code: > despite the fact that those improvements remain open source, they are > no longer available to him unless he relinquishes his original license > terms and switches to the GPL. ?I find that practice actually worse > than building proprietary extensions on open source code, because when > this is done typically companies at least are doing some other > business-related stuff that the open source developers are unlikely to > engage in. > >> Furthermore your argument also ignores the fact that if you're using >> (ctypes, cython) wrappers around C-code you will probably be bound by >> the licence of the C-library so some code might not have a choice. > > In this case obviously there's no choice and no argument either, but I > don't think anyone here is ignoring it, as it's the most basic ground > truth of any licensing discussion. > >> Finally the biggest problem I have is with the notion that forcing a >> specific OSS choice onto developers is ok. If someone chooses a licence, >> they have a reason to do so and it is their choice. 
The funny thing is >> that the "free software crowd", often gets accused of this, however I've >> found that often the BSD crowd is a lot worse, and often quite hostile >> towards GPL licensing. Anyway I don't want to start a licence flamewar. > > Nobody is *forcing* anything onto anyone. ?A community is free to say: > if you want to use our tools, these are our terms. ?This is a > community that shares code under the terms of the BSD license and sets > up a website for that purpose. ?The rest of the whole internet is > available to anyone who wishes to publish GPL improvements to Numpy > and Scipy, just not on the Scipy servers :) > > My personal opinion is that in the long run, it would be beneficial to > have this 'file exchange' have BSD-only code (or public domain, since > employees of the US Federal government as far as I understand must > publish their codes under public domain terms). ?The reason is simple: > snippets put there, when good, are prime candidates for integration > into numpy/scipy proper. ?It would be a shame, and frankly somewhat > absurd, to have a bunch of great codes sitting on the scipy server > that we couldn't integrate into scipy. ?At least it seems so to me... Or for integration into other Scipy related packages. That's exactly my opinion on this. Josef > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wardefar at iro.umontreal.ca Sun Oct 31 02:13:15 2010 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Sun, 31 Oct 2010 02:13:15 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On 2010-10-30, at 10:14 PM, Fernando Perez wrote: > Ah, I think I misunderstood you, sorry. Yes, using gist as the > 'backend' could work... I guess if it's possible to use a cross-site > authentication solution (OpenID or one of those things, I'm not very > familiar with those tools), then one option would be to offer upon > upload a checkbox 'Create gist for your contribution?'. If checked, > the system could automatically create the relevant gist and display a > prominent link to it on the snippet's page, fetch the code for display > and download from gist (getting versioning), etc. > > I don't know how hard/practical it would be, but it's certainly an > intriguing idea. I read this and had a slightly different idea: a mechanism for collecting gists and code snippets from sites like github. Basically a gist-indexing service. You submit a link to your gist, maybe some tags, and we fetch data about it, maybe have a fairly rapid moderation queue to make sure people aren't submitting spam. Ideally, we'd be able to use GitHub gists, or whatever the equivalent mechanism is for several other sites (bitbucket, gitorious, pastebin.com). GitHub provides a gists API which makes machine-readable info about the gist available via JSON, so even just an URL and tags would be enough and you could fetch the rest via API. Above all, I really like the gist.github.com model of forkable version-controlled snippets (under the hood). It would be nice if there was a software package that supported this that we could deploy on our own, but barring that, I think organizing topical gists from sites that *do* offer this model is a close second. 
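To make the indexing side concrete, the fetch step could be as small as this (an untested sketch: the endpoint URL and the JSON field names are my assumptions about the gists API, to be adjusted to whatever it actually exposes):

import json
import urllib2  # Python 2; urllib.request on Python 3

def fetch_gist_info(gist_id):
    # Hypothetical JSON endpoint for a single gist.
    url = 'https://api.github.com/gists/%s' % gist_id
    data = json.loads(urllib2.urlopen(url).read())
    # Keep only what an index would need.
    return {'id': gist_id,
            'description': data.get('description'),
            'updated_at': data.get('updated_at'),
            'files': list(data.get('files', {}))}

So a submitter would only ever hand us a gist URL plus some tags, and everything else gets (re)fetched through the API.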
David From fperez.net at gmail.com Sun Oct 31 02:24:20 2010 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 30 Oct 2010 23:24:20 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 11:13 PM, David Warde-Farley wrote: > I read this and had a slightly different idea: a mechanism for collecting gists and code snippets from sites like github. Basically a gist-indexing service. You submit a link to your gist, maybe some tags, and we fetch data about it, maybe have a fairly rapid moderation queue to make sure people aren't submitting spam. That sounds quite doable and a nice balance of features and implementation effort. People could still just paste in their code, as we want to make it easy for anyone who doesn't use github/bitbucket/etc to still be able to contribute with nothing more than a copy/paste. But if they do have already an account with such a service, that would be the preferred mode of operation. Cheers, f From wardefar at iro.umontreal.ca Sun Oct 31 02:35:56 2010 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Sun, 31 Oct 2010 02:35:56 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <4CCCA356.8090000@gmail.com> References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: <0030CC91-56ED-49E2-9A26-C70891D152FB@iro.umontreal.ca> On 2010-10-30, at 6:59 PM, Jochen Schr?der wrote: > Finally the biggest problem I have is with the notion that forcing a > specific OSS choice onto developers is ok. Nobody's forcing them to release their code under any particular license -- merely suggesting that if you want to make your code available *on our servers* it's got to be liberally licensed. This isn't really any different from sites like Wikipedia, where you must consent to your contributions being used under the terms GFDL. If you don't like Wikipedia's terms you are free to make your content available elsewhere on the web, and the same goes for the hypothetical code-sharing site. > If someone chooses a licence, they have a reason to do so and it is their choice. Actually you'd be surprised how often developers who release GPL code are unaware of the implications or license specifics, and just chose it because it's popular. > The funny thing is that the "free software crowd", often gets accused of this, > however I've found that often the BSD crowd is a lot worse, and often quite > hostile towards GPL licensing. That shouldn't really be a surprise. As Fernando pointed out more eloquently than I can, the asymmetric nature of the two licenses means that developers and users of BSD-licensed code have nothing to gain and everything to lose in this situation. While there's no technical or legal grounds for objection, there's good reasons to be annoyed by it. 
David From william.ratcliff at gmail.com Sun Oct 31 03:13:50 2010 From: william.ratcliff at gmail.com (william ratcliff) Date: Sun, 31 Oct 2010 03:13:50 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: Would a first version without the gist feature be useful? We could try to add it in later once we see if the service is actually being used. One question about gist though---what is the overall benefit? If the original poster can edit the code is that sufficient? Also, how does the use of gist effect comments? For example, suppose someone posts some code and people make comments about the original code and those are incorporated--aren't the comments then out of sync with the code? Are the comments then synced with a given revision of the code? Is the fork option useful for our context, or is a simple option to download the current instance of the code sufficient? William On Sun, Oct 31, 2010 at 2:24 AM, Fernando Perez wrote: > On Sat, Oct 30, 2010 at 11:13 PM, David Warde-Farley > wrote: > > I read this and had a slightly different idea: a mechanism for collecting > gists and code snippets from sites like github. Basically a gist-indexing > service. You submit a link to your gist, maybe some tags, and we fetch data > about it, maybe have a fairly rapid moderation queue to make sure people > aren't submitting spam. > > That sounds quite doable and a nice balance of features and > implementation effort. People could still just paste in their code, > as we want to make it easy for anyone who doesn't use > github/bitbucket/etc to still be able to contribute with nothing more > than a copy/paste. But if they do have already an account with such a > service, that would be the preferred mode of operation. > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wardefar at iro.umontreal.ca Sun Oct 31 03:23:56 2010 From: wardefar at iro.umontreal.ca (David Warde-Farley) Date: Sun, 31 Oct 2010 03:23:56 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: <444E7AD7-CC47-43A6-B719-753EE807C3B8@iro.umontreal.ca> On 2010-10-31, at 3:13 AM, william ratcliff wrote: > Would a first version without the gist feature be useful? We could try to add it in later once we see if the service is actually being used. One question about gist though---what is the overall benefit? If the original poster can edit the code is that sufficient? Basically it outsources the version-tracking burden, in addition to making it possible to use version control tools locally if you want to. > Also, how does the use of gist effect comments? For example, suppose someone posts some code and people make comments about the original code and those are incorporated--aren't the comments then out of sync with the code? Are the comments then synced with a given revision of the code? 
The Github API definitely provides either a SHA-1 hash or a last-modified date, and so recording that when a comment is made would make it easy to flag a comment as being "about a previous version". We'd have to do this bookkeeping for locally stored snippets, too, probably just with a timestamp. David From nahumoz at gmail.com Sun Oct 31 04:34:02 2010 From: nahumoz at gmail.com (Oz Nahum Tiram) Date: Sun, 31 Oct 2010 09:34:02 +0100 Subject: [SciPy-User] Order of numpy orperations is not equal to logic (and also octave)? Message-ID: Hi Everyone, So here is something which I discovered lately and is making me wonder. I want to define a scalar which I call Net Absolute Mass Balance Error or in short NAMBE. This NAMBE is the absolute difference between a base vector and another vector, divided by the base vector and multiplied by a hundred, in pseudo-code notation: NAMBE=sum(abs(a-b)/a)*100 When I do it in python, I decided to break the line into two lines so the code is more readable: >>> a=np.array([0.1,0.1,0.1,0.1,0.1]) >>> b=np.array([0.1,0.1,0.1,0.1,0.1])*2 >>> b array([ 0.2, 0.2, 0.2, 0.2, 0.2]) >>> a-b array([-0.1, -0.1, -0.1, -0.1, -0.1]) >>> s=np.sum(abs(a-b)) >>> s 0.5 >>> s/np.sum(a) 1.0 I thought the numpy does everything element wise so if I do it one line, I noticed the the result is different: >>> s=np.sum(abs(a-b)/a) >>> s 5.0 Now If I check myself on the data I have with a octave, I get different results: octave:1> a=[0.1,0.1,0.1,0.1,0.1] a = 0.10000 0.10000 0.10000 0.10000 0.10000 octave:2> b=a*2 b = 0.20000 0.20000 0.20000 0.20000 0.20000 octave:3> sum(a) ans = 0.50000 octave:4> sum(b) ans = 1 octave:5> sum(a-b) ans = -0.50000 octave:6> sum(abs(a-b)) ans = 0.50000 octave:7> s=sum(abs(a-b)) s = 0.50000 octave:8> s/sum(a) ans = 1 octave:9> s=sum(abs(a-b)/a) s = 1.0000 octave:10> s=sum(abs(a-b)/sum(a)) s = 1 Note that the is no difference in the output of 9 and 10 in Octave, although there is in Python ... So, my question is: Why is python is behaving like that ? Which one is right ? Octave or Python ? Thanks in advance, -- Oz Nahum Graduate Student Zentrum f?r Angewandte Geologie Universit?t T?bingen --- Imagine there's no countries it isn't hard to do Nothing to kill or die for And no religion too Imagine all the people Living life in peace -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sun Oct 31 04:46:45 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 31 Oct 2010 09:46:45 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> Message-ID: <20101031084645.GA2079@phare.normalesup.org> On Sun, Oct 31, 2010 at 12:54:03AM +0200, Almar Klein wrote: > That sounds a bit too narrow. From how I see it, we can distinguish three > or four submit-categories (I'm not sure whether the first two should be > combined): > ? * snippets (few lines of code) > ? * modules / scripts (single file) > ? * packages (multiple files) > ? * referenced project (linking to say googlecode or github) IMHO, packages should go to PyPI. We need to work together with the wider Python community to help gain momentum. Ga?l From cournape at gmail.com Sun Oct 31 04:48:49 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 31 Oct 2010 17:48:49 +0900 Subject: [SciPy-User] Order of numpy orperations is not equal to logic (and also octave)? 
In-Reply-To: References: Message-ID: On Sun, Oct 31, 2010 at 5:34 PM, Oz Nahum Tiram wrote: > Hi Everyone, > > So here is something which I discovered lately and is making me wonder. > > I want to define a scalar which I call Net Absolute Mass Balance Error or in > short NAMBE. This NAMBE is the absolute difference between a base vector and > another vector, divided by the base vector and multiplied by a hundred, in > pseudo-code notation: > > NAMBE=sum(abs(a-b)/a)*100 > > > When I do it in python, I decided to break the line into two lines so the > code is more readable: > >>>> a=np.array([0.1,0.1,0.1,0.1,0.1]) > > >>>> b=np.array([0.1,0.1,0.1,0.1,0.1])*2 > > >>>> b > array([ 0.2, ?0.2, ?0.2, ?0.2, ?0.2]) > > >>>> a-b > array([-0.1, -0.1, -0.1, -0.1, -0.1]) > > >>>> s=np.sum(abs(a-b)) > > >>>> s > 0.5 >>>> s/np.sum(a) > > > 1.0 > > I thought the numpy does everything element wise so if I do it one line, I > noticed the the result is different: > >>>> s=np.sum(abs(a-b)/a) This is indeed different than the first line, because you are computing two different things. In numpy, every basic arithmetic operation is *element*-wise. So you have you have two vector x and y, x / y will compute x[0]/y[0], etc... abs(a-b)/a is the vector (a[0]-b[0])/a[0], ...., (a[4]-b[4])/a[4]. > > >>>> s > 5.0 > > Now If I check myself on the data I have with a octave, I get different > results: > > octave:1> a=[0.1,0.1,0.1,0.1,0.1] > > > a = > > ? ?0.10000 ? 0.10000 ? 0.10000 ? 0.10000 ? 0.10000 > > > > octave:2> b=a*2 > > > b = > > ? ?0.20000 ? 0.20000 ? 0.20000 ? 0.20000 ? 0.20000 > > > > octave:3> sum(a) > > > ans = ?0.50000 > octave:4> sum(b) > > > ans = ?1 > octave:5> sum(a-b) > > > ans = -0.50000 > octave:6> sum(abs(a-b)) > > > ans = ?0.50000 > octave:7> s=sum(abs(a-b)) > > > s = ?0.50000 > octave:8> s/sum(a) > > > ans = ?1 > octave:9> s=sum(abs(a-b)/a) > > > s = ?1.0000 > octave:10> s=sum(abs(a-b)/sum(a)) > > > s = ?1 > > Note that the is no difference in the output of 9 and 10 in Octave, although > there is in Python ... So, my question is: Why is python is behaving like > that ? Which one is right ? Octave or Python ? Neither is right or wrong - they both use different definitions of some operators (here /). I am not sure what abs(a-b)/a does in octave, though David From gael.varoquaux at normalesup.org Sun Oct 31 04:50:04 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 31 Oct 2010 09:50:04 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <0030CC91-56ED-49E2-9A26-C70891D152FB@iro.umontreal.ca> References: <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> <0030CC91-56ED-49E2-9A26-C70891D152FB@iro.umontreal.ca> Message-ID: <20101031085004.GB2079@phare.normalesup.org> On Sun, Oct 31, 2010 at 02:35:56AM -0400, David Warde-Farley wrote: > > If someone chooses a licence, they have a reason to do so and it is their choice. > Actually you'd be surprised how often developers who release GPL code are unaware of the implications or license specifics, and just chose it because it's popular. Yes, I have witness this a lot. 
Ga?l From cournape at gmail.com Sun Oct 31 04:50:46 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 31 Oct 2010 17:50:46 +0900 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101031084645.GA2079@phare.normalesup.org> References: <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <20101031084645.GA2079@phare.normalesup.org> Message-ID: On Sun, Oct 31, 2010 at 5:46 PM, Gael Varoquaux wrote: > > IMHO, packages should go to PyPI. We need to work together with the wider > Python community to help gain momentum. Regardless of anyone's opinion about Pypi, I agree we should not try to conflate code snippets /cookbook hosting and "real" projects hosting. David From gael.varoquaux at normalesup.org Sun Oct 31 04:57:00 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 31 Oct 2010 09:57:00 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: Message-ID: <20101031085700.GC2079@phare.normalesup.org> On Sun, Oct 31, 2010 at 03:13:50AM -0400, william ratcliff wrote: > Would a first version without the gist feature be useful? IMHO: Yes! I think the gist idea is excellent. Gist is awesome. I should start using it in my technical blogposts. However, I side with Fernando that the core scipy-specific functionality is more important than the version-control sweetness of gist. Ga?l, from the (crowded) peanut gallery From william.ratcliff at gmail.com Sun Oct 31 06:05:26 2010 From: william.ratcliff at gmail.com (william ratcliff) Date: Sun, 31 Oct 2010 06:05:26 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101031084645.GA2079@phare.normalesup.org> References: <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <20101031084645.GA2079@phare.normalesup.org> Message-ID: I don't think anyone's considering forking from PyPi :> However, I think that there is some value in having a central repository for code snippets that are relevant to scientific programmers. Especially something that is easy for people to contribute to. On Sun, Oct 31, 2010 at 4:46 AM, Gael Varoquaux < gael.varoquaux at normalesup.org> wrote: > On Sun, Oct 31, 2010 at 12:54:03AM +0200, Almar Klein wrote: > > That sounds a bit too narrow. From how I see it, we can distinguish > three > > or four submit-categories (I'm not sure whether the first two should > be > > combined): > > * snippets (few lines of code) > > * modules / scripts (single file) > > * packages (multiple files) > > * referenced project (linking to say googlecode or github) > > IMHO, packages should go to PyPI. We need to work together with the wider > Python community to help gain momentum. > > Ga?l > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Sun Oct 31 06:18:14 2010 From: e.antero.tammi at gmail.com (eat) Date: Sun, 31 Oct 2010 10:18:14 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?Order_of_numpy_orperations_is_not_equal_to?= =?utf-8?q?_logic_=28and=09also_octave=29=3F?= References: Message-ID: >Oz Nahum Tiram gmail.com> writes: >octave:9> s=sum(abs(a-b)/a) Hi, Please note that / is not an element-wise operator in Octave. Rather it is "right division" like (pinv(a')* abs(a- b)')'. Octaves element-wise operator ./ may agree with numpy. 
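In numpy terms the two readings can be put side by side (a rough sketch; numpy's / on arrays is always element-wise, so Octave's right division has to be spelled out through pinv):

import numpy as np

a = np.array([0.1]*5)
b = 2*a
d = np.abs(a - b)
print np.sum(d / a)   # 5.0: element-wise division, what numpy's one-liner computes
# Octave's d/a is right division: solve x*a = d in the least-squares sense,
# i.e. x = d*pinv(a), which for these row vectors collapses to a scalar:
print np.dot(d, np.linalg.pinv(np.atleast_2d(a)))   # [ 1.]: Octave's answer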
Regards, eat From josef.pktd at gmail.com Sun Oct 31 07:28:47 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 31 Oct 2010 07:28:47 -0400 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101031085700.GC2079@phare.normalesup.org> References: <20101031085700.GC2079@phare.normalesup.org> Message-ID: On Sun, Oct 31, 2010 at 4:57 AM, Gael Varoquaux wrote: > On Sun, Oct 31, 2010 at 03:13:50AM -0400, william ratcliff wrote: >> ? ?Would a first version without the gist feature be useful? > > IMHO: Yes! > > I think the gist idea is excellent. Gist is awesome. I should start using > it in my technical blogposts. However, I side with Fernando that the core > scipy-specific functionality is more important than the version-control > sweetness of gist. One advantage of the matlab fileexchange is it's permanence. The problem I face quite often with topical link collections is that the original author moves to other things, changes jobs, internet hosting services and so on, and the source behind the links disappear. So, even when a gist or similar option for source controlled recipes is available, I would prefer if there is a local copy of the original code available. This way it would still be available 5 years later. Also, peanut gallery, but I think I will be a user and post my recipes there instead of "spamming" the scipy mailing list with it. Josef > > Ga?l, from the (crowded) peanut gallery > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From pav at iki.fi Sun Oct 31 07:46:22 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 31 Oct 2010 11:46:22 +0000 (UTC) Subject: [SciPy-User] Permutation convention for LU decomposition References: <4CCCC999.2000203@creativetrax.com> Message-ID: Sat, 30 Oct 2010 20:42:49 -0500, Jason Grout wrote: > I notice that in Lapack, Matlab, and Mathematica, the LU decomposition > routine for a matrix A returns a P, L, and U matrices so that: > > PA=LU LAPACK returns the P L U decomposition. Quote from the relevant manual page: """ DGETRF computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges. The factorization has the form A = P * L * U where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). This is the right-looking Level 3 BLAS version of the algorithm. """ -- Pauli Virtanen From sebastian.walter at gmail.com Sun Oct 31 08:18:32 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sun, 31 Oct 2010 13:18:32 +0100 Subject: [SciPy-User] Permutation convention for LU decomposition In-Reply-To: <4CCCC999.2000203@creativetrax.com> References: <4CCCC999.2000203@creativetrax.com> Message-ID: On Sun, Oct 31, 2010 at 2:42 AM, Jason Grout wrote: > I notice that in Lapack, Matlab, and Mathematica, the LU decomposition > routine for a matrix A returns a P, L, and U matrices so that: > > PA=LU > > In scipy, however, the LU decomposition routine gives three matrices so > that: > > A=PLU > > (i.e., the P matrix is the inverse of the P matrix returned by the other > software) P should be orthogonal and thus P^{-1} = P^T so PA = LU or A = P L U shouldn't be a big issue. > > I'm curious why this design decision was made. ?Was there precedent with > other software to make it A=PLU instead of PA=LU? ?Is it more natural in > applications? 
?I realize it's just a convention. Algorithms like dgetrf (in LAPACK) do not return a matrix P but an IPIV array that stores _successive_ row interchanges. E.g.: In [1]: import numpy In [2]: import scipy.linalg In [3]: A = numpy.random.rand(3,3) In [4]: print scipy.linalg.lapack.clapack.dgetrf(A.copy()) (array([[ 0.73664967, 0.2875308 , 0.67223657], [ 0.82181513, 0.62191285, -0.00461938], [ 0.13967213, 0.42672366, -0.02721027]]), array([1, 1, 2], dtype=int32), 0) In [5]: print scipy.linalg.lu(A.copy()) (array([[ 0., 1., 0.], [ 1., 0., 0.], [ 0., 0., 1.]]), array([[ 1. , 0. , 0. ], [ 0.82181513, 1. , 0. ], [ 0.13967213, 0.42672366, 1. ]]), array([[ 0.73664967, 0.2875308 , 0.67223657], [ 0. , 0.62191285, -0.00461938], [ 0. , 0. , -0.02721027]])) The array([1, 1, 2], dtype=int32) are the pivots, typically denoted IPIV in LAPACK. I.e., you first interchange row 0 <-> 1, then 1 <-> 1 and finally 2<->2. In total you have interchanged row 0 and 1. Row 2 remains unchanged. So, arguably the natural representation of the permutation is IPIV and not the permutation matrix P. I'm not familiar with the scipy.linalg.lu implementation. If it is based on dgetref it probably computes P from IPIV. Sebastian > > Thanks, > > Jason > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From almar.klein at gmail.com Sun Oct 31 09:27:13 2010 From: almar.klein at gmail.com (Almar Klein) Date: Sun, 31 Oct 2010 14:27:13 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: <20101031084645.GA2079@phare.normalesup.org> References: <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <20101031084645.GA2079@phare.normalesup.org> Message-ID: On 31 October 2010 09:46, Gael Varoquaux wrote: > On Sun, Oct 31, 2010 at 12:54:03AM +0200, Almar Klein wrote: > > That sounds a bit too narrow. From how I see it, we can distinguish > three > > or four submit-categories (I'm not sure whether the first two should > be > > combined): > > * snippets (few lines of code) > > * modules / scripts (single file) > > * packages (multiple files) > > * referenced project (linking to say googlecode or github) > > IMHO, packages should go to PyPI. We need to work together with the wider > Python community to help gain momentum. > Fair enough, so packages are included by reference only. Almar -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Sun Oct 31 10:01:14 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Sun, 31 Oct 2010 09:01:14 -0500 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> <20101030174218.GB17768@phare.normalesup.org> <4CCCA356.8090000@gmail.com> Message-ID: On Sat, Oct 30, 2010 at 9:32 PM, Fernando Perez wrote: > 2010/10/30 Jochen Schr?der : >> Secondly, the argument that most Python code is already BSD, one could >> just as well make the argument that most OSS code is GPL so use GPL. > > It's not an argument of majority, it's one of free flow of code across > projects and of reciprocity and fairness. > > The GPL has an asymmetric relationship re. 
BSD code: gpl projects can > incorporate all the bsd code they want, but bsd projects can't > incorporate gpl code (without relicensing, which is often impossible > when there are many copyright holders, and is in any case a major > burden on a project). ?This asymmetry is at the heart of this > discussion: numpy, scipy, matpotlib, mayavi, ipython and most of the > open source projects around here are BSD-licensed and it means we can > all freely share code across all of them (and we do, very often, > freely copy pieces from one to the other as needed, this is not a > hypothetical statement). ?In fact, I relicensed ipython from its early > LGPL license (the one that I'm probably happiest with *personally*) to > BSD precisely based on this argument of free flow of code across > projects, made by John Hunter at the time. ?And I'm glad I did, as > we've been able to copy code at various points in time across projects > without any worries. > > When an author takes a piece of BSD code, modifies or builds upon it, > and makes the new work available as GPL (something I've sadly seen > done many times), he's most certainly *not* behaving in a spirit of > reciprocity towards the author of the original BSD code. ?The BSD > author can no longer benefit from the improvements to his code: > despite the fact that those improvements remain open source, they are > no longer available to him unless he relinquishes his original license > terms and switches to the GPL. ?I find that practice actually worse > than building proprietary extensions on open source code, because when > this is done typically companies at least are doing some other > business-related stuff that the open source developers are unlikely to > engage in. > >> Furthermore your argument also ignores the fact that if you're using >> (ctypes, cython) wrappers around C-code you will probably be bound by >> the licence of the C-library so some code might not have a choice. > > In this case obviously there's no choice and no argument either, but I > don't think anyone here is ignoring it, as it's the most basic ground > truth of any licensing discussion. > >> Finally the biggest problem I have is with the notion that forcing a >> specific OSS choice onto developers is ok. If someone chooses a licence, >> they have a reason to do so and it is their choice. The funny thing is >> that the "free software crowd", often gets accused of this, however I've >> found that often the BSD crowd is a lot worse, and often quite hostile >> towards GPL licensing. Anyway I don't want to start a licence flamewar. > > Nobody is *forcing* anything onto anyone. ?A community is free to say: > if you want to use our tools, these are our terms. ?This is a > community that shares code under the terms of the BSD license and sets > up a website for that purpose. ?The rest of the whole internet is > available to anyone who wishes to publish GPL improvements to Numpy > and Scipy, just not on the Scipy servers :) > > My personal opinion is that in the long run, it would be beneficial to > have this 'file exchange' have BSD-only code (or public domain, since > employees of the US Federal government as far as I understand must > publish their codes under public domain terms). ?The reason is simple: > snippets put there, when good, are prime candidates for integration > into numpy/scipy proper. ?It would be a shame, and frankly somewhat > absurd, to have a bunch of great codes sitting on the scipy server > that we couldn't integrate into scipy. ?At least it seems so to me... 
> > Cheers, > > f I support Fernando's view of the licensing because the whole point of this 'file exchange' is about sharing.That is make life a little easier for those that are at some mental block (for whatever reason). Personal views are really are irrelevant in a community where people are freely helping other people without restriction by providing code (either complete or fixing errors). For that reason, public domain works best for 'trivial' code and BSD for more complex code since both allow simple inclusion into numpy/scipy. Yet for really complex code this is probably not the place and there are other options like scikits for that. Bruce From bsouthey at gmail.com Sun Oct 31 10:30:38 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Sun, 31 Oct 2010 09:30:38 -0500 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: <4CCCC101.9090606@iki.fi> References: <4CCAAFCC.8070606@iki.fi> <4CCCC101.9090606@iki.fi> Message-ID: On Sat, Oct 30, 2010 at 8:06 PM, Esko Lehtonen wrote: > 30.10.2010 0:19, Athanasios Anastasiou kirjoitti: >> I have also tried with a simple one line file containing dummy entries like: >> "????", "????", 1.0, 0.5, 1200 >> >> It still finds that buffer object unreadable :-) > It seems to be something with greek letters: > > I made two files, one containing > > "????", "????", 1.0, 0.5, 1200 > > and another containing > > "abcd", "bcde", 1.0, 0.5, 1200 > > three times. Both files were encoded in UTF-8. > > Only the file containing greek letters gave the error. Does not sound like correct behavior. > > My numpy version was the same, 1.3.0. > > - Esko > > First this is a numpy issue that should be on the numpy list - a small self-contained example will help. About 6 months ago Pierre added Changeset 8416 that had a fix for ticket 1473 (http://projects.scipy.org/numpy/ticket/1473). This should be in numpy 1.5. If that does not help, please either add to that ticket or create a new one. Patches are welcome that allow numpy to read unicode file in Python 2.4+ (as I recognize that it is easier in Python 3). Bruce From athanastasiou at gmail.com Sun Oct 31 10:50:28 2010 From: athanastasiou at gmail.com (Athanasios Anastasiou) Date: Sun, 31 Oct 2010 14:50:28 +0000 Subject: [SciPy-User] Loading a CSV with loadtxt In-Reply-To: References: <4CCAAFCC.8070606@iki.fi> <4CCCC101.9090606@iki.fi> Message-ID: Hello Many thanks to Esko and Bruce for your answers. Esko, i have mixed entries (some of them are in Greek, others in English), if it works for UTF it should work for all UTF :-) Bruce, you are right, sorry for contributing to the "noise" of the list :-) I will have a look at Numpy 1.5 and whether the problem is rectified there and possibly follow it up with the Numpy list. All the best Athanasios 2010/10/31 Bruce Southey : > On Sat, Oct 30, 2010 at 8:06 PM, Esko Lehtonen wrote: >> 30.10.2010 0:19, Athanasios Anastasiou kirjoitti: >>> I have also tried with a simple one line file containing dummy entries like: >>> "????", "????", 1.0, 0.5, 1200 >>> >>> It still finds that buffer object unreadable :-) >> It seems to be something with greek letters: >> >> I made two files, one containing >> >> "????", "????", 1.0, 0.5, 1200 >> >> and another containing >> >> "abcd", "bcde", 1.0, 0.5, 1200 >> >> three times. Both files were encoded in UTF-8. >> >> Only the file containing greek letters gave the error. Does not sound like correct behavior. >> >> My numpy version was the same, 1.3.0. 
>> >> - Esko
>> >>
>
> First this is a numpy issue that should be on the numpy list - a small
> self-contained example will help.
>
> About 6 months ago Pierre added Changeset 8416 that had a fix for
> ticket 1473 (http://projects.scipy.org/numpy/ticket/1473). This should
> be in numpy 1.5. If that does not help, please either add to that
> ticket or create a new one.
>
> Patches are welcome that allow numpy to read unicode file in Python
> 2.4+ (as I recognize that it is easier in Python 3).
>
> Bruce
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
From manini.luca at tiscali.it Sun Oct 31 11:03:16 2010 From: manini.luca at tiscali.it (Luca Manini) Date: Sun, 31 Oct 2010 16:03:16 +0100 Subject: [SciPy-User] Matrix indexing and updating Message-ID: <19661.34100.754738.920137@asterix.luke.org>

Hi everybody,

I'm new to this list and here comes my first question about matrix indexing and updating (in scipy, of course).

I'm starting to write some code (for my wife) to solve finite element problems. One typical action is to access (both for reading and writing) some "sparse" submatrices of a given BIG matrix. The submatrices are defined by two lists of indices, one for the rows and one for the cols.

I've two problems with that.

The first one is how to ACCESS (read) the matrix entries from two lists of indices, in "pseudo code":

mat = scipy.matrix( range(0,100) )
mat( [1,3,4], [4,5,9] )

in order to get all the elements mat[i,j] for i in [1,3,4] and j in [4,5,9].

The second is how to CHANGE those elements. I managed to solve the first problem (but the syntax seems to me "uglier than needed") but not the second. Here goes what I can do....

Defining and filling the matrix is easy enough.

import scipy
mat = scipy.matrix( range(0,100) )
mat.shape = (10,10)
print mat

[[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 25 26 27 28 29] [30 31 32 33 34 35 36 37 38 39] [40 41 42 43 44 45 46 47 48 49] [50 51 52 53 54 55 56 57 58 59] [60 61 62 63 64 65 66 67 68 69] [70 71 72 73 74 75 76 77 78 79] [80 81 82 83 84 85 86 87 88 89] [90 91 92 93 94 95 96 97 98 99]]

Getting a submatrix with CONTIGUOUS indices is also quite easy:

print mat[0:3,4:6]

[[ 4 5] [14 15] [24 25]]

Getting a submatrix with NOT contiguous indices is more difficult, partly because the indices come from a "vector" that is really a scipy (1,N) matrix (and scipy "insists" on keeping two indices).

rr = scipy.matrix('[2 4 5]')
cc = scipy.matrix('[1,2]')
print rr
print cc
print mat[[1,2,5],:][:,[4,8]]

--- rr: [[2 4 5]] --- cc: [[1 2]] --- mat: [[14 18] [24 28] [54 58]]

Now I want to add a given matrix to the submatrix I've "selected".

off = scipy.matrix( [1000] * 6 )
off.shape = (3,2)
print '--- off\n', off
print '--- mat + off'
mat[[1,2,5],:][:,[4,8]] + off
print '--- try mat += off ... mat'
mat[[1,2,5],:][:,[4,8]] += off
print mat[[1,2,5],:][:,[4,8]]

--- off [[1000 1000] [1000 1000] [1000 1000]] --- mat + off [[1014 1018] [1024 1028] [1054 1058]] --- try mat += off ... mat [[14 18] [24 28] [54 58]]

So it seems that the submatrix is a kind of "read-only view" that I cannot update, but updating is just what I need!!! Any hint?
Thanks in advance, Luca

PS: the matrix (mat) that I'm "constructing" will be (in the future) BIG (1 million x 1 million, up to 100K x 100K is already feasible in MATLAB) SPARSE (some 20 or 30 not null elements per row) and COMPLEX, and the associated linear problem will need to be solved with some iterative method (GMRES or similar) .... but that's another story :)

From a.h.jaffe at gmail.com Sun Oct 31 11:33:11 2010 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Sun, 31 Oct 2010 15:33:11 +0000 Subject: [SciPy-User] Order of numpy orperations is not equal to logic (and also octave)? In-Reply-To: References: Message-ID: <4CCD8C37.3020903@gmail.com>

On 31/10/2010 08:34, Oz Nahum Tiram wrote:
> Hi Everyone,
>
> So here is something which I discovered lately and is making me wonder.
>
> I want to define a scalar which I call Net Absolute Mass Balance Error
> or in short NAMBE. This NAMBE is the absolute difference between a base
> vector and another vector, divided by the base vector and multiplied by
> a hundred, in pseudo-code notation:
>
> |NAMBE=sum(abs(a-b)/a)*100
>
> When I do it in python, I decided to break the line into two lines so
> the code is more readable:
>
> |>>> a=np.array([0.1,0.1,0.1,0.1,0.1])
>
>>>> b=np.array([0.1,0.1,0.1,0.1,0.1])*2
>
>>>> b
> array([ 0.2, 0.2, 0.2, 0.2, 0.2])
>
>>>> a-b
> array([-0.1, -0.1, -0.1, -0.1, -0.1])
>
>>>> s=np.sum(abs(a-b))
>
>>>> s
> 0.5
>>>> s/np.sum(a)
>
> 1.0
>
> I thought the numpy does everything element wise so if I do it one line,
> I noticed the the result is different:
>
> |>>> s=np.sum(abs(a-b)/a)
>
>>>> s
> 5.0

Math error here: sum((a-b)/a) IS NOT EQUAL TO sum(a-b)/sum(a).
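With your own vectors (a quick check anyone can rerun; nothing here beyond the arrays already posted):

import numpy as np

a = np.array([0.1, 0.1, 0.1, 0.1, 0.1])
b = 2*a
print np.sum(np.abs(a - b) / a)           # 5.0: sum of the element-wise ratios
print np.sum(np.abs(a - b)) / np.sum(a)   # 1.0: ratio of the sums

Decide which of the two quantities NAMBE is meant to be, and both languages will agree.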
Andrew From charlesr.harris at gmail.com Sun Oct 31 14:13:11 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 31 Oct 2010 12:13:11 -0600 Subject: [SciPy-User] Permutation convention for LU decomposition In-Reply-To: References: <4CCCC999.2000203@creativetrax.com> Message-ID: On Sun, Oct 31, 2010 at 6:18 AM, Sebastian Walter < sebastian.walter at gmail.com> wrote: > On Sun, Oct 31, 2010 at 2:42 AM, Jason Grout > wrote: > > I notice that in Lapack, Matlab, and Mathematica, the LU decomposition > > routine for a matrix A returns a P, L, and U matrices so that: > > > > PA=LU > > > > In scipy, however, the LU decomposition routine gives three matrices so > > that: > > > > A=PLU > > > > (i.e., the P matrix is the inverse of the P matrix returned by the other > > software) > > P should be orthogonal and thus P^{-1} = P^T > so PA = LU or A = P L U shouldn't be a big issue. > > > > > I'm curious why this design decision was made. Was there precedent with > > other software to make it A=PLU instead of PA=LU? Is it more natural in > > applications? I realize it's just a convention. > > Algorithms like dgetrf (in LAPACK) > do not return a matrix P but an IPIV array that stores _successive_ > row interchanges. > E.g.: > > In [1]: import numpy > > In [2]: import scipy.linalg > > In [3]: A = numpy.random.rand(3,3) > > In [4]: print scipy.linalg.lapack.clapack.dgetrf(A.copy()) > (array([[ 0.73664967, 0.2875308 , 0.67223657], > [ 0.82181513, 0.62191285, -0.00461938], > [ 0.13967213, 0.42672366, -0.02721027]]), array([1, 1, 2], > dtype=int32), 0) > > In [5]: print scipy.linalg.lu(A.copy()) > (array([[ 0., 1., 0.], > [ 1., 0., 0.], > [ 0., 0., 1.]]), array([[ 1. , 0. , 0. ], > [ 0.82181513, 1. , 0. ], > [ 0.13967213, 0.42672366, 1. ]]), array([[ 0.73664967, > 0.2875308 , 0.67223657], > [ 0. , 0.62191285, -0.00461938], > [ 0. , 0. , -0.02721027]])) > > > The array([1, 1, 2], dtype=int32) are the pivots, typically denoted > IPIV in LAPACK. I.e., you first interchange row 0 <-> 1, then 1 <-> 1 > and finally 2<->2. In total you have interchanged > row 0 and 1. Row 2 remains unchanged. > > So, arguably the natural representation of the permutation is IPIV and > not the permutation matrix P. > I'm not familiar with the scipy.linalg.lu implementation. If it is > based on dgetref it probably computes P from IPIV. > > The matrix form of P could be a problem with really big matrices. Same with the orthogonal matrices in SVD and QR, which are stored internally as Householder reflections. I don't see an easy way around this without complicating matters at the user level. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Sun Oct 31 15:36:46 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 31 Oct 2010 12:36:46 -0700 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: <8c959326-0891-426e-b945-88793388dad5@c20g2000yqj.googlegroups.com> <4CCB459E.4060104@gmail.com> <20101030120748.GA17768@phare.normalesup.org> Message-ID: On Sat, Oct 30, 2010 at 11:13 PM, David Warde-Farley wrote: > Ideally, we'd be able to use GitHub gists, or whatever the equivalent mechanism is for several other sites (bitbucket, gitorious, pastebin.com). GitHub provides a gists API which makes machine-readable info about the gist available via JSON, so even just an URL and tags would be enough and you could fetch the rest via API. 
>From where I sit, the gallery above the peanut gallery, I think it is disheartening that we all use github instead of a FOSS version like gitorious. If the linux kernel did the same (continued to use proprietary package management) we wouldn't even have git. I wonder what our choice to use github (along with what seems like a million other developers) means for the prospects of a gist-like feature in gitorious (which we could install on our own servers). From gael.varoquaux at normalesup.org Sun Oct 31 15:50:48 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 31 Oct 2010 20:50:48 +0100 Subject: [SciPy-User] Central File Exchange for SciPy In-Reply-To: References: Message-ID: <20101031195048.GA22536@phare.normalesup.org> On Sun, Oct 31, 2010 at 12:36:46PM -0700, Keith Goodman wrote: > From where I sit, the gallery above the peanut gallery, I think it is > disheartening that we all use github instead of a FOSS version like > gitorious. If the linux kernel did the same (continued to use > proprietary package management) we wouldn't even have git. I wonder > what our choice to use github (along with what seems like a million > other developers) means for the prospects of a gist-like feature in > gitorious (which we could install on our own servers). While I have (very) mixed feeling for git, github is really awesome. The git usability is, in my eyes, horrendous. It is compensated by a fantastic internal design, and great power. However, I wouldn't be using git if there was not github: its usability is fantastic and unlike git there is almost no learning curve[*]. I guess there is a lesson to be learned here: free software tends to make first good design and worry about normal users only very late down the pipeline. We have seen this story repeat over and over. So many people use Apple computers, even in the free software community. Gael [*] I must confess that I hadn't looked at gitorious for a while, and it seems that it is making a lot of progress in terms of being usable. It's probably doing better than most code sharing platforms. There is value in competition! From emanuele at relativita.com Sun Oct 31 16:21:07 2010 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sun, 31 Oct 2010 21:21:07 +0100 Subject: [SciPy-User] Matrix indexing and updating In-Reply-To: <19661.34100.754738.920137@asterix.luke.org> References: <19661.34100.754738.920137@asterix.luke.org> Message-ID: <4CCDCFB3.9040009@relativita.com> Hi Luca, If I understand you problem correctly, maybe this example can help you: ---- In [1]: import numpy as np In [2]: mat = np.arange(100).reshape(10,10) In [3]: rows = [[1],[3],[4]] In [4]: columns = [4,5,9] In [5]: mat[rows,columns] Out[5]: array([[14, 15, 19], [34, 35, 39], [44, 45, 49]]) In [6]: mat[rows,columns]=1 In [7]: mat Out[7]: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 1, 1, 16, 17, 18, 1], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 1, 1, 36, 37, 38, 1], [40, 41, 42, 43, 1, 1, 46, 47, 48, 1], [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) ---- Best, Emanuele On 10/31/2010 04:03 PM, Luca Manini wrote: > Hi everybody, > > I'm new to this list and here comes my first question about matrix > indexing and updating (in scipy, of course). > > I'm starting writing some code (for my wife) to solve finite elements > problems. 
From nahumoz at gmail.com  Sun Oct 31 16:30:14 2010
From: nahumoz at gmail.com (Oz Nahum Tiram)
Date: Sun, 31 Oct 2010 21:30:14 +0100
Subject: [SciPy-User] Order of numpy operations is not equal to logic (and also octave)?
Message-ID: 

Continuing this subject, I have a feeling that numpy behaves in a very
un-intuitive way for me. Here is an example that someone gave me on
Stackoverflow.com:

>>> import numpy as np
>>> a = np.array([1,0])
>>> b = np.array([-1,1])

>>> np.sum(a)/np.sum(b)
0
>>> np.sum(a)/b
array([-1,  1])
>>> np.sum(a)/0
0

Octave warns when dividing by zero:

octave:1> a=[1,0]
a =

   1   0

octave:2> b=[-1,1]
b =

  -1   1

octave:3> sum(a)/sum(b)
warning: division by zero
ans = Inf
octave:4> sum(a)./sum(b)
warning: division by zero
ans = Inf
octave:5> sum(a)./b
ans =

  -1   1

I think it is very important that these differences are spelled out for
newcomers moving from matlab/octave to python and numpy. I work
interchangeably with both, and this is quite tricky...

--
Oz Nahum
Graduate Student
Zentrum für Angewandte Geologie
Universität Tübingen

---

Imagine there's no countries
it isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace

From josef.pktd at gmail.com  Sun Oct 31 16:39:55 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 31 Oct 2010 16:39:55 -0400
Subject: [SciPy-User] Order of numpy operations is not equal to logic (and also octave)?
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 31, 2010 at 4:30 PM, Oz Nahum Tiram wrote:
> Continuing this subject, I have a feeling that numpy behaves in a very
> un-intuitive way for me.
> Here is an example that someone gave me on Stackoverflow.com:
>
>>>> import numpy as np
>>>> a = np.array([1,0])
>>>> b = np.array([-1,1])
>
>>>> np.sum(a)/np.sum(b)
> 0
>>>> np.sum(a)/b
> array([-1,  1])
>>>> np.sum(a)/0
> 0

That's integer division, not floating point division -- something to get
used to in python below version 3, or switch to
from __future__ import division.
It's python specific, not numpy specific.

Josef
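To see Josef's suggestion in action, here is a small sketch, assuming
numpy's default error settings (np.seterr controls whether a zero
division warns, prints, or raises):

from __future__ import division  # '/' becomes true division, as in Python 3
import numpy as np

a = np.array([1, 0])
b = np.array([-1, 1])

print np.sum(a) / np.sum(b)  # inf, with a divide-by-zero warning -- the Octave behaviour
print np.sum(a) / b          # [-1.  1.], floating point instead of truncated integers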
From charlesr.harris at gmail.com  Sun Oct 31 16:40:26 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 31 Oct 2010 14:40:26 -0600
Subject: [SciPy-User] Order of numpy operations is not equal to logic (and also octave)?
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 31, 2010 at 2:30 PM, Oz Nahum Tiram wrote:

> Continuing this subject, I have a feeling that numpy behaves in a very
> un-intuitive way for me. Here is an example that someone gave me on
> Stackoverflow.com:
>
> >>> import numpy as np
> >>> a = np.array([1,0])
> >>> b = np.array([-1,1])
>
> >>> np.sum(a)/np.sum(b)
> 0
> >>> np.sum(a)/b
> array([-1,  1])
> >>> np.sum(a)/0
> 0

It's a version problem; current numpy does warn here:

In [1]: a = np.array([1,0])

In [2]: b = np.array([-1,1])

In [3]: np.sum(a)/np.sum(b)
Warning: divide by zero encountered in long_scalars
Out[3]: 0

Chuck

From washakie at gmail.com  Sun Oct 31 19:13:11 2010
From: washakie at gmail.com (John)
Date: Mon, 1 Nov 2010 00:13:11 +0100
Subject: [SciPy-User] HDF4, HDF5, netcdf solutions -- PyNIO/PyNGL or CDAT or ??
Message-ID: 

Hello all,

I'm writing to ask what people are generally relying on as a 'toolchain'
for reading/writing netcdf and hdf4/5 files. Also perhaps grib?

I would expect PyNIO to solve most of my problems, but I've always
encountered problems building it. I have now tried the binaries and get
the silly error that it is looking for libgfortran.so.1, while my system
has libgfortran.so.3 (I have no admin rights). These kinds of errors and
headaches seem to plague me whenever I try to build hdf4 and link it to
netcdf, etc.

I've encountered similar challenges when trying to build CDAT from
source. I've always managed to get close, but in the end something
always causes a hang-up.

I have finally gotten netcdf4-python built, and it seems to work, but it
does not seem to be able to read hdf4 files. I have been able to read
hdf5, however, which is terrific.

Mostly, this is a call for comments / thoughts on the tools that are out
there, and what people generally recommend. Perhaps, if anyone has any
pro/con feelings about the investment in learning CDAT vs. PyNIO/PyNGL,
that would also be welcome.

Regards,
john
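For reference, the netcdf4-python route John describes looks roughly like
the sketch below; the file name 'data.nc' and variable name 'temperature'
are made-up placeholders:

from netCDF4 import Dataset

ds = Dataset('data.nc', 'r')           # netCDF-4 files are HDF5 underneath
print ds.variables.keys()              # inspect the variables in the file
temp = ds.variables['temperature'][:]  # read a whole variable as a numpy array
ds.close()

HDF4 is an unrelated on-disk format despite the name, which is why
netcdf4-python cannot open those files; pyhdf (or PyNIO built against the
HDF4 libraries) is the usual route there.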