From ferrell at diablotech.com Tue Dec 1 00:32:14 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Mon, 30 Nov 2009 22:32:14 -0700 Subject: [SciPy-User] scikits.timeseries question In-Reply-To: <17E9CA7E-7446-4202-997C-9AB5081977C0@gmail.com> References: <4B145C25.7040303@noaa.gov> <9DF1735B-BD7A-456A-8DD2-08C281440221@gmail.com> <4B146218.9000305@noaa.gov> <4B146E79.7090407@noaa.gov> <2B27D020-8DE3-4BFE-8B84-936E4A1B9FBE@diablotech.com> <17E9CA7E-7446-4202-997C-9AB5081977C0@gmail.com> Message-ID: <9F875DA8-211C-45FF-B9F4-935C7588CCDF@diablotech.com> On Nov 30, 2009, at 9:03 PM, Pierre GM wrote: > On Nov 30, 2009, at 9:59 PM, Robert Ferrell wrote: > >> This seems natural to me, (array + Date = array) although I do have >> to >> include an extra line sometimes to get a DateArray if I need it. >> If I >> need a timeseries, sometimes I can skip making the DateArray >> explicitly. > > Well, keep in mind that Date was implemented a few years ago > already, far before the new datetime64 dtype, and it was the easiest > way we had to define a new datatype (well, a kind of datatype). I'll > check how we can merge the two approaches when I'll have some time. > Anyhow, in practice, a Date object will be seen as a np.object by > numpy, and you end up having a ndarray with a np.object dtype. Since a DateArray has more methods than a bare np.array, it would surprise me if an array of Dates were a DateArray. That's why it seems natural that np.array + Date = np.array. I think that behavior will continue, right? In any case, I find the timeseries scikit very useful, and while there's always room for improvement, as it is you & Matt did a great job. > >> Is the issue that sd is a Date and not a DateArray? You can always >> make a DataArray with sd, of the correct length, and then add to >> that: >> >> In [83]: sd = ts.Date('d', '2001-01-01') >> >> In [84]: d1 = ts.date_array(zeros(4) + sd) > > Wow, that's overkill ! 
Just make sd a DateArray: >>>> np.arange(4) + ts.DateArray(sd) > > Now, because DateArray is a subclass of ndarray with a higher > priority, its __add__ method takes over and the output is a DateArray. Of course. I was just trying to show something by example. (For silly reasons, I avoid the np.array + DateArray construct. A few times I have mistakenly made DateArray + DateArray constructs (forgetting I'd already promoted the first argument to a DateArray), which rarely gives me what I want/expect. Silly, yes, but I code with the brain I have, not the brain I wish I had:) > > >> >>> and I'm still confused about what this means: >>> >>>>> In [43]: da = ts.date_array((1,2,3,4), start_date=sd) >> >> This throws an exception for me. >> >> : year=1 is before 1900; the datetime >> strftime() methods require year >= 1900 > > > What version are you using ? And anyway, you get the exception only > if you try to print it (as strftime is called only when calling repr/ > str) In [178]: ts.__version__ Out[178]: '0.67.0.dev-r1570' From pgmdevlist at gmail.com Tue Dec 1 00:45:07 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 1 Dec 2009 00:45:07 -0500 Subject: [SciPy-User] scikits.timeseries question In-Reply-To: <9F875DA8-211C-45FF-B9F4-935C7588CCDF@diablotech.com> References: <4B145C25.7040303@noaa.gov> <9DF1735B-BD7A-456A-8DD2-08C281440221@gmail.com> <4B146218.9000305@noaa.gov> <4B146E79.7090407@noaa.gov> <2B27D020-8DE3-4BFE-8B84-936E4A1B9FBE@diablotech.com> <17E9CA7E-7446-4202-997C-9AB5081977C0@gmail.com> <9F875DA8-211C-45FF-B9F4-935C7588CCDF@diablotech.com> Message-ID: <7288DC84-A0C6-4254-80C5-D37A54EC2A0E@gmail.com> On Dec 1, 2009, at 12:32 AM, Robert Ferrell wrote: > > Since a DateArray has more methods than a bare np.array, it would > surprise me if an array of Dates were a DateArray. That's why it > seems natural that np.array + Date = np.array. I think that behavior > will continue, right? Most likely, yes... But it's still premature to tell.
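The priority mechanism Pierre describes can be sketched with a toy ndarray subclass standing in for DateArray; the class name and priority value below are illustrative, not taken from the scikit:

```python
import numpy as np

class TaggedArray(np.ndarray):
    # Toy stand-in for DateArray: a higher __array_priority__ than a
    # plain ndarray (which has 0.0) means that in "ndarray + TaggedArray"
    # this subclass's __add__ decides the type of the result.
    __array_priority__ = 15.0

a = np.arange(4)                    # plain ndarray
t = np.arange(4).view(TaggedArray)  # subclass view of the same data

print(type(a + t).__name__)  # TaggedArray -- the subclass wins the binary op
```

This is the reason np.arange(4) + ts.DateArray(sd) comes back as a DateArray, while adding a bare Date (which numpy sees as a generic Python object) yields an object-dtype ndarray.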
> In any case, I find the timeseries scikit very useful, and while > there's always room for improvement, as it is you & Matt did a great > job. Thanks a lot ! > > In [178]: ts.__version__ > Out[178]: '0.67.0.dev-r1570' My, the SVN version matches, but the base version does not... Gonna have to fix that From almar.klein at gmail.com Tue Dec 1 09:36:16 2009 From: almar.klein at gmail.com (Almar Klein) Date: Tue, 1 Dec 2009 15:36:16 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: Message-ID: Hi all, I should have tested visvis on IPython before publishing it. I use an IDE that hijacks the mainloops of qt4 and wx and keeps them up to date. Therefore visvis does not work out of the box for most people. This is now fixed (available via svn, and in the next release). In the Python and IPython shells one has to apply the following to use visvis interactively: import visvis as vv vv.use('qt4') # Use this for version 1.0. In newer versions this will not be required. This works for Qt. Unfortunately it does not work for wxPython yet, but I'm looking into this. Almar 2009/11/30 Almar Klein : > Hi all, > > I am pleased to announce the first release of visvis, a Python > visualization library for 1D to 4D data. > > Website: http://code.google.com/p/visvis/ > Discussion group: http://groups.google.com/group/visvis/ > > Since this is the first release, it hasn't been tested on a large > scale yet. Therefore I'm specifically interested to know whether it > works for everyone. > > === Description === > > Visvis is a pure Python visualization library that uses OpenGl to > display 1D to 4D data; it can be used for anything from simple plotting tasks to > rendering 3D volumetric data that moves in time. > > Visvis can be used in Python scripts, interactive Python sessions (as > with IPython or IEP) and can be embedded in applications. > > Visvis employs an object oriented structure; each object being > visualized (e.g.
a line or a texture) has various properties that can > be modified to change its behaviour or appearance. A Matlab-like > interface in the form of a set of functions allows easy creation of > these objects (e.g. plot(), imshow(), volshow()). > > Regards, > Almar > From dalloliogm at gmail.com Tue Dec 1 09:46:52 2009 From: dalloliogm at gmail.com (Giovanni Marco Dall'Olio) Date: Tue, 1 Dec 2009 15:46:52 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: Message-ID: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> On Mon, Nov 30, 2009 at 10:05 AM, Almar Klein wrote: > Hi all, > > I am pleased to announce the first release of visvis, a Python > visualization library for 1D to 4D data. > > Website: http://code.google.com/p/visvis/ > Discussion group: http://groups.google.com/group/visvis/ > This is interesting; but which are the differences with matplotlib/pylab? Can you make some examples of functions that are available with vv and maybe more difficult to implement with matplotlib? I am preparing a seminar on how to plot data with Python and I am interested in adding references to new alternatives. cheers! -- Giovanni Dall'Olio, PhD student Department of Biologia Evolutiva at CEXS-UPF (Barcelona, Spain) My blog on bioinformatics: http://bioinfoblog.it -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaedol at gmail.com Tue Dec 1 10:35:04 2009 From: gaedol at gmail.com (Marco) Date: Tue, 1 Dec 2009 16:35:04 +0100 Subject: [SciPy-User] matrix of vectors Message-ID: Hi all, I'd need to build a matrix (NxN) in which each element is a vector with 3 components. What is the fastest way to do this?
TIA, marco From almar.klein at gmail.com Tue Dec 1 11:02:36 2009 From: almar.klein at gmail.com (Almar Klein) Date: Tue, 1 Dec 2009 17:02:36 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> Message-ID: Hi Giovanni, Thanks for your interesting question. > This is interesting; but which are the differences with matplotlib/pylab? > Can you make some examples of functions that are available with vv and maybe > more difficult to implement with matplotlib? > I am preparing a seminar on how to plot data with python and I am interested > in adding references to new alternatives. The advantage of visvis over matplotlib is that it can plot 3D data. These can be 3D lines and points or 3D rendering of volumetric data (for example CT data). Also, because it uses OpenGl, displaying images, and zooming/panning them is much faster in visvis. Visvis is also designed to be extendable. If you know a bit of OpenGl you can easily create your own world-objects (I call them wobjects) and place them in a scene with other objects. I tried to make visvis a relatively complete visualization library by enabling easy plotting of high quality, with support for labels, title, legend, etc. However, compared to matplotlib, visvis 2D plotting capabilities are very limited. I plan to extend these a bit, by for example providing support for bar plots, but I do not intend to go much further than that. So you might say that visvis is more focussed at 3D visualization, and leaves the advanced 2D visualization to matplotlib. But there is of course some overlap. Another thing to note is that both visvis and matplotlib use an object oriented structure with a functional interface similar to Matlab, which makes the use of both libraries very easy and intuitive. 
Cheers, Almar From nmb at wartburg.edu Tue Dec 1 11:53:54 2009 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Tue, 01 Dec 2009 10:53:54 -0600 Subject: [SciPy-User] matrix of vectors In-Reply-To: References: Message-ID: <4B154A22.90208@wartburg.edu> On 2009-12-01 09:35 , Marco wrote: > Hi all, > > I'd need to build a matrix (NxN) in which each element is a vector > with 3 components. One possible way to do this is to use a rich datatype like >>> N = 64 >>> a = np.zeros((N,N), dtype=np.dtype("(3,)f8")) >>> a[0,0] array([ 0., 0., 0.]) which defines an NxN array whose elements are length 3 vectors of floats. You can also just think of this as an NxNx3 array of floats where numpy's indexing rules let you index just the first two dimensions: In [8]: a = np.zeros((N,N,3)) In [9]: a[1,1] Out[9]: array([ 0., 0., 0.]) You should note that if you want to multiply these vectors by other matrices (such as rotations, etc.) then it is possible (but tricky) to use np.dot to do so. -Neil From gaedol at gmail.com Tue Dec 1 11:56:42 2009 From: gaedol at gmail.com (Marco) Date: Tue, 1 Dec 2009 17:56:42 +0100 Subject: [SciPy-User] matrix of vectors In-Reply-To: <4B154A22.90208@wartburg.edu> References: <4B154A22.90208@wartburg.edu> Message-ID: Thank you Neil! I guess I'll go with the first option. I wasn't aware that these tricks were possible in scipy. Any pointers to this kind of trick? Thank you! marco On Tue, Dec 1, 2009 at 5:53 PM, Neil Martinsen-Burrell wrote: > On 2009-12-01 09:35 , Marco wrote: >> >> Hi all, >> >> I'd need to build a matrix (NxN) in which each element is a vector >> with 3 components. > > One possible way to do this is to use a rich datatype like > >>>> N = 64 >>>> a = np.zeros((N,N), dtype=np.dtype("(3,)f8")) >>>> a[0,0] > array([ 0., 0., 0.]) > > which defines an NxN array whose elements are length 3 vectors of floats.
> You can also just think of this as an NxNx3 array of floats where numpy's > indexing rules let you index just the first two dimensions: > > In [8]: a = np.zeros((N,N,3)) > > In [9]: a[1,1] > Out[9]: array([ 0., 0., 0.]) > > You should note that if you want to multiply these vectors by other matrices > (such as rotations, etc.) then it is possible (but tricky) to use np.dot to > do so. > > -Neil > From nwagner at iam.uni-stuttgart.de Tue Dec 1 15:48:44 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 01 Dec 2009 21:48:44 +0100 Subject: [SciPy-User] Kommander & python Message-ID: Hi all, Sorry if the topic is out of place here ... How can I execute python (including arguments) from Kommander? http://kommander.kdewebdev.org/ Any pointer would be appreciated. Thanks in advance Nils From wnbell at gmail.com Tue Dec 1 19:14:30 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 1 Dec 2009 19:14:30 -0500 Subject: [SciPy-User] isn't it a bug in scipy.sparse? + some questions In-Reply-To: References: Message-ID: 2009/11/8 Dmitrey : > > Questions: > 1) Seems like a[some_ind,:]=something works very, very slowly for lil. I have > implemented a workaround, but can I use a[some_ind,:] for another format > than lil? (it seems all the other ones don't support it). Please file a bug report if you haven't done so already. It's possible that this issue is already fixed [1] or will be fixed in 0.8. [1] http://projects.scipy.org/scipy/ticket/917 > 2) What is the current situation with the matmat and matvec functions? They say > "deprecated" but no alternative is mentioned. Those are implemented with A*x now (just like a dense matrix). Sorry for the confusion, the deprecation warning should be more explicit. Feel free to file a bug report :) > 3) What is the current situation with scipy.sparse.linalg.spsolve?
It says > /usr/lib/python2.6/dist-packages/scipy/sparse/linalg/dsolve/linsolve.py:78: > DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, > install scikits.umfpack instead > ' install scikits.umfpack instead', DeprecationWarning ) > But I don't want my code to be dependent on a scikits module. Is there > another default/autoselect solver for sparse SLEs? > If not, which one would you recommend me to use as the default for sparse SLEs - > bicg, gmres, something else? Currently, if you have UMFPACK installed when SciPy is compiled (often this happens automatically) then it will be the default handler for spsolve(). Since UMFPACK is GPL code, a decision was made to move the wrappers to an optional package (scikits.umfpack). Eventually, these wrappers will be removed from SciPy and the default will revert to SuperLU, which is a BSD-licensed sparse factorization method that ships with SciPy. -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ From vanleeuwen.martin at gmail.com Tue Dec 1 19:39:29 2009 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Tue, 1 Dec 2009 16:39:29 -0800 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> Message-ID: Hi Almar, That is great news. I have used Mayavi quite a bit for working with 3D point clouds. It works very well, but as I am working a lot on these point clouds, I am always interested in other options. I am wondering in what way Visvis is different. Are there export functions to IDL? Or Constructive Solid Geometry functions/ways to incorporate these? Thanks Martin 2009/12/1 Almar Klein > Hi Giovanni, > > Thanks for your interesting question. > > > This is interesting; but which are the differences with matplotlib/pylab? > > Can you make some examples of functions that are available with vv and > maybe > more difficult to implement with matplotlib?
> > I am preparing a seminar on how to plot data with python and I am > interested > > in adding references to new alternatives. > > The advantage of visvis over matplotlib is that it can plot 3D data. > These can be 3D > lines and points or 3D rendering of volumetric data (for example CT > data). Also, > because it uses OpenGl, displaying images, and zooming/panning them is much > faster in visvis. Visvis is also designed to be extendable. If you > know a bit of OpenGl > you can easily create your own world-objects (I call them wobjects) > and place them > in a scene with other objects. > > I tried to make visvis a relatively complete visualization library by > enabling easy > plotting of high quality, with support for labels, title, legend, etc. > However, compared > to matplotlib, visvis 2D plotting capabilities are very limited. I > plan to extend these a > bit, by for example providing support for bar plots, but I do not > intend to go much > further than that. > > So you might say that visvis is more focussed at 3D visualization, and > leaves the > advanced 2D visualization to matplotlib. But there is of course some > overlap. Another > thing to note is that both visvis and matplotlib use an object > oriented structure with a > functional interface similar to Matlab, which makes the use of both > libraries very easy > and intuitive. > > Cheers, > Almar > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donovan.parks at gmail.com Tue Dec 1 20:11:11 2009 From: donovan.parks at gmail.com (Donovan Parks) Date: Tue, 1 Dec 2009 21:11:11 -0400 Subject: [SciPy-User] SciPy under Windows 7 (x64) Message-ID: <8fd818a30912011711m31aecc12i35c97a8f960769ea@mail.gmail.com> Hello, I recently moved to Windows 7 (x64) and am having trouble with SciPy (or perhaps NumPy). 
I'm using Python 2.6.4, the scipy-0.7.1-win32-superpack-python2.6.exe for SciPy, and the numpy-1.4.0rc1-win32-superpack-python2.6.exe for NumPy. A simple example of SciPy/NumPy not working is as follows: >>> from scipy.stats import chi2 Traceback (most recent call last): File "", line 1, in File "D:\Python26\lib\site-packages\scipy\stats\__init__.py", line 7, in from stats import * File "D:\Python26\lib\site-packages\scipy\stats\stats.py", line 203, in from morestats import find_repeats #is only reference to scipy.stats File "D:\Python26\lib\site-packages\scipy\stats\morestats.py", line 7, in import distributions File "D:\Python26\lib\site-packages\scipy\stats\distributions.py", line 27, in import vonmises_cython File "numpy.pxd", line 30, in scipy.stats.vonmises_cython (scipy\stats\vonmises_cython.c:2939) ValueError: numpy.dtype does not appear to be the correct type object Do I need to build NumPy and SciPy from source for them to work under Windows 7 (x64)? or is there a way I can use easy_install? Cheers, Donovan From zachary.pincus at yale.edu Tue Dec 1 20:12:44 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 1 Dec 2009 20:12:44 -0500 Subject: [SciPy-User] useful code: binary heaps and Dijkstra's algorithm/minimum-cost-path for arrays (cython) Message-ID: <0DEEA605-5E5D-4FB8-BD0A-F2F3B75D979B@yale.edu> Hello all, Thanks to generous contributions from Almar Klein, here are two useful bits of code for those interested: (1) A fast cython implementation, written by Almar, of a binary min heap. This of course provides O(log2(N)) pushes to and pops from the heap, and O(1) queries of the minimum value. Also, this implementation can track an arbitrary integer reference along with each value pushed. Further, there is another heap class that keeps an index, so that it is possible to query the value for a given reference in O(1) time. 
This also allows the heap to update the value for a given reference in O(log2(N)) time -- very useful for pathfinding algorithms, e.g., which keep track of the minimum cost of the path to various points. (2) Cython code for finding the minimum cost path through an n-d array from any start point(s) to any/all points on the array. The code handles boundary conditions properly (and quickly), and can (optionally) account for the fact that diagonal moves between elements are geometrically longer than axial moves and thus weight the cost of the path accordingly. (This implementation, based on Almar's, is essentially Dijkstra's algorithm plus the optional distance-weighting. It's about an order of magnitude faster than the brute-force path-finding algorithm I posted earlier; plus it's not constrained to 2d arrays.) The latter class probably belongs in the imaging scikit, but I figured I'd post them both here in case anyone would find them of use. They're BSD licensed. Zach -------------- next part -------------- A non-text attachment was scrubbed... Name: mcp.zip Type: application/zip Size: 13765 bytes Desc: not available URL: From josef.pktd at gmail.com Tue Dec 1 20:27:07 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 1 Dec 2009 20:27:07 -0500 Subject: [SciPy-User] SciPy under Windows 7 (x64) In-Reply-To: <8fd818a30912011711m31aecc12i35c97a8f960769ea@mail.gmail.com> References: <8fd818a30912011711m31aecc12i35c97a8f960769ea@mail.gmail.com> Message-ID: <1cd32cbb0912011727w4bf99599te43aacf962b892c1@mail.gmail.com> On Tue, Dec 1, 2009 at 8:11 PM, Donovan Parks wrote: > Hello, > > I recently moved to Windows 7 (x64) and am having trouble with SciPy > (or perhaps NumPy). I'm using Python 2.6.4, the > scipy-0.7.1-win32-superpack-python2.6.exe for SciPy, and the > numpy-1.4.0rc1-win32-superpack-python2.6.exe for NumPy.
A simple > example of SciPy/NumPy not working is as follows: > >>>> from scipy.stats import chi2 > Traceback (most recent call last): > File "", line 1, in > File "D:\Python26\lib\site-packages\scipy\stats\__init__.py", line > 7, in > from stats import * > File "D:\Python26\lib\site-packages\scipy\stats\stats.py", line 203, > in > from morestats import find_repeats #is only reference to scipy.stats > File "D:\Python26\lib\site-packages\scipy\stats\morestats.py", line > 7, in > import distributions > File "D:\Python26\lib\site-packages\scipy\stats\distributions.py", > line 27, in > import vonmises_cython > File "numpy.pxd", line 30, in scipy.stats.vonmises_cython > (scipy\stats\vonmises_cython.c:2939) > ValueError: numpy.dtype does not appear to be the correct type object > > Do I need to build NumPy and SciPy from source for them to work under > Windows 7 (x64)? or is there a way I can use easy_install? > > Cheers, > Donovan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > David will know better, but can you try scipy-0.7.1 with numpy 1.3 to see whether this combination works on Windows 7 and to see whether your error is numpy 1.4.0 related? (I'm still not convinced that binary compatibility hasn't been broken.) On Windows XP, compiling scipy trunk against numpy 1.4 works so far without problems (but I have problems with crashes of matplotlib, which is compiled against an older numpy.) I had a similar error a while ago when scipy compiled against an earlier version wasn't binary compatible with numpy trunk.
Josef From almar.klein at gmail.com Wed Dec 2 03:43:12 2009 From: almar.klein at gmail.com (Almar Klein) Date: Wed, 2 Dec 2009 09:43:12 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> Message-ID: Hi Martin, I've no experience with mayavi, so I'm not sure about the differences. Visvis is purely a visualization library, so (other than reading and writing images) it does not have much import/export functionality. For what it's worth: when visualizing clouds with a large number of points in visvis, you can choose your markers such that only OpenGl points are used (otherwise visvis uses sprites, which can be slower, especially for a large number of points). To do this, choose the markers of the Line object to be circles or squares, and if you use an alpha value smaller than 1, do not use a marker edge. Almar 2009/12/2 Martin van Leeuwen : > Hi Almar, > > That is great news. I have used Mayavi quite a bit for working with 3D point > clouds. It works very well, but as I am working a lot on these point clouds, > I am always interested in other options. > I am wondering in what way Visvis is different. Are there export functions > to IDL? Or Constructive Solid Geometry functions/ways to incorporate these? > > Thanks > Martin > > 2009/12/1 Almar Klein >> >> Hi Giovanni, >> >> Thanks for your interesting question. >> >> > This is interesting; but which are the differences with >> > matplotlib/pylab? >> > Can you make some examples of functions that are available with vv and >> > maybe >> > more difficult to implement with matplotlib? >> > I am preparing a seminar on how to plot data with python and I am >> > interested >> > in adding references to new alternatives. >> >> The advantage of visvis over matplotlib is that it can plot 3D data. >> These can be 3D >> lines and points or 3D rendering of volumetric data (for example CT >> data).
Also, >> because it uses OpenGl, displaying images, and zooming/panning them is >> much >> faster in visvis. Visvis is also designed to be extendable. If you >> know a bit of OpenGl >> you can easily create your own world-objects (I call them wobjects) >> and place them >> in a scene with other objects. >> >> I tried to make visvis a relatively complete visualization library by >> enabling easy >> plotting of high quality, with support for labels, title, legend, etc. >> However, compared >> to matplotlib, visvis 2D plotting capabilities are very limited. I >> plan to extend these a >> bit, by for example providing support for bar plots, but I do not >> intend to go much >> further than that. >> >> So you might say that visvis is more focussed at 3D visualization, and >> leaves the >> advanced 2D visualization to matplotlib. But there is of course some >> overlap. Another >> thing to note is that both visvis and matplotlib use an object >> oriented structure with a >> functional interface similar to Matlab, which makes the use of both >> libraries very easy >> and intuitive. >> >> Cheers, >> ?Almar >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From gael.varoquaux at normalesup.org Wed Dec 2 03:49:28 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 2 Dec 2009 09:49:28 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> Message-ID: <20091202084928.GB17292@phare.normalesup.org> On Wed, Dec 02, 2009 at 09:43:12AM +0100, Almar Klein wrote: > Hi Martin, > I've no experience with mayavi, so I 'm not sure about the differences. Extended documentation is available. 
The part of the documentation that describes functionality similar to visvis (from what I understood of visvis) is here: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html Gaël From gaedol at gmail.com Wed Dec 2 05:56:22 2009 From: gaedol at gmail.com (Marco) Date: Wed, 2 Dec 2009 11:56:22 +0100 Subject: [SciPy-User] Saving Graphs Message-ID: Hi all! In the simulations I write I have to save the graph, with all its attributes, very often. If I save it using gml or something like this, I end up with many (say: thousands) of very small files, which are not very practical to carry around and tend to fill the filesystem very quickly. I am looking for a way to save graphs (not just single graphs, but rather several graphs in the same file) which is a little more clever. So far I have been looking at pickling the whole thing, but I find it rather unsatisfactory. I have heard of HDF (or something like this), but the overhead of using it looks rather big to me. Does anyone have hints, remarks, or advice? Thank you in advance, marco -- restituiremo il riso ai salici From zachary.pincus at yale.edu Wed Dec 2 07:52:00 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 2 Dec 2009 07:52:00 -0500 Subject: [SciPy-User] useful code: binary heaps and Dijkstra's algorithm/minimum-cost-path for arrays (cython) In-Reply-To: <0DEEA605-5E5D-4FB8-BD0A-F2F3B75D979B@yale.edu> References: <0DEEA605-5E5D-4FB8-BD0A-F2F3B75D979B@yale.edu> Message-ID: <65785C39-A85D-487C-9ABF-A64FB6B3583C@yale.edu> Oops: I forgot to free some memory in the heap class. Here's a fixed version for those playing at home. -------------- next part -------------- A non-text attachment was scrubbed...
Name: heap.pyx Type: application/octet-stream Size: 22987 bytes Desc: not available URL: -------------- next part -------------- From donovan.parks at gmail.com Wed Dec 2 08:19:23 2009 From: donovan.parks at gmail.com (Donovan Parks) Date: Wed, 2 Dec 2009 09:19:23 -0400 Subject: [SciPy-User] SciPy under Windows 7 (x64) Message-ID: <8fd818a30912020519j2bb498ednedb298efddc5aa17@mail.gmail.com> Hello Josef, > David will know better, but can you try scipy-0.7.1 with numpy 1.3. to > see whether this combination works on Windows 7 and to see whether > your error is numpy 1.4.0 related? (I'm still not convinced that > binary compatibility hasn't been broken.) I moved to numpy 1.3 and indeed this fixed the problem. Thanks for the help. Cheers, Donovan From bsouthey at gmail.com Wed Dec 2 09:26:42 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 02 Dec 2009 08:26:42 -0600 Subject: [SciPy-User] SciPy under Windows 7 (x64) In-Reply-To: <8fd818a30912020519j2bb498ednedb298efddc5aa17@mail.gmail.com> References: <8fd818a30912020519j2bb498ednedb298efddc5aa17@mail.gmail.com> Message-ID: <4B167922.8050605@gmail.com> On 12/02/2009 07:19 AM, Donovan Parks wrote: > Hello Josef, > > >> David will know better, but can you try scipy-0.7.1 with numpy 1.3. to >> see whether this combination works on Windows 7 and to see whether >> your error is numpy 1.4.0 related? (I'm still not convinced that >> binary compatibility hasn't been broken.) >> > I moved to numpy 1.3 and indeed this fixed the problem. Thanks for the help. > > Cheers, > Donovan > While I saw the same error, I was able to do "import scipy.stats as stats" and the chi2 function appeared to be present (but I did not try it). So perhaps some variable(s) are not correctly defined. Given the numpy and scipy changes, I do not know if it would exist with a svn version of scipy. So far compiling under Windows is way too hard for me to grasp (mainly time and effort to get the correct environment) but I can test things. 
Bruce From Jim.Vickroy at noaa.gov Wed Dec 2 12:04:35 2009 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Wed, 02 Dec 2009 10:04:35 -0700 Subject: [SciPy-User] Saving Graphs In-Reply-To: References: Message-ID: <4B169E23.1090400@noaa.gov> Marco wrote: > Hi all! > > In the simulations I write I have to save very often the graph, with > all the attributes. > If I save it using gml or something like this, I end up with many > (say: thousands) of very small files, which are not very practical to > carry around and tend to full the filesystem very quickly. > > I am looking for a way to save graphs (even not single graphs, but > rather more graphs in the same file) which is a little more clever. > > So far I have been looking at pickling the whole thing, but I find it > rather unsatisfactory. > I have heard of hdf (or something like this). Looks to me that the > overhead to use it is rather big. > > Someone has highlights, remarks, advice? > > Thank you in advance, > > marco > > > -- > restituiremo il riso ai salici > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi Marco, Depending on the size and number of graphs, you may consider storing them as BLOBs in a sqlite relational store (or stores). sqlite is easy to use and is included in recent distributions of python. -- jv -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Wed Dec 2 12:10:14 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 3 Dec 2009 02:10:14 +0900 Subject: [SciPy-User] SciPy under Windows 7 (x64) In-Reply-To: <4B167922.8050605@gmail.com> References: <8fd818a30912020519j2bb498ednedb298efddc5aa17@mail.gmail.com> <4B167922.8050605@gmail.com> Message-ID: <5b8d13220912020910y3931db7kb8c29dcc957bbb4c@mail.gmail.com> On Wed, Dec 2, 2009 at 11:26 PM, Bruce Southey wrote: > On 12/02/2009 07:19 AM, Donovan Parks wrote: >> Hello Josef, >> >> >>> David will know better, but can you try scipy-0.7.1 with numpy 1.3. to >>> see whether this combination works on Windows 7 and to see whether >>> your error is numpy 1.4.0 related? (I'm still not convinced that >>> binary compatibility hasn't been broken.) >>> >> I moved to numpy 1.3 and indeed this fixed the problem. Thanks for the help. >> >> Cheers, >> Donovan >> > > While I saw the same error, I was able to do "import scipy.stats as > stats" and the chi2 function appeared to be present (but I did not try > it). So perhaps some variable(s) are not correctly defined. I have seen the error on linux as well, and it was always related to cython-generated code. I don't have the time to chase it right now, but I suspect that it is a forward incompatibility, not a backward compatibility (i.e. you have an extension built with numpy 1.4, and then try to load it under numpy 1.3). Starting from numpy 1.4.0, we have a way to check for this, and multiarray will raise an exception whenever an extension built against N is loaded under numpy M < N. 
Unfortunately, this will only work as long as both N and M are >= 1.4, so it is not very useful right now :) David From jagan_cbe2003 at yahoo.co.in Wed Dec 2 12:46:41 2009 From: jagan_cbe2003 at yahoo.co.in (jagan prabhu) Date: Wed, 2 Dec 2009 23:16:41 +0530 (IST) Subject: [SciPy-User] Specifying Inequality constraints as a penalty term to the objective function for fmin, l_bfgs, cobyla etc., Message-ID: <354873.54983.qm@web8316.mail.in.yahoo.com> Hi all, I would like to specify my inequality constraints as a penalty term in my objective function for the case of fmin or l_bfgs or any other optimization method in scipy. For example: please see the attached *.py file. The problem may be simple, but I need to write a penalty term for my inequality constraints, add it to the objective function, and solve the problem. The constraints are x>=y and u>=v How do I write a penalty term for my constraints, include it in my objective function, and solve the optimization using any SciPy optimization method? Please help me. Regards, Jagan The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. http://in.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Inequality constraints.py Type: text/x-python Size: 929 bytes Desc: not available URL: From dwf at cs.toronto.edu Wed Dec 2 14:30:10 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 2 Dec 2009 14:30:10 -0500 Subject: [SciPy-User] Saving Graphs In-Reply-To: References: Message-ID: <3191947A-696A-4607-920A-250BA364C91F@cs.toronto.edu> On 2-Dec-09, at 5:56 AM, Marco wrote: > I am looking for a way to save graphs (not just single graphs, but > rather several graphs in the same file) which is a little more clever. If you store them as certain kinds of sparse matrix, e.g. csr, csc or coo, the underlying storage is NumPy arrays. You can then use np.save or np.savez, which are considerably faster than pickling. 
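[Editor's note: David's point can be sketched with plain NumPy. A graph in COO (coordinate) form is just three arrays — the same `row`, `col`, and `data` arrays a `scipy.sparse.coo_matrix` holds internally — and `np.savez` bundles several such arrays (or several graphs under different keys) into a single `.npz` file. A minimal sketch, using an in-memory buffer where a real filename would normally go:]

```python
import io
import numpy as np

# A 3-node graph in COO form: edges (0,1), (0,2), (1,2), (2,0),
# each with weight 1.  These arrays are exactly what a
# scipy.sparse.coo_matrix stores internally.
row = np.array([0, 0, 1, 2])
col = np.array([1, 2, 2, 0])
data = np.ones(4)

# Bundle the arrays into one .npz archive; more graphs could be added
# under additional keyword names (g2_row=..., g2_col=..., etc.).
buf = io.BytesIO()  # stands in for open("graphs.npz", "wb")
np.savez(buf, row=row, col=col, data=data)

# Read them back.
buf.seek(0)
loaded = np.load(buf)
restored_row = loaded["row"]
```

Since `.npz` is just a zip of raw `.npy` arrays, loading avoids the per-object overhead of unpickling, which is where the speed advantage David mentions comes from.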
David From Chris.Barker at noaa.gov Wed Dec 2 18:14:08 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 02 Dec 2009 15:14:08 -0800 Subject: [SciPy-User] scikits.timeseries question In-Reply-To: <3EB215DA-3808-42CE-B2E0-6568B6B40C37@gmail.com> References: <4B145C25.7040303@noaa.gov> <9DF1735B-BD7A-456A-8DD2-08C281440221@gmail.com> <4B146218.9000305@noaa.gov> <4B146E79.7090407@noaa.gov> <3EB215DA-3808-42CE-B2E0-6568B6B40C37@gmail.com> Message-ID: <4B16F4C0.9090604@noaa.gov> Folks, Sorry to ask a question and then disappear, but I had a full day away from work and email yesterday! I do appreciate all the help, though. It's going to take a bit to digest this, but it looks like I can do what I want to do here. Pierre GM wrote: >>> np.array(...) + sd gives you a ndarray of Date objects (so its dtype >>> is np.object), and you use that as the input of date_array. The >>> frequency should be recognized properly. >> OK -- though it seems I SHOULD be able to go straight to a DateArray, >> and I'm still confused about what this means: > > Well, that depends on the type of starting date, actually. If it's a > Date, adding a ndarray to it will give you a ndarray of Date > objects. If it's a DateArray of length 1, it'll give you a DateArray. ah -- that does look helpful -- I guess I find it surprising that there is a difference -- I think this may be legacy issues -- what I imagine we'd want is a DateArray, and there would be a corresponding DateScalar -- like there is for an array of float128s or whatever. >>> * a sequence of integers corresponding to the representation of >>> :class:`Date` objects. >> That's what I have: a sequence of integers corresponding to the >> representation of the Date objects (doesn't it represent them as "units >> since start date" where units is the "freq" ? > > No, not exactly: the representation of a Date object is relative to an absolute built-in reference (Day #1 being 01/01/01). 
(Likewise, numpy.datetime64 uses the standard 1970/01/01). OK -- got that now. I suppose I could add the integer value of my start date and then I'd get what I want. >> Definition: >> ts_lib.mov_average(data, span, dtype=None) >> Docstring: >> Calculates the moving average of a series. >> >> Parameters >> ---------- >> data : array-like >> Input data, as a sequence or (subclass of) ndarray. >> Masked arrays and TimeSeries objects are also accepted. >> The input array should be 1D or 2D at most. >> If the input array is 2D, the function is applied on each >> column. >> >> I've got a 3-d array -- darn! Maybe I'll poke into it and see if it can >> be generalized. > > > 3D ? What are your actual variables? Keep in mind that when we talk about dimensions with time series, we zap the time one, so if you have a series of maps, your array is only 2D in our terminology. > If you have a time series of (lat, lon), mov_average will average your lats independently of your lons Then it should work -- what I've got is a 3-d array in a netcdf file; it has dimensions: (time, lat, long) -- so it sounds like that should work. Robert Ferrell wrote: > Is the issue that sd is a Date and not a DateArray? You can always > make a DateArray with sd, of the correct length, and then add to that: > > In [83]: sd = ts.Date('d', '2001-01-01') > In [86]: d1 + array([0,2,3,5]) > Out[86]: > DateArray([01-Jan-2001, 03-Jan-2001, 04-Jan-2001, 06-Jan-2001], > freq='D') yup - that's what I want! Thanks. > In any case, I get the impression that the OP has an array of integer > offsets generated in some other fashion entirely. exactly. 
Matt Knox wrote: > In addition to the methods described by Pierre and Robert, you could also do: > >>>> sd = ts.now('d') >>>> relative_days = np.array([1,5,8]) >>>> absolute_days = relative_days + sd.value >>>> darray = ts.date_array(absolute_days, freq = sd.freq) > > which I think probably has the lowest overhead (but don't hold me to that :) ) > if that matters for your application. Frankly, I don't think it does matter, but I like efficiency all the same. Thanks to all three of you, -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From vanleeuwen.martin at gmail.com Wed Dec 2 21:06:02 2009 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Wed, 2 Dec 2009 18:06:02 -0800 Subject: [SciPy-User] ANN: visvis In-Reply-To: <20091202084928.GB17292@phare.normalesup.org> References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> <20091202084928.GB17292@phare.normalesup.org> Message-ID: Thanks, Yes, well I guess one advantage would be the 4th-Dimension, depending on the application of course. Martin 2009/12/2 Gael Varoquaux > On Wed, Dec 02, 2009 at 09:43:12AM +0100, Almar Klein wrote: > > Hi Martin, > > > I've no experience with mayavi, so I 'm not sure about the differences. > > Extended documentation is available. The part of the documentation that > describes functionality similar to visvis (from what I understood of > visvis) is here: > > http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html > > Ga?l > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steven.nien at gmail.com Thu Dec 3 03:22:03 2009 From: steven.nien at gmail.com (Steven Nien) Date: Thu, 3 Dec 2009 16:22:03 +0800 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline Message-ID: Hello, I'm trying to pass a 2-dimensional numpy array into 'weave.inline' and do some computation in C. But it seems that 'weave.inline' will convert a 2d array in python into a 1d array in C. So if I have an MxN array 'arr' I need to access arr[2][3] in C by the following code: """ arr[2*N+3] = 0; """ What I want is to access the array using the original C syntax, i.e., 'arr[2][3] = 0;' Since I have lots of legacy 2D-array code written in C, I don't want to use the Blitz method. And I still want to use the 'weave.inline' function. Does anyone have any idea? Regards, Steven Nien -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Dec 3 03:32:14 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 3 Dec 2009 10:32:14 +0200 Subject: [SciPy-User] Dijkstra's algorithm on a lattice In-Reply-To: References: <4B088A5B.8000604@bigpond.net.au> <66E0FDD3-4822-4722-BB6E-C7F21A679D32@yale.edu> <9457e7c80911220300l29370151y655159ab835e6359@mail.gmail.com> <9457e7c80911220307x28112983ud6e3b454b3b69911@mail.gmail.com> <6F8D1FC6-656C-4860-8D40-E61A7996920A@yale.edu> Message-ID: <9457e7c80912030032u8ef1ae3hfc7efa06cef7b3fa@mail.gmail.com> Hi Zach, Almar 2009/11/25 Almar Klein : > Ah, now I see. I'm sorry. Yes, your code should produce the correct result, > although it will probably evaluate a lot of pixels more than once :) Zach's code was integrated into scikits.image here: http://github.com/stefanv/scikits.image/blob/master/scikits/image/graph/trace_path.pyx Any patches or suggestions for improvements are appreciated. 
Thanks Stéfan From jgomezdans at gmail.com Thu Dec 3 06:19:50 2009 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Thu, 3 Dec 2009 11:19:50 +0000 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline In-Reply-To: References: Message-ID: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> Hi, 2009/12/3 Steven Nien > I'm trying to pass a 2 dimensional numpy array into the 'weave.inline' and > doing some computation in C. > But it seems that the 'weave.inline' will convert a 2d array in python into > a 1d array in C. > > So if I have a MxN array 'arr' > I need to access arr[2][3] in C by the following code: > I think weave inline compiles code into C++ and links against the blitz libraries, so that numpy arrays are blitz arrays. You can access the element in your example as arr(2,3). For internal consumption, I guess that the arrays are still chunks of memory. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.nien at gmail.com Thu Dec 3 07:03:29 2009 From: steven.nien at gmail.com (Steven Nien) Date: Thu, 3 Dec 2009 20:03:29 +0800 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline In-Reply-To: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> References: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> Message-ID: Hello, Thanks, but it didn't work (with error message). Below is my code: from scipy.weave import inline import numpy arr = numpy.zeros((2,3), dtype='double') code = \ """ arr(1,1) = 1; """ inline(code,['arr'], verbose = 1, compiler = 'gcc') print arr But what I actually want is something like this: from scipy.weave import inline import numpy arr = numpy.zeros((2,3), dtype='double') code = \ """ arr[1][1] = 1; """ inline(code,['arr'], verbose = 1, compiler = 'gcc') print arr Maybe I need to use the type_factories argument in the inline function or something like this? Any idea? 
Regards, Steven Nien 2009/12/3 Jose Gomez-Dans > Hi, > > 2009/12/3 Steven Nien > > I'm trying to pass a 2 dimensional numpy array into the 'weave.inline' and >> doing some computation in C. >> But it seems that the 'weave.inline' will convert a 2d array in python >> into a 1d array in C. >> >> So if I have a MxN array 'arr' >> I need to access arr[2][3] in C by the following code: >> > > I think weave inline compiles code into C++ and links against the blitz > libraries, so that numpy arrays are blitz arrays. You can access the element > in your example as arr(2,3). For internal consumption, I guess that the > arrays are still chunks of memory. > > J > -------------- next part -------------- An HTML attachment was scrubbed... URL: From almar.klein at gmail.com Thu Dec 3 07:11:46 2009 From: almar.klein at gmail.com (Almar Klein) Date: Thu, 3 Dec 2009 13:11:46 +0100 Subject: [SciPy-User] ANN: visvis In-Reply-To: References: <5aa3b3570912010646o4a732a6axfe16e431a30625bb@mail.gmail.com> <20091202084928.GB17292@phare.normalesup.org> Message-ID: > Extended documentation is available. The part of the documentation that > describes functionality similar to visvis (from what I understood of > visvis) is here: > > http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html Well, I guess one big difference is the fact that Mayavi is built on top of VTK, while visvis is built directly on top of OpenGL (which is also the base for VTK). This would probably make visvis a bit more lightweight. Since OpenGL allows more control, visvis gives users more freedom to create their own objects, and, for example, different volume renderers can relatively easily be implemented (in GLSL). Of course, most people are interested in using it out of the box though. Mayavi, on the other hand, can make use of a lot of fancy VTK features. In the future, I hope to implement some of the fancy stuff shown on Mayavi's page for visvis. 
Specifically, meshes and surfaces are things I believe should be part of visvis in the future. Another advantage might be that visvis is designed to be very transparent and easy to use. For example, in visvis the Axes object represents the scene in which the world-objects are visualized. These objects are simply kept in a list of the axes. Each object in the scene has a number of properties that can be changed to influence the behaviour (for example interpolation or anti-aliasing of a texture). In VTK, on the other hand, a visualisation pipeline is built that is (partly) executed on each draw. Mayavi seems to do this behind the scenes nicely though. Personally, I use visvis to display images and volumes and simultaneously display points and lines in the same scene. These points and lines mostly represent a result of some segmentation algorithm. In visvis it is very easy to attach callbacks to these objects, to for example get more information when clicking on a point or line in the scene. I'm not sure whether Mayavi can do the same. I hope this helps you get an idea of what visvis is and what it is not. Mostly though, visvis is a project that started from a need to easily display volumes and points and lines, and will be extended as my own needs increase (or other people ask me hard enough). Cheers, Almar 2009/12/3 Martin van Leeuwen : > Thanks, > > Yes, well I guess one advantage would be the 4th-Dimension, depending on the > application of course. > > Martin > > > 2009/12/2 Gael Varoquaux >> >> On Wed, Dec 02, 2009 at 09:43:12AM +0100, Almar Klein wrote: >> > Hi Martin, >> >> > I've no experience with mayavi, so I'm not sure about the differences. >> >> Extended documentation is available. 
The part of the documentation that >> describes functionality similar to visvis (from what I understood of >> visvis) is here: >> >> http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html >> >> Ga?l >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From jgomezdans at gmail.com Thu Dec 3 07:53:14 2009 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Thu, 3 Dec 2009 12:53:14 +0000 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline In-Reply-To: References: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> Message-ID: <91d218430912030453o7c40e936k89e25e543f574a15@mail.gmail.com> Hi, 2009/12/3 Steven Nien > Hello, > Thanks, but it didn't work (with error message). > Are you sure you are not hitting this bug? I have applied the patch and your code works here: *$ python test_weave.py Compiling code... finished compiling (sec): 1.45934510231 [[ 1. 0. 0.] [ 0. 0. 0.]] * Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.nien at gmail.com Thu Dec 3 08:32:14 2009 From: steven.nien at gmail.com (Steven Nien) Date: Thu, 3 Dec 2009 21:32:14 +0800 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline In-Reply-To: <91d218430912030453o7c40e936k89e25e543f574a15@mail.gmail.com> References: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> <91d218430912030453o7c40e936k89e25e543f574a15@mail.gmail.com> Message-ID: Hi, Thanks. I use the following code, and it works. inline(code,['arr'], verbose = 1, compiler = 'gcc', type_converters = converters.blitz) But it works in the arr(i,j) syntax. Is there any way to use the original C syntax ( arr[i][j] ) ? 
Regards, Steven Nien 2009/12/3 Jose Gomez-Dans > Hi, > > 2009/12/3 Steven Nien > >> Hello, >> >> Thanks, but it didn't work (with error message). >> > > Are you sure you are not hitting this bug? > > > I have applied the patch and your code works here: > > *$ python test_weave.py > > Compiling code... > finished compiling (sec): 1.45934510231 > [[ 1. 0. 0.] > [ 0. 0. 0.]] > * > Jose > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tiago at forked.de Thu Dec 3 09:29:45 2009 From: tiago at forked.de (Tiago de Paula Peixoto) Date: Thu, 03 Dec 2009 15:29:45 +0100 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <4ADFB98F.6020108@american.edu> References: <4ADFB98F.6020108@american.edu> Message-ID: On 10/22/2009 03:46 AM, Alan G Isaac wrote: > On 10/5/2009 12:22 PM, Tiago de Paula Peixoto wrote: >> http://graph-tool.forked.de > > So is the idea, in contrast say to NetworkX, > that graph-tool wraps the Boost Graph Library? Yes, basically. It is not simply a thin wrapper, though. And it also has other algorithms not present in the BGL. > If so, why is the license more restrictive > than the BGL? Why shouldn't it? If the BGL developers wanted to avoid this, they would have used another licence. As for my own stuff, I prefer the GPL, to avoid additional restrictions. Cheers, Tiago -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 262 bytes Desc: OpenPGP digital signature URL: From dagss at student.matnat.uio.no Thu Dec 3 09:31:35 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 03 Dec 2009 15:31:35 +0100 Subject: [SciPy-User] Problem with passing a 2D Numpy array in the C code by weave.inline In-Reply-To: References: <91d218430912030319t2b5d3a1me9c72b153ec7816b@mail.gmail.com> <91d218430912030453o7c40e936k89e25e543f574a15@mail.gmail.com> Message-ID: <4B17CBC7.1050209@student.matnat.uio.no> Steven Nien wrote: > Hi, > Thanks. > I use the following code, and it works. > > inline(code,['arr'], verbose = 1, compiler = 'gcc', type_converters = > converters.blitz) > > But it works in the arr(i,j) syntax. > > Is there any way to use the original C syntax ( arr[i][j] ) ? arr[i][j] fundamentally accesses something other than a NumPy array. NumPy arrays are blocks of memory indexed by indices and strides, while arr[i][j] refers to a double-pointer-indirection style array. I.e. they are simply not the same thing, and the answer is no. I'd use a regex search&replace. Alternatively, you can get what you want with a custom written C++ class (with an operator overload which defines [] in the way you need it). Dag Sverre From aisaac at american.edu Thu Dec 3 10:07:39 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 03 Dec 2009 10:07:39 -0500 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: References: <4ADFB98F.6020108@american.edu> Message-ID: <4B17D43B.2040204@american.edu> > On 10/22/2009 03:46 AM, Alan G Isaac wrote: >> So is the idea, in contrast say to NetworkX, >> that graph-tool wraps the Boost Graph Library? >> If so, why is the license more restrictive >> than the BGL? On 12/3/2009 9:29 AM, Tiago de Paula Peixoto wrote: > Why shouldn't it? If the BGL developers wanted to avoid this, they would > have used another licence. 
Yes of course the BGL developers allowed this freedom. My question is, when you are leaning so heavily on the BGL, why don't you feel some inclination to follow their generous lead and continue with the less restrictive licensing they adopted? On 12/3/2009 9:29 AM, Tiago de Paula Peixoto wrote: > As for my own stuff, I prefer the GPL, to > avoid additional restrictions. I do not understand. As this conversation makes clear, the GPL is the *source* of the additional restrictions. Which of course is why I asked: such restrictive licensing makes the code useless for many on this list. Naturally (!!), each of us should choose the license we find appropriate. I only bothered to ask since you posted this to a list for software that is committed to liberal licensing, *not* to the restrictions of the GPL, and I have found that many people pick the GPL by default rather than thinking through the implications. Cheers, Alan Isaac From tiago at forked.de Thu Dec 3 11:11:43 2009 From: tiago at forked.de (Tiago de Paula Peixoto) Date: Thu, 03 Dec 2009 17:11:43 +0100 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <4B17D43B.2040204@american.edu> References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> Message-ID: On 12/03/2009 04:07 PM, Alan G Isaac wrote: >> On 10/22/2009 03:46 AM, Alan G Isaac wrote: >>> So is the idea, in contrast say to NetworkX, >>> that graph-tool wraps the Boost Graph Library? >>> If so, why is the license more restrictive >>> than the BGL? > > On 12/3/2009 9:29 AM, Tiago de Paula Peixoto wrote: >> Why shouldn't it? If the BGL developers wanted to avoid this, they would >> have used another licence. > > Yes of course the BGL developers allowed this freedom. > My question is, when you are leaning so heavily on > the BGL, why don't you feel some inclination to > follow their generous lead and continue with the > less restrictive licensing they adopted? 
Less restrictive in this case means simply less restrictive to people who want to impose additional restrictions. :-) In other words, the BGL license allows proprietary derivatives, while the GPL doesn't. I find it quite strange that you do not consider this generous, since the only thing it does is prohibit _even stronger_ restrictions. I have the feeling you would not criticize quite as much if I had made the library proprietary... How generous would that be? > On 12/3/2009 9:29 AM, Tiago de Paula Peixoto wrote: >> As for my own stuff, I prefer the GPL, to >> avoid additional restrictions. > > I do not understand. As this conversation makes clear, > the GPL is the *source* of the additional restrictions. > Which of course is why I asked: such restrictive licensing > makes the code useless for many on this list. > > Naturally (!!), each of us should choose the license we find > appropriate. I only bothered to ask since you posted this > to a list for software that is committed to liberal licensing, > *not* to the restrictions of the GPL, and I have found that many > people pick the GPL by default rather than thinking through > the implications. This is an age-old debate about copyleft vs non-copyleft... Maybe it's not so useful to re-hash it now, but don't assume that those who choose copyleft do so for lack of information. Cheers, Tiago -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 262 bytes Desc: OpenPGP digital signature URL: From nwagner at iam.uni-stuttgart.de Thu Dec 3 11:16:42 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Dec 2009 17:16:42 +0100 Subject: [SciPy-User] scipy.test() Message-ID: >>> scipy.__version__ '0.8.0.dev6127' >>> numpy.__version__ '1.5.0.dev7827' ====================================================================== ERROR: test_mpmath.test_expi_complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/decorators.py", line 146, in skipper_func return f(*args, **kwargs) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_mpmath.py", line 45, in test_expi_complex dataset = np.array(dataset, dtype=np.complex_) TypeError: a float is required ====================================================================== FAIL: test_lambertw.test_values ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_lambertw.py", line 80, in test_values FuncData(w, data, (0,1), 2, rtol=1e-10, atol=1e-13).check() File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/testutils.py", line 187, in check assert False, "\n".join(msg) AssertionError: Max |adiff|: 2.5797 Max |rdiff|: 3.81511 Bad results for the following points (in output 0): (-0.44800000000000001+0.40000000000000002j) 0j => (-1.2370928928166736-1.6588828572971359j) != (-0.11855133765652383+0.66570534313583418j) 
(rdiff 3.8151122286225245) ---------------------------------------------------------------------- Ran 4235 tests in 128.083s FAILED (KNOWNFAIL=7, SKIP=20, errors=2, failures=1) From robert.kern at gmail.com Thu Dec 3 13:13:48 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 3 Dec 2009 13:13:48 -0500 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <4B17D43B.2040204@american.edu> References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> Message-ID: <3d375d730912031013r3ff74c69hd11b58473b73944e@mail.gmail.com> On Thu, Dec 3, 2009 at 10:07, Alan G Isaac wrote: > Naturally (!!), each of us should choose the license we find > appropriate. ?I only bothered to ask since you posted this > to a list for software that is committed to liberal licensing, ... for scipy itself. Please do not turn this list into a BSD-proselytizing forum. I would like this list to be friendly to announcements of all topically relevant software regardless of the license. Please be aware that your original response can be read as rude and accusatory even if you did not intend it so. There are better ways of asking if an author has considered a different license, and private email is probably a better medium. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Dec 3 16:00:53 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 03 Dec 2009 16:00:53 -0500 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> Message-ID: <4B182705.5070506@american.edu> On 12/3/2009 11:11 AM, Tiago de Paula Peixoto wrote: > I have the feeling you would not criticize quite as much, > if I had made the library proprietary. 
Making this library proprietary would of course be the fastest way to kill it outright. So that is a red herring. But just to be clear, I am not "criticizing" at all. As I said, we all understand that each developer chooses his/her own license on any criteria s/he chooses, naturally. However I still believe that you miscast the issues. You worry unrealistically that your code could be modified by someone who will refuse to share the modifications, and you do not worry enough about licensing your code in a way that makes it useless to the bulk of the community you announced it to. (I.e., the SciPy community.) At least, that is how it appears from your statements to date. (Not that you are under any obligation to explain your reasoning about this, *naturally*.) Cheers, Alan Isaac From aisaac at american.edu Thu Dec 3 16:04:49 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 03 Dec 2009 16:04:49 -0500 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <3d375d730912031013r3ff74c69hd11b58473b73944e@mail.gmail.com> References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <3d375d730912031013r3ff74c69hd11b58473b73944e@mail.gmail.com> Message-ID: <4B1827F1.9090902@american.edu> On 12/3/2009 1:13 PM, Robert Kern wrote: > Please be aware that your original response can be read as > rude and accusatory even if you did not intend it so. In that case I apologize, and wish that I had kept my mouth shut. > There are better > ways of asking if an author has considered a different license, and > private email is probably a better medium. Yes indeed. 
Alan From tiago at forked.de Thu Dec 3 17:23:58 2009 From: tiago at forked.de (Tiago de Paula Peixoto) Date: Thu, 03 Dec 2009 23:23:58 +0100 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <4B182705.5070506@american.edu> References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <4B182705.5070506@american.edu> Message-ID: On 12/03/2009 10:00 PM, Alan G Isaac wrote: > However I still believe that you miscast the > issues. You worry unrealistically that your > code could be modified by someone who will refuse > to share the modifications, and you do not worry > enough about licensing your code in a way that makes > it useless to the bulk of the community you announced > it to. (I.e., the SciPy community.) The issue of which is the best type of license aside, do you think that switching to LGPL would make it less "useless" to the SciPy community, or does it need to be absolutely non-copyleft, BSD-like? Cheers, Tiago -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 262 bytes Desc: OpenPGP digital signature URL: From gael.varoquaux at normalesup.org Thu Dec 3 17:38:53 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 3 Dec 2009 23:38:53 +0100 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <4B182705.5070506@american.edu> Message-ID: <20091203223853.GB11177@phare.normalesup.org> On Thu, Dec 03, 2009 at 11:23:58PM +0100, Tiago de Paula Peixoto wrote: > The issue of which is the best type of license aside, do you think that > switching to LGPL would make it less "useless" to the SciPy community, > or does it need to be absolutely non-copyleft, BSD-like? 
From a practical point of view, I can use an LGPL-licensed library in many more situations than I can use a GPL-licensed library, because the latter imposes restrictions on the users of my code. As a result, if my research yields practical applications, I can still 'sell' the code. Don't worry, the code I am talking about is research-grade code that gets pushed to BSD-licensed libraries when it is not so ugly that I can't even show it. However, I have found that having the argument that one day, I may be able to sell that code (I would be selling my expertise more than the code) helps a lot in convincing management that we can release the code under an open source license. Most researchers don't even release code, which means that the restrictions of the GPL don't apply, but also that scientific collaboration and the software ecosystem both pay a hefty price. So yes, LGPL helps a lot. And please, don't take the comments as pressure (even though they probably are, unwillingly), or an ideological position. As Robert pointed out, you own your work, and we should all be grateful to you for sharing it. I am just giving the rationale behind my position not to use copyleft licenses for my own work (if you are interested in the LGPL vs BSD discussion, feel free to ask, but it seems to me off topic with regards to your question). Cheers, Gaël From tiago at forked.de Fri Dec 4 09:22:24 2009 From: tiago at forked.de (Tiago de Paula Peixoto) Date: Fri, 04 Dec 2009 15:22:24 +0100 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: <20091203223853.GB11177@phare.normalesup.org> References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <4B182705.5070506@american.edu> <20091203223853.GB11177@phare.normalesup.org> Message-ID: On 12/03/2009 11:38 PM, Gael Varoquaux wrote: > So yes, LGPL helps a lot. I'm considering moving graph-tool to LGPL, since it indeed seems to make more sense for libraries and loadable modules... 
Note that I'm not dogmatic about copyleft: My modifications of the graphml reader/writer used in graph-tool made it into the BGL, with the BSD-like Boost licence... I would make this exception again for other code, if there is interest from the Boost people, or from other projects. But I'll keep it copyleft by default, since I just agree more with this concept. Cheers, Tiago -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 262 bytes Desc: OpenPGP digital signature URL:
From william.ratcliff at gmail.com Fri Dec 4 09:45:51 2009 From: william.ratcliff at gmail.com (william ratcliff) Date: Fri, 4 Dec 2009 09:45:51 -0500 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <4B182705.5070506@american.edu> <20091203223853.GB11177@phare.normalesup.org> Message-ID: <827183970912040645y565f6962v13a7ba3917c91de0@mail.gmail.com> Thank you very much for the reconsideration! On Fri, Dec 4, 2009 at 9:22 AM, Tiago de Paula Peixoto wrote: > On 12/03/2009 11:38 PM, Gael Varoquaux wrote: > > So yes, LGPL helps a lot. > > I'm considering moving graph-tool to LGPL, since it indeed seems to make > more sense for libraries and loadable modules... > > Note that I'm not dogmatic about copyleft: My modifications of the > graphml reader/writer used in graph-tool made it into the BGL, with the > BSD-like Boost licence... I would make this exception again for other > code, if there is interest from the Boost people, or from other > projects. But I'll keep it copyleft by default, since I just agree more > with this concept. > > Cheers, > Tiago > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From cournape at gmail.com Fri Dec 4 10:29:59 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 5 Dec 2009 00:29:59 +0900 Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation In-Reply-To: References: <4ADFB98F.6020108@american.edu> <4B17D43B.2040204@american.edu> <4B182705.5070506@american.edu> Message-ID: <5b8d13220912040729l53116413p9abc8707391da32f@mail.gmail.com> On Fri, Dec 4, 2009 at 7:23 AM, Tiago de Paula Peixoto wrote: > On 12/03/2009 10:00 PM, Alan G Isaac wrote: >> However I still believe that you miscast the >> issues. You worry unrealistically that your >> code could be modified by someone who will refuse >> to share the modifications, and you do not worry >> enough about licensing your code in a way that makes >> it useless to the bulk of the community you announced >> it to. (I.e., the SciPy community.) > > The issue of which is the best type of license aside, do you think that > switching to LGPL would make it less "useless" to the SciPy community, > or does it need to be absolutely non-copyleft, BSD-like? I don't know the position of the SciPy community (if there is any), but I think any open source license is appropriate. Although scipy itself is under the BSD, GPL announcements are certainly welcome. I believe that there is also a policy to encourage wrappers around existing code to follow the same license, but again, that is not a requirement. The only thing which can be said is that python code is more commonly distributed under BSD - that's why I myself use it for all my python-related code, even though I think GPL is "more fair" from a personal POV. Practically, the BSD license may be the one to choose if you don't have a strong reason for anything else, for the aforementioned reasons, but that's all that can be said.
David From nwagner at iam.uni-stuttgart.de Sat Dec 5 04:30:21 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 05 Dec 2009 10:30:21 +0100 Subject: [SciPy-User] test_lambertw.test_values ...segfaults Message-ID: Hi, Can someone reproduce the following segfault on opensuse11.2 test_lambertw.test_values ... Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7b27c7a in PyErr_Restore () from /usr/lib64/libpython2.6.so.1.0 (gdb) bt #0 0x00007ffff7b27c7a in PyErr_Restore () from /usr/lib64/libpython2.6.so.1.0 #1 0x00007ffff7b2772f in PyErr_SetString () from /usr/lib64/libpython2.6.so.1.0 #2 0x00007ffff7aacc24 in ?? () from /usr/lib64/libpython2.6.so.1.0 #3 0x00007ffff7ab7ad3 in PyFloat_AsDouble () from /usr/lib64/libpython2.6.so.1.0 #4 0x00007ffff7ad97b2 in PyString_Format () from /usr/lib64/libpython2.6.so.1.0 #5 0x00007ffff7a9e286 in ?? () from /usr/lib64/libpython2.6.so.1.0 #6 0x00007ffff7a9eb8e in PyNumber_Remainder () from /usr/lib64/libpython2.6.so.1.0 #7 0x00007fffe51f4112 in __pyx_f_5scipy_7special_8lambertw_lambertw_scalar (__pyx_v_z= ..., __pyx_v_k=, __pyx_v_tol=) at scipy/special/lambertw.c:1078 #8 0x00007fffe51f298c in __pyx_f_5scipy_7special_8lambertw__apply_func_to_1d_vec ( __pyx_v_args=, __pyx_v_dimensions=0x3eda660, __pyx_v_steps= 0x3f3c3b0, __pyx_v_func=0x297a2873626120) at scipy/special/lambertw.c:1160 #9 0x00007ffff60d1309 in PyUFunc_GenericFunction (self=, args=, kwds=0x0, mps=) at numpy/core/src/umath/ufunc_object.c:2054 #10 0x00007ffff60d1eb3 in ufunc_generic_call (self=0x2050450, args=, kwds=0x0) at numpy/core/src/umath/ufunc_object.c:3516 #11 0x00007ffff7a9de32 in PyObject_Call () from /usr/lib64/libpython2.6.so.1.0 #12 0x00007fffe51f2db0 in __pyx_pf_5scipy_7special_8lambertw_lambertw ( __pyx_self=, __pyx_args=, __pyx_kwds=) at scipy/special/lambertw.c:1272 #13 0x00007ffff7b1abf9 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0 #14 0x00007ffff7b1f54e in PyEval_EvalCodeEx () from 
/usr/lib64/libpython2.6.so.1.0 #15 0x00007ffff7abd7f2 in ?? () from /usr/lib64/libpython2.6.so.1.0 #16 0x00007ffff7a9de32 in PyObject_Call () from /usr/lib64/libpython2.6.so.1.0 Nils >>> scipy.__version__ '0.8.0.dev6128' >>> numpy.__version__ '1.5.0.dev7878' Linux linux-mogv 2.6.31.5-0.1-default #1 SMP 2009-10-26 15:49:03 +0100 x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.4.1 [gcc-4_4-branch revision 150839] From tpk at kraussfamily.org Sat Dec 5 09:18:16 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 5 Dec 2009 06:18:16 -0800 (PST) Subject: [SciPy-User] [SciPy-user] ANN: upfirdn 0.1.0 Message-ID: <26656005.post@talk.nabble.com> ANNOUNCEMENT I am pleased to announce the initial release of "upfirdn." This package provides an efficient polyphase FIR resampler object (SWIG-ed C++) and some python wrappers. https://opensource.motorola.com/sf/projects/upfirdn MOTIVATION As a long time user of MATLAB and the Signal Processing Toolbox, I have missed an "upfirdn" analogue in numpy / scipy since I switched over to python a couple years ago. I've been looking for a way to contribute to the wider numpy / scipy community since I love these tools and appreciate all the efforts of those who have developed them. Since we have the polyphase resampling functionality within Motorola and I use it for my work, I thought it best to go the "official route" and get approval from my company to publish it under a BSD compatible license. NOTES TO USERS AND REGARDING INSTALL It is my hope that others find this functionality useful. Suggestions for improvements or bug reports are welcome. As installation is an area that I am very green, I recommend only installing it locally for now, e.g. python setup.py build_ext -i or even just make I'd like to spend some time learning more about distutils and release a version with a more user-friendly install. 
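For readers new to the term, a polyphase FIR resampler computes the classic upsample-filter-downsample cascade efficiently. A naive NumPy reference for that cascade (an illustrative sketch only: the function name is made up here, and this is not the package's API):

```python
import numpy as np

def naive_upfirdn(h, x, up, down):
    """Reference upsample-filter-downsample cascade: zero-stuff x by
    `up`, convolve with the FIR taps `h`, keep every `down`-th sample.
    A polyphase implementation computes only the samples that are kept,
    which is what makes the real resampler efficient."""
    xup = np.zeros(len(x) * up)
    xup[::up] = x                 # insert up-1 zeros between samples
    y = np.convolve(h, xup)       # plain FIR filtering
    return y[::down]              # decimate

# resample a ramp by a factor of 3/2 with a crude averaging filter
x = np.arange(8.0)
h = np.ones(3) / 3.0
y = naive_upfirdn(h, x, up=3, down=2)   # 13 output samples
```

A polyphase implementation produces the same output while skipping both the multiplications by stuffed zeros and the outputs that decimation would discard, for roughly an up*down factor less work.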
-- View this message in context: http://old.nabble.com/ANN%3A-upfirdn-0.1.0-tp26656005p26656005.html Sent from the Scipy-User mailing list archive at Nabble.com.
From jsalvati at u.washington.edu Sun Dec 6 04:32:50 2009 From: jsalvati at u.washington.edu (John Salvatier) Date: Sun, 6 Dec 2009 01:32:50 -0800 Subject: [SciPy-User] BSD licensed two point boundary value problem solver scikit released Message-ID: <113e17f20912060132p96433c5ia34c81bd22642c9f@mail.gmail.com> Hello all, I have put together scikits.bvp_solver, which wraps a Fortran boundary value problem solver compatible with the BSD license. I feel the package is basically complete and well developed, so I am releasing a beta version (on pypi) for the community to try. The solver is based on a modified BVP_SOLVER (http://cs.smu.ca/~muir/BVP_SOLVER_Webpage.shtml) that uses LAPACK instead of COLROW and thus is compatible with the BSD license. If you are interested in solving boundary value problems numerically, please try the package and give me feedback so the package can be improved. There is a well developed tutorial and several examples at http://packages.python.org/scikits.bvp_solver/ . Please inform me if you encounter any difficulty. Best Regards, John Salvatier -------------- next part -------------- An HTML attachment was scrubbed... URL:
From denis-bz-py at t-online.de Tue Dec 8 09:42:14 2009 From: denis-bz-py at t-online.de (denis) Date: Tue, 08 Dec 2009 15:42:14 +0100 Subject: [SciPy-User] chebfun In-Reply-To: <1cd32cbb0911291742m1d3f3ab8r886009c7fba3cab9@mail.gmail.com> References: <1cd32cbb0911291742m1d3f3ab8r886009c7fba3cab9@mail.gmail.com> Message-ID: Nice links, Josef. Chebfun looks very high quality; do you think there'd be a market for a scipy wrapper?
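As background for the question: chebfun's core trick is to represent a smooth function by its Chebyshev interpolant and then do calculus on the coefficients. The idea can be sketched with numpy.polynomial.chebyshev (an illustration of the concept only; it assumes nothing about chebfun's actual interface):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# sample f at the Chebyshev points of the second kind on [-1, 1]
n = 32
x = np.cos(np.pi * np.arange(n + 1) / n)
f = np.exp(x) * np.sin(5 * x)

# degree-n Chebyshev series interpolating the n + 1 samples
c = C.chebfit(x, f, n)

# calculus happens directly on the coefficients
dc = C.chebder(c)                          # series for f'
err = np.max(np.abs(C.chebval(x, c) - f))  # near machine precision
```

For smooth functions the Chebyshev coefficients decay geometrically, so a modest degree already reproduces f, its derivative, and (via C.chebroots) its roots to high accuracy.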
From josef.pktd at gmail.com Tue Dec 8 10:05:48 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 8 Dec 2009 10:05:48 -0500 Subject: [SciPy-User] chebfun In-Reply-To: References: <1cd32cbb0911291742m1d3f3ab8r886009c7fba3cab9@mail.gmail.com> Message-ID: <1cd32cbb0912080705p6154e394tf124c42e9b118daa@mail.gmail.com> On Tue, Dec 8, 2009 at 9:42 AM, denis wrote: > Nice links, Josef. > Chebfun looks very high quality; do you think there'd be a market > for a scipy wrapper ? Since it is written in matlab, it would be more a candidate for translation (and inspiration) than for wrapping. Chuck and Anne might provide similar functionality, but I don't know much about it and found the webpage especially useful for its good documentation. Being written by mathematicians specialized in this, it looks very good. I haven't tried it out yet, because the matlab version on my "play" computer is too old, and I don't have a use case for it right now. Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >
From charlesr.harris at gmail.com Tue Dec 8 10:47:59 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 8 Dec 2009 08:47:59 -0700 Subject: [SciPy-User] chebfun In-Reply-To: <1cd32cbb0912080705p6154e394tf124c42e9b118daa@mail.gmail.com> References: <1cd32cbb0911291742m1d3f3ab8r886009c7fba3cab9@mail.gmail.com> <1cd32cbb0912080705p6154e394tf124c42e9b118daa@mail.gmail.com> Message-ID: On Tue, Dec 8, 2009 at 8:05 AM, wrote: > On Tue, Dec 8, 2009 at 9:42 AM, denis wrote: > > Nice links, Josef. > > Chebfun looks very high quality; do you think there'd be a market > > for a scipy wrapper ? > > Since it is written in matlab, it would be more a candidate for > translation (and inspiration) than for wrapping.
Chuck and Anne might > provide similar functionality, but I don't know much about it and > found the webpage especially useful for its good documentation. > > Trefethen is a Chebyshev fanatic, one of the enlightened. I heard him give a talk at the invitation of a sinc fanatic -- they are related -- and he made a good case for using functions sampled at the Chebyshev II points for quick and dirty solutions to boundary value problems. IIRC, the methods didn't preserve adjointness, which is a bit of a wart since the resulting eigenfunctions aren't orthogonal, but it looked like good fun. He has some books out; here is one: http://www.ec-securehost.com/SIAM/SE10.html . No doubt some of these algorithms could find a place in scipy. In any case, what Anne and I are looking at isn't as extensive as Chebfun; Chebfun looks more like a lite replacement for parts of numpy and scipy (call it chebpy) that combines special functions with zero finding, differentiation, etc. Chuck > Being written by mathematicians specialized in this, it looks very > good. I haven't tried it out yet, because the matlab version on my > "play" computer is too old, and I don't have a use case for it right > now. > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:
From joschu at caltech.edu Tue Dec 8 19:10:35 2009 From: joschu at caltech.edu (John Schulman) Date: Tue, 8 Dec 2009 16:10:35 -0800 Subject: [SciPy-User] ubuntu's PYTHONPATH and installing scipy Message-ID: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com> Hi, There's an issue with ubuntu PYTHONPATH that causes problems for scipy installation. I eventually found a workaround, but I want to point out the problem. PYTHONPATH is different when sudoing, so that packages installed by aptitude appear first in the path.* The svn version of scipy imports numpy in the setup script.
"python setup.py build" works as it should, but then "sudo python setup.py install" gives the error "ImportError: cannot import name get_info", since it's trying to use numpy1.3 (which was a dependency for some package I installed). The workaround is simple: sudo PYTHONPATH=/usr/local/lib/python2.6/dist-packages: python setup.py install *In ubuntu, packages that you install through the apt package manager goes into /usr/lib/python2.6/dist-packages, but python setup.py install puts stuff in /usr/local/lib/python2.6/dist-packages. There's a line in /etc/profile (I didn't put it there--it must be Ubuntu): export PYTHONPATH=/usr/local/lib/python2.6/dist-packages:$PYTHONPATH. I guess this is a kludge to fix the problem. From dsdale24 at gmail.com Tue Dec 8 20:31:24 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 8 Dec 2009 20:31:24 -0500 Subject: [SciPy-User] ubuntu's PYTHONPATH and installing scipy In-Reply-To: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com> References: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com> Message-ID: On Tue, Dec 8, 2009 at 7:10 PM, John Schulman wrote: > Hi, > There's an issue with ubuntu PYTHONPATH that causes problems for scipy > installation. I eventually found a workaround, but I want to point out > the problem. > > PYTHONPATH is different when sudoing, so that packages installed by > aptitute appear first in the path.* > > The svn version scipy imports numpy in the setup script. "python > setup.py build" works as it should, but then "sudo python setup.py > install" gives the error "ImportError: cannot import name get_info", > since it's trying to use numpy1.3 (which was a dependency for some > package I installed). 
> > The workaround is simple: > sudo PYTHONPATH=/usr/local/lib/python2.6/dist-packages: python setup.py install > > *In ubuntu, packages that you install through the apt package manager > goes into /usr/lib/python2.6/dist-packages, but python setup.py > install puts stuff in /usr/local/lib/python2.6/dist-packages. There's > a line in /etc/profile (I didn't put it there--it must be Ubuntu): > export PYTHONPATH=/usr/local/lib/python2.6/dist-packages:$PYTHONPATH. > I guess this is a kludge to fix the problem. You can also use visudo to edit your sudo options, so that your PYTHONPATH is carried over. Darren From charlesr.harris at gmail.com Wed Dec 9 02:06:30 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 9 Dec 2009 00:06:30 -0700 Subject: [SciPy-User] ubuntu's PYTHONPATH and installing scipy In-Reply-To: References: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com> Message-ID: On Tue, Dec 8, 2009 at 6:31 PM, Darren Dale wrote: > On Tue, Dec 8, 2009 at 7:10 PM, John Schulman wrote: > > Hi, > > There's an issue with ubuntu PYTHONPATH that causes problems for scipy > > installation. I eventually found a workaround, but I want to point out > > the problem. > > > > PYTHONPATH is different when sudoing, so that packages installed by > > aptitute appear first in the path.* > > > > The svn version scipy imports numpy in the setup script. "python > > setup.py build" works as it should, but then "sudo python setup.py > > install" gives the error "ImportError: cannot import name get_info", > > since it's trying to use numpy1.3 (which was a dependency for some > > package I installed). > > > > The workaround is simple: > > sudo PYTHONPATH=/usr/local/lib/python2.6/dist-packages: python setup.py > install > > > > *In ubuntu, packages that you install through the apt package manager > > goes into /usr/lib/python2.6/dist-packages, but python setup.py > > install puts stuff in /usr/local/lib/python2.6/dist-packages. 
There's > > a line in /etc/profile (I didn't put it there--it must be Ubuntu): > > export PYTHONPATH=/usr/local/lib/python2.6/dist-packages:$PYTHONPATH. > > I guess this is a kludge to fix the problem. > > You can also use visudo to edit your sudo options, so that your > PYTHONPATH is carried over. > > Oh, that's too clever. I used locate to find all the numpy installations and deleted those with the wrong date ;) There are actually *three* different directories in which python stuff gets installed. I haven't a clue why. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Wed Dec 9 11:41:46 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Wed, 9 Dec 2009 11:41:46 -0500 Subject: [SciPy-User] ubuntu's PYTHONPATH and installing scipy In-Reply-To: References: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com> Message-ID: On Wed, Dec 9, 2009 at 2:06 AM, Charles R Harris wrote: > > > On Tue, Dec 8, 2009 at 6:31 PM, Darren Dale wrote: >> >> On Tue, Dec 8, 2009 at 7:10 PM, John Schulman wrote: >> > Hi, >> > There's an issue with ubuntu PYTHONPATH that causes problems for scipy >> > installation. I eventually found a workaround, but I want to point out >> > the problem. >> > >> > PYTHONPATH is different when sudoing, so that packages installed by >> > aptitute appear first in the path.* >> > >> > The svn version scipy imports numpy in the setup script. "python >> > setup.py build" works as it should, but then "sudo python setup.py >> > install" gives the error "ImportError: cannot import name get_info", >> > since it's trying to use numpy1.3 (which was a dependency for some >> > package I installed). 
>> > >> > The workaround is simple: >> > sudo PYTHONPATH=/usr/local/lib/python2.6/dist-packages: python setup.py >> > install >> > >> > *In ubuntu, packages that you install through the apt package manager >> > goes into /usr/lib/python2.6/dist-packages, but python setup.py >> > install puts stuff in /usr/local/lib/python2.6/dist-packages. There's >> > a line in /etc/profile (I didn't put it there--it must be Ubuntu): >> > export PYTHONPATH=/usr/local/lib/python2.6/dist-packages:$PYTHONPATH. >> > I guess this is a kludge to fix the problem. >> >> You can also use visudo to edit your sudo options, so that your >> PYTHONPATH is carried over. >> > > Oh, that's too clever. I used locate to find all the numpy installations and > deleted those with the wrong date ;) There are actually *three* different > directories in which python stuff gets installed. I haven't a clue why. Debian's python policy is described at http://www.debian.org/doc/packaging-manuals/python-policy/ , but I don't think that page reflects changes in debian unstable/ubuntu. I don't understand the reason for using dist-packages instead of site-packages. And now manually installed packages go into /usr/local instead of /usr, but recently they changed things so /usr appears higher in the python search path, so it is difficult to manually upgrade software provided by the package manager. Maybe the debian/ubuntu maintainers were trying to make it more difficult for users to break packages installed by the package manager? In my opinion, the policy is unintuitive and in the way.
From vanforeest at gmail.com Wed Dec 9 15:24:20 2009 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 9 Dec 2009 21:24:20 +0100 Subject: [SciPy-User] maping a float to a function value Message-ID: Hi, I assume that the problem below has been resolved in a clever, pythonic way, but I don't know how... For numerical purposes I want to map floats to other floats, and store the results.
(In math's language, I want to store the mapping x -> f(x), for x in the reals). One natural way would be to use a dict. However, using floats as keys is not particularly smart, as the real numbers (the things I want to deal with) do not map uniquely to floats. One way to resolve this is to subclass dict, like so:

class Function(dict):
    def __init__(self):
        dict.__init__(self)
    def __getitem__(self, x):
        x = round(100*x)
        return dict.__getitem__(self, x)
    def __setitem__(self, x, value):
        x = round(100*x)
        dict.__setitem__(self, x, value)

However, using a class like this has obvious drawbacks, for instance, now I have to memorize that the internal precision is up to 0.01 (and I tend to forget stuff like this within a week or so). Of course, I can take 1e-13 or so as the internal precision, but I still don't like this approach; I have the feeling that much smarter/more robust ways exist to resolve this problem. Thanks bye Nicky
From josef.pktd at gmail.com Wed Dec 9 15:26:04 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 9 Dec 2009 15:26:04 -0500 Subject: [SciPy-User] menagerie of continuous time stochastic processes Message-ID: <1cd32cbb0912091226l309d05bdgadd7353da3b264e8@mail.gmail.com> I needed to get more familiar with stochastic processes in continuous time, so I started to write a "menagerie". Currently mainly simulation and later estimation as far as possible. The attached file contains processes for jump diffusion, stochastic volatility, and stochastic time, mostly translated from matlab. Not tested, no guarantee of correctness, but many illustrative graphs. (part 1 has Brownian Motions, Ornstein-Uhlenbeck and similar, but is even messier) Is there something similar already in python somewhere?
Josef -------------- next part -------------- """ Diffusion 2: jump diffusion, stochastic volatility, stochastic time Created on Tue Dec 08 15:03:49 2009 Author: josef-pktd following Meucci License: BSD contains: CIRSubordinatedBrownian Heston IG JumpDiffusionKou JumpDiffusionMerton NIG VG References ---------- Attilio Meucci, Review of Discrete and Continuous Processes in Finance: Theory and Applications Bloomberg Portfolio Research Paper No. 2009-02-CLASSROOM July 1, 2009 http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1373102 this is currently mostly a translation from matlab of http://www.mathworks.com/matlabcentral/fileexchange/23554-review-of-discrete-and-continuous-processes-in-finance license BSD: Copyright (c) 2008, Attilio Meucci All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. TODO: * vectorize where possible * which processes are exactly simulated by finite differences ? * include or exclude (now) the initial observation ? * convert to and merge with diffusion.py (part 1 of diffusions) * which processes can be easily estimated ? loglike or characteristic function ? * tests ? check for possible index errors (random indices), graphs look ok * adjust notation, variable names, more consistent, more pythonic * delete a few unused lines, cleanup * docstrings """ import numpy as np #from scipy import stats # currently only uses np.random import matplotlib.pyplot as plt class JumpDiffusionMerton(object): ''' Example ------- mu=.00 # deterministic drift sig=.20 # Gaussian component l=3.45 # Poisson process arrival rate a=0 # drift of log-jump D=.2 # st.dev of log-jump X = JumpDiffusionMerton().simulate(mu,sig,lambd,a,D,ts,nrepl) plt.figure() plt.plot(X.T) plt.title('Merton jump-diffusion') ''' def __init__(self): pass def simulate(self, m,s,lambd,a,D,ts,nrepl): T = ts[-1] # time points # simulate number of jumps n_jumps = np.random.poisson(lambd*T, size=(nrepl, 1)) jumps=[] nobs=len(ts) jumps=np.zeros((nrepl,nobs)) for j in range(nrepl): # simulate jump arrival time t = T*np.random.rand(n_jumps[j])#,1) #uniform t = np.sort(t,0) # simulate jump size S = a + D*np.random.randn(n_jumps[j],1) # put things together CumS = np.cumsum(S) jumps_ts = np.zeros(nobs) for n in range(nobs): Events = np.sum(t<=ts[n])-1 #print n, Events, CumS.shape, 
jumps_ts.shape jumps_ts[n]=0 if Events: jumps_ts[n] = CumS[Events] #jumps = np.column_stack((jumps, jumps_ts)) #maybe wrong transl jumps[j,:] = jumps_ts D_Diff = np.zeros((nrepl,nobs)) for k in range(nobs): Dt=ts[k] if k>1: Dt=ts[k]-ts[k-1] D_Diff[:,k]=m*Dt + s*np.sqrt(Dt)*np.random.randn(nrepl) x = np.hstack((np.zeros((nrepl,1)),np.cumsum(D_Diff,1)+jumps)) return x class JumpDiffusionKou(object): def __init__(self): pass def simulate(self, m,s,lambd,p,e1,e2,ts,nrepl): T=ts[-1] # simulate number of jumps N = np.random.poisson(lambd*T,size =(nrepl,1)) jumps=[] nobs=len(ts) jumps=np.zeros((nrepl,nobs)) for j in range(nrepl): # simulate jump arrival time t=T*np.random.rand(N[j]) t=np.sort(t) # simulate jump size ww = np.random.binomial(1, p, size=(N[j])) S = ww * np.random.exponential(e1, size=(N[j])) - \ (1-ww) * np.random.exponential(e2, N[j]) # put things together CumS = np.cumsum(S) jumps_ts = np.zeros(nobs) for n in range(nobs): Events = sum(t<=ts[n])-1 jumps_ts[n]=0 if Events: jumps_ts[n]=CumS[Events] jumps[j,:] = jumps_ts D_Diff = np.zeros((nrepl,nobs)) for k in range(nobs): Dt=ts[k] if k>1: Dt=ts[k]-ts[k-1] D_Diff[:,k]=m*Dt + s*np.sqrt(Dt)*np.random.normal(size=nrepl) x = np.hstack((np.zeros((nrepl,1)),np.cumsum(D_Diff,1)+jumps)) return x class VG(object): '''variance gamma process ''' def __init__(self): pass def simulate(self, m,s,kappa,ts,nrepl): T=len(ts) dXs = np.zeros((nrepl,T)) for t in range(T): dt=ts[1]-0 if t>1: dt = ts[t]-ts[t-1] #print dt/kappa #TODO: check parameterization of gamrnd, checked looks same as np d_tau = kappa * np.random.gamma(dt/kappa,1.,size=(nrepl)) #print s*np.sqrt(d_tau) # this raises exception: #dX = stats.norm.rvs(m*d_tau,(s*np.sqrt(d_tau))) # np.random.normal requires scale >0 dX = np.random.normal(loc=m*d_tau, scale=1e-6+s*np.sqrt(d_tau)) dXs[:,t] = dX x = np.cumsum(dXs,1) return x class IG(object): '''inverse-Gaussian ??? 
used by NIG ''' def __init__(self): pass def simulate(self, l,m,nrepl): N = np.random.randn(nrepl,1) Y = N**2 X = m + (.5*m*m/l)*Y - (.5*m/l)*np.sqrt(4*m*l*Y+m*m*(Y**2)) U = np.random.rand(nrepl,1) ind = U>m/(X+m) X[ind] = m*m/X[ind] return X.ravel() class NIG(object): '''normal-inverse-Gaussian ''' def __init__(self): pass def simulate(self, th,k,s,ts,nrepl): T = len(ts) DXs = np.zeros((nrepl,T)) for t in range(T): Dt=ts[1]-0 if t>1: Dt=ts[t]-ts[t-1] l = 1/k*(Dt**2) m = Dt DS = IG().simulate(l,m,nrepl) N = np.random.randn(nrepl) DX = s*N*np.sqrt(DS) + th*DS #print DS.shape, DX.shape, DXs.shape DXs[:,t] = DX x = np.cumsum(DXs,1) return x class Heston(object): '''Heston Stochastic Volatility ''' def __init__(self): pass def simulate(self, m, kappa, eta,lambd,r, ts, nrepl,tratio=1.): T = ts[-1] nobs = len(ts) dt = np.zeros(nobs) #/tratio dt[0] = ts[0]-0 dt[1:] = np.diff(ts) DXs = np.zeros((nrepl,nobs)) dB_1 = np.sqrt(dt) * np.random.randn(nrepl,nobs) dB_2u = np.sqrt(dt) * np.random.randn(nrepl,nobs) dB_2 = r*dB_1 + np.sqrt(1-r**2)*dB_2u vt = eta*np.ones(nrepl) v=[] dXs = np.zeros((nrepl,nobs)) vts = np.zeros((nrepl,nobs)) for t in range(nobs): dv = kappa*(eta-vt)*dt[t]+ lambd*np.sqrt(vt)*dB_2[:,t] dX = m*dt[t] + np.sqrt(vt*dt[t]) * dB_1[:,t] vt = vt + dv vts[:,t] = vt dXs[:,t] = dX x = np.cumsum(dXs,1) return x, vts class CIRSubordinatedBrownian(object): '''CIR subordinated Brownian Motion ''' def __init__(self): pass def simulate(self, m, kappa, T_dot,lambd,sigma, ts, nrepl): T = ts[-1] nobs = len(ts) dtarr = np.zeros(nobs) #/tratio dtarr[0] = ts[0]-0 dtarr[1:] = np.diff(ts) DXs = np.zeros((nrepl,nobs)) dB = np.sqrt(dtarr) * np.random.randn(nrepl,nobs) yt = 1. dXs = np.zeros((nrepl,nobs)) dtaus = np.zeros((nrepl,nobs)) y = np.zeros((nrepl,nobs)) for t in range(nobs): dt = dtarr[t] dy = kappa*(T_dot-yt)*dt + lambd*np.sqrt(yt)*dB[:,t] yt = np.maximum(yt+dy,1e-10) # keep away from zero ? 
        dtau = np.maximum(yt*dt, 1e-6)
        dX = np.random.normal(loc=m*dtau, scale=sigma*np.sqrt(dtau))
        y[:,t] = yt
        dtaus[:,t] = dtau
        dXs[:,t] = dX
    tau = np.cumsum(dtaus, 1)
    x = np.cumsum(dXs, 1)
    return x, tau, y

def schout2contank(a, b, d):
    th = d*b/np.sqrt(a**2 - b**2)
    k = 1/(d*np.sqrt(a**2 - b**2))
    s = np.sqrt(d/np.sqrt(a**2 - b**2))
    return th, k, s

if __name__ == '__main__':

    #Merton Jump Diffusion
    #^^^^^^^^^^^^^^^^^^^^^
    # grid of time values at which the process is evaluated
    # ("0" will be added, too)
    nobs = 252.  #1000 #252.
    ts = np.linspace(1./nobs, 1., nobs)
    nrepl = 5        # number of simulations
    mu = .010        # deterministic drift
    sigma = .020     # Gaussian component
    lambd = 3.45*10  # Poisson process arrival rate
    a = 0            # drift of log-jump
    D = .2           # st.dev of log-jump
    jd = JumpDiffusionMerton()
    x = jd.simulate(mu, sigma, lambd, a, D, ts, nrepl)
    plt.figure()
    plt.plot(x.T)  #Todo
    plt.title('Merton jump-diffusion')

    sigma = 0.2
    lambd = 3.45
    x = jd.simulate(mu, sigma, lambd, a, D, ts, nrepl)
    plt.figure()
    plt.plot(x.T)  #Todo
    plt.title('Merton jump-diffusion')

    #Kou jump diffusion
    #^^^^^^^^^^^^^^^^^^
    mu = .0       # deterministic drift
    lambd = 4.25  # Poisson process arrival rate
    p = .5        # prob. of up-jump
    e1 = .2       # parameter of up-jump
    e2 = .3       # parameter of down-jump
    sig = .2      # Gaussian component
    x = JumpDiffusionKou().simulate(mu, sig, lambd, p, e1, e2, ts, nrepl)
    plt.figure()
    plt.plot(x.T)  #Todo
    plt.title('double exponential (Kou jump diffusion)')

    #variance-gamma
    #^^^^^^^^^^^^^^
    mu = .1     # deterministic drift in subordinated Brownian motion
    kappa = 1.  #10. #1 # inverse for gamma shape parameter
    sig = 0.5   #.2 # s.dev in subordinated Brownian motion
    x = VG().simulate(mu, sig, kappa, ts, nrepl)
    plt.figure()
    plt.plot(x.T)  #Todo
    plt.title('variance gamma')

    #normal-inverse-Gaussian
    #^^^^^^^^^^^^^^^^^^^^^^^
    # (Schoutens notation)
    al = 2.1
    be = 0
    de = 1
    # convert parameters to Cont-Tankov notation
    th, k, s = schout2contank(al, be, de)
    x = NIG().simulate(th, k, s, ts, nrepl)
    plt.figure()
    plt.plot(x.T)  #Todo x-axis
    plt.title('normal-inverse-Gaussian')

    #Heston Stochastic Volatility
    #^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    m = .0
    kappa = .6  # 2*Kappa*Eta>Lambda^2
    eta = .3**2
    lambd = .25
    r = -.7
    T = 20.
    nobs = 252.*T  #1000 #252.
    tsh = np.linspace(T/nobs, T, nobs)
    x, vts = Heston().simulate(m, kappa, eta, lambd, r, tsh, nrepl, tratio=20.)
    plt.figure()
    plt.plot(x.T)
    plt.title('Heston Stochastic Volatility')
    plt.figure()
    plt.plot(np.sqrt(vts).T)
    plt.title('Heston Stochastic Volatility - CIR Vol.')
    plt.figure()
    plt.subplot(2, 1, 1)
    plt.plot(x[0])
    plt.title('Heston Stochastic Volatility process')
    plt.subplot(2, 1, 2)
    plt.plot(np.sqrt(vts[0]))
    plt.title('CIR Volatility')

    #CIR subordinated Brownian
    #^^^^^^^^^^^^^^^^^^^^^^^^^
    m = .1
    sigma = .4
    kappa = .6  # 2*Kappa*T_dot>Lambda^2
    T_dot = 1
    lambd = 1
    #T = 252*10
    #dt = 1/252
    #nrepl = 2
    T = 10.
    nobs = 252.*T  #1000 #252.
    tsh = np.linspace(T/nobs, T, nobs)
    x, tau, y = CIRSubordinatedBrownian().simulate(m, kappa, T_dot, lambd, sigma, tsh, nrepl)
    plt.figure()
    plt.plot(tsh, x.T)
    plt.title('CIRSubordinatedBrownian process')
    plt.figure()
    plt.plot(tsh, y.T)
    plt.title('CIRSubordinatedBrownian - CIR')
    plt.figure()
    plt.plot(tsh, tau.T)
    plt.title('CIRSubordinatedBrownian - stochastic time')
    plt.figure()
    plt.subplot(2, 1, 1)
    plt.plot(tsh, x[0])
    plt.title('CIRSubordinatedBrownian process')
    plt.subplot(2, 1, 2)
    plt.plot(tsh, y[0], label='CIR')
    plt.plot(tsh, tau[0], label='stoch. time')
    plt.legend(loc='upper left')
    plt.title('CIRSubordinatedBrownian')

    plt.show()

From charlesr.harris at gmail.com  Wed Dec  9 15:46:39 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 9 Dec 2009 13:46:39 -0700
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

On Wed, Dec 9, 2009 at 1:24 PM, nicky van foreest wrote:

> Hi,
>
> I assume that the problem below has been resolved in a clever, pythonic, way, but I don't know how...
>
> For numerical purposes I want to map floats to other floats, and store the results. (In math's language, I want to store the mapping x \rightarrow f(x), for x in the reals). One natural way would be to use a dict. However, using floats as keys is not particularly smart as the real numbers (the things I want to deal with) do not map uniquely to floats. One way to resolve this is to overload the dict object to a class, like so:
>
> class Function(dict):
>     def __init__(self):
>         dict.__init__(self)
>     def __getitem__(self, x):
>         x = round(100*x)
>         return dict.__getitem__(self, x)
>     def __setitem__(self, x, value):
>         x = round(100*x)
>         dict.__setitem__(self, x, value)
>
> However, using a class like this has obvious drawbacks, for instance, now I have to memorize that the internal precision is up to 0.01 (and I tend to forget stuff like this within a week or so). Of course, I can take 1e-13 or so as the internal precision, but I still don't like this approach; I have the feeling that much smarter/more robust ways exist to resolve this problem.

This sounds like a proposed solution to a problem. What is the actual problem that this is supposed to solve?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vanforeest at gmail.com  Wed Dec  9 16:09:12 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Wed, 9 Dec 2009 22:09:12 +0100
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

Sorry, perhaps I was less clear than I thought. A new attempt... I want to store floats as keys in a dict, but this is not a smart idea. To show the problem:

In [1]: a = {}

In [6]: import math

In [9]: a[1/3.] = 4

In [10]: a[1./(math.sqrt(3)*math.sqrt(3))]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/home/nicky/ in ()
KeyError: 0.33333333333333337

What would be a smart way to use floats as keys in a dict, in such a way that the above problem does not occur?

Nicky

2009/12/9 Charles R Harris :
>
> On Wed, Dec 9, 2009 at 1:24 PM, nicky van foreest wrote:
>>
>> Hi,
>>
>> I assume that the problem below has been resolved in a clever, pythonic, way, but I don't know how...
>>
>> For numerical purposes I want to map floats to other floats, and store the results. (In math's language, I want to store the mapping x \rightarrow f(x), for x in the reals). One natural way would be to use a dict. However, using floats as keys is not particularly smart as the real numbers (the things I want to deal with) do not map uniquely to floats. One way to resolve this is to overload the dict object to a class, like so:
>>
>> class Function(dict):
>>     def __init__(self):
>>         dict.__init__(self)
>>     def __getitem__(self, x):
>>         x = round(100*x)
>>         return dict.__getitem__(self, x)
>>     def __setitem__(self, x, value):
>>         x = round(100*x)
>>         dict.__setitem__(self, x, value)
>>
>> However, using a class like this has obvious drawbacks, for instance, now I have to memorize that the internal precision is up to 0.01 (and I tend to forget stuff like this within a week or so).
>> Of course, I can take 1e-13 or so as the internal precision, but I still don't like this approach; I have the feeling that much smarter/more robust ways exist to resolve this problem.
>
> This sounds like a proposed solution to a problem. What is the actual problem that this is supposed to solve?
>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From charlesr.harris at gmail.com  Wed Dec  9 16:13:49 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 9 Dec 2009 14:13:49 -0700
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

On Wed, Dec 9, 2009 at 2:09 PM, nicky van foreest wrote:

> Sorry, perhaps I was less clear than I thought. A new attempt... I want to store floats as keys in a dict, but this is not a smart idea. To show the problem:
>
> In [1]: a = {}
>
> In [6]: import math
>
> In [9]: a[1/3.] = 4
>
> In [10]: a[1./(math.sqrt(3)*math.sqrt(3))]
> ---------------------------------------------------------------------------
> KeyError                                  Traceback (most recent call last)
> /home/nicky/ in ()
> KeyError: 0.33333333333333337
>
> What would be a smart way to use floats as keys in a dict, in such a way that the above problem does not occur?

The question is: why do you want to use floats as keys in a dictionary?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rob.clewley at gmail.com  Wed Dec  9 16:42:40 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 9 Dec 2009 16:42:40 -0500
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

> The question is: why do you want to use floats as keys in a dictionary?
>
> Chuck

As the OP said, the goal is to store a general function as a set of (x, f(x)) pairs to look up later (i.e. to store the mapping).
The issue is that f(x) may well not be known as an explicit function, e.g. the values may have been generated from a numerical calculation (e.g. a solution to an ODE). One good solution is to create a 1-d Curve class that automatically interpolates values not given in the definition by an array of (x, f(x)) pairs. This avoids the OP's problem with dicts except when x values are computed by an arithmetic op that puts them slightly outside the domain of the Curve class's known range of x values. I would catch that using allclose since it should generally be a very small error (so you wouldn't have to worry about remembering the precision). Class Curve could be achieved with this minor addition of allclose by extending the scipy.interpolate.interp1d class.

-Rob

From vanforeest at gmail.com  Wed Dec  9 16:57:58 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Wed, 9 Dec 2009 22:57:58 +0100
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

> The question is: why do you want to use floats as keys in a dictionary?

One argument is that I like it conceptually. The technical argument is as follows. I need the (numerical) solution of the integral equation

    gamma(x) = c(x) + \int_0^\infty gamma(x-y) G(y) dy,

for given functions c(x) and G(x). (There are some technical conditions on G and c such that I can prove that the integral equation has a solution.) Now I like to store the values gamma(x) as keys of x, as it feels natural. Moreover, I need gamma in a second integral equation. Sure I can store gamma as an array, but then I have to convert the index i to the key x, and I dislike this, as it is less elegant, and requires extra code.

I hope the above clarifies my point, but I am not quite sure...
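For what it's worth, the rounding-dict from earlier in the thread can be written so that the precision is stored on the instance rather than memorized; this is only a sketch, and the class name FloatKeyDict and its ndigits parameter are made up for illustration:

```python
import math


class FloatKeyDict(dict):
    """Dict keyed on floats quantized to a fixed number of digits.

    Keys that differ only by floating-point round-off collapse onto
    the same entry, so tiny representation errors no longer raise
    KeyError.
    """

    def __init__(self, ndigits=9):
        dict.__init__(self)
        self.ndigits = ndigits  # precision is recorded here, not memorized

    def _key(self, x):
        return round(x, self.ndigits)

    def __getitem__(self, x):
        return dict.__getitem__(self, self._key(x))

    def __setitem__(self, x, value):
        dict.__setitem__(self, self._key(x), value)


gamma = FloatKeyDict()
gamma[1/3.] = 4
print(gamma[1./(math.sqrt(3)*math.sqrt(3))])  # prints 4 instead of KeyError
```

The same caveat from the thread still applies: two x values closer together than 10**-ndigits will collide on one slot, so the quantization has to be chosen coarser than the grid spacing but finer than any genuine distinction between keys.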
Nicky

From peter.combs at berkeley.edu  Wed Dec  9 17:03:02 2009
From: peter.combs at berkeley.edu (Peter Combs)
Date: Wed, 9 Dec 2009 14:03:02 -0800
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

Since you don't have a problem with modifying classes, what about overriding the list class so that it a) stores information about the bounds of your integral and the spacing of the points, and b) uses those to automatically convert from a value passed into __getitem__ to the nearest point for which it has data?

Peter Combs
peter.combs at berkeley.edu

On Dec 9, 2009, at 1:57 PM, nicky van foreest wrote:

>> The question is: why do you want to use floats as keys in a dictionary?
>
> One argument is that I like it conceptually. The technical argument is as follows. I need the (numerical) solution of the integral equation
>
>     gamma(x) = c(x) + \int_0^\infty gamma(x-y) G(y) dy,
>
> for given functions c(x) and G(x). (There are some technical conditions on G and c such that I can prove that the integral equation has a solution.) Now I like to store the values gamma(x) as keys of x, as it feels natural. Moreover, I need gamma in a second integral equation. Sure I can store gamma as an array, but then I have to convert the index i to the key x, and I dislike this, as it is less elegant, and requires extra code.
>
> I hope the above clarifies my point, but I am not quite sure...
> Nicky

From Chris.Barker at noaa.gov  Wed Dec  9 18:02:05 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 09 Dec 2009 15:02:05 -0800
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID: <4B202C6D.6060200@noaa.gov>

> I need the (numerical) solution of the integral equation
>
>     gamma(x) = c(x) + \int_0^\infty gamma(x-y) G(y) dy,
>
> for given functions c(x) and G(x). (There are some technical conditions on G and c such that I can prove that the integral equation has a solution.) Now I like to store the values gamma(x) as keys of x, as it feels natural. Moreover, I need gamma in a second integral equation. Sure I can store gamma as an array, but then I have to convert the index i to the key x, and I dislike this, as it is less elegant, and requires extra code.

However, I think it's a more correct solution -- as you've pointed out, real numbers do not map directly to floats, and, indeed, you are never going to have gamma(x) pre-calculated for all x in your range. Indeed, it's a bit absurd to try to have gamma(x) for all floats in your range, either. So you are going to have to compute and store a subset. As they are ordered, it sure makes sense to simply store them, and it's easy to calculate an index -- you could put a different api on it if you really want:

class gamma:

    def __getitem__(self, value):
        # compute index from value here
        return self._gamma[index]

but, as you are storing only a subset of your possible values, it seems that interpolation really is the best and most natural way to go anyway.

In fact, the above is simply a nearest neighbor interpolator already, as is your proposed dict with keys of rounded floats.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From peridot.faceted at gmail.com  Wed Dec  9 19:48:08 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 9 Dec 2009 19:48:08 -0500
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

2009/12/9 nicky van foreest :
>> The question is: why do you want to use floats as keys in a dictionary?
>
> One argument is that I like it conceptually. The technical argument is as follows. I need the (numerical) solution of the integral equation
>
>     gamma(x) = c(x) + \int_0^\infty gamma(x-y) G(y) dy,
>
> for given functions c(x) and G(x). (There are some technical conditions on G and c such that I can prove that the integral equation has a solution.) Now I like to store the values gamma(x) as keys of x, as it feels natural. Moreover, I need gamma in a second integral equation. Sure I can store gamma as an array, but then I have to convert the index i to the key x, and I dislike this, as it is less elegant, and requires extra code.
>
> I hope the above clarifies my point, but I am not quite sure...

I think you may be going about this the wrong way around. Solving integral equations is generally hard, so you should be thinking about the algorithm you will use. Whatever algorithm you choose will imply a representation of gamma(x), necessarily in some finite amount of space. I would worry less about how "natural" your representation is than about whether it suits how you want to solve the problem.

In this particular case, gamma(x) can be written entirely in terms of c(x) and values of gamma(z) for z from left to right (hopefully you have some sort of "initial conditions" that allow this).

IIRC, timeseries Date objects cannot be pickled. I'm now trying to do some parallel computation. Using IPython, pickling seems to be integral to parallel computing. I'm not sure about the python multiprocessing module yet.
My question is, how do I do parallel computation with timeseries data?

thanks,
-robert

From pgmdevlist at gmail.com  Thu Dec 10 09:27:04 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 10 Dec 2009 09:27:04 -0500
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com>
Message-ID: <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com>

On Dec 10, 2009, at 9:05 AM, Robert Ferrell wrote:
> IIRC, timeseries Date objects cannot be pickled.

Unfortunately, that's true. Gonna have to fix that. However, DateArrays and TimeSeries are picklable. You may want to put your Date in a single-element DateArray.

> I'm now trying to do some parallel computation. Using IPython, pickling seems to be integral to parallel computing. I'm not sure about the python multiprocessing module yet.
>
> My question is, how do I do parallel computation with timeseries data?

Well, you entered uncharted territories, sorry... Let me know how it goes

From ferrell at diablotech.com  Thu Dec 10 09:41:28 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 10 Dec 2009 07:41:28 -0700
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com> <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com>
Message-ID: <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com>

On Dec 10, 2009, at 7:27 AM, Pierre GM wrote:
>> IIRC, timeseries Date objects cannot be pickled.
>
> Unfortunately, that's true. Gonna have to fix that. However, DateArrays and TimeSeries are picklable. You may want to put your Date in a single-element DateArray

Great hint. I had just assumed DateArrays and TimeSeries couldn't be pickled because Dates can't be pickled. This makes it all much easier. I've been using str() on single Date instances, which has been working fine; a single-element DateArray is probably easier.

>> My question is, how do I do parallel computation with timeseries data?
>
> Well, you entered uncharted territories, sorry... Let me know how it goes

First I have to learn some basic Python multiprocessing stuff.

Thanks,
-robert

From pgmdevlist at gmail.com  Thu Dec 10 10:07:52 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 10 Dec 2009 10:07:52 -0500
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com> <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com> <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com>
Message-ID: <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com>

On Dec 10, 2009, at 9:41 AM, Robert Ferrell wrote:
> Great hint. I had just assumed DateArrays and TimeSeries couldn't be pickled because Dates can't be pickled. This makes it all much easier.

Note that there's a problem with pickling DateArrays: you lose the frequency at unpickling time. I gonna fix that ASAP. Pickling TimeSeries works quite fine, though (I used them a lot).
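The frequency loss described here is an instance of a general numpy pitfall: extra attributes on an ndarray subclass are not part of ndarray's default pickle state unless the subclass extends __reduce__/__setstate__. A sketch with a made-up TaggedArray (not the actual DateArray implementation) shows the standard fix:

```python
import pickle

import numpy as np


class TaggedArray(np.ndarray):
    """ndarray subclass carrying one extra attribute (think: a frequency)."""

    def __new__(cls, data, tag=None):
        obj = np.asarray(data).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        # propagate the attribute through views and slices
        self.tag = getattr(obj, 'tag', None)

    def __reduce__(self):
        # append our extra attribute to ndarray's pickle state
        func, args, state = np.ndarray.__reduce__(self)
        return func, args, state + (self.tag,)

    def __setstate__(self, state):
        np.ndarray.__setstate__(self, state[:-1])
        self.tag = state[-1]


a = TaggedArray([1, 2, 3], tag='D')
b = pickle.loads(pickle.dumps(a))
print(b.tag)  # prints D: the attribute survives the round trip
```

Dropping the __reduce__/__setstate__ pair makes the unpickled copy come back with tag set to None, which is presumably what happens to the DateArray frequency.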
From ferrell at diablotech.com  Thu Dec 10 12:35:08 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 10 Dec 2009 10:35:08 -0700
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com> <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com> <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com> <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com>
Message-ID: <2F28D69E-0A41-448F-81C5-0733F4E34BE7@diablotech.com>

On Dec 10, 2009, at 8:07 AM, Pierre GM wrote:
> Note that there's a problem with pickling DateArrays: you lose the frequency at unpickling time. I gonna fix that ASAP. Pickling TimeSeries works quite fine, though (I used them a lot).

Thanks for the heads up. I can work around that pretty easily, but I'll listen for the fix.

From rjchacko at gmail.com  Thu Dec 10 12:48:09 2009
From: rjchacko at gmail.com (Ranjit Chacko)
Date: Thu, 10 Dec 2009 12:48:09 -0500
Subject: [SciPy-User] ndimage.convolve
Message-ID:

I want to take the laplacian of a 2-d field which has periodic boundary conditions. I tried doing this using ndimage.convolve, but when I test this on a simple function I don't get the result I expect. Why aren't the boundary elements going to zero as well in the following example?

A = np.array([[x + y for x in np.arange(0,10)] for y in np.arange(0,10)])
stencil = [[0,0,-1,0,0],[0,0,16,0,0],[-1,16,-60,16,-1],[0,0,16,0,0],[0,0,-1,0,0]]
ndimage.convolve(A, stencil, mode='wrap')

array([[ 300,  140,  150,  150,  150,  150,  150,  150,  160,    0],
       [ 140,  -20,  -10,  -10,  -10,  -10,  -10,  -10,    0, -160],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
       [ 160,    0,   10,   10,   10,   10,   10,   10,   20, -140],
       [   0, -160, -150, -150, -150, -150, -150, -150, -140, -300]])

Thanks,
-Ranjit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zachary.pincus at yale.edu  Thu Dec 10 13:08:39 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Thu, 10 Dec 2009 13:08:39 -0500
Subject: [SciPy-User] ndimage.convolve
In-Reply-To:
References:
Message-ID:

> I want to take the laplacian of a 2-d field which has periodic boundary conditions. I tried doing this using ndimage.convolve, but when I test this on a simple function I don't get the result I expect. Why aren't the boundary elements going to zero as well in the following example?
> A = np.array([[x + y for x in np.arange(0,10)] for y in np.arange(0,10)])
> stencil = [[0,0,-1,0,0],[0,0,16,0,0],[-1,16,-60,16,-1],[0,0,16,0,0],[0,0,-1,0,0]]
> ndimage.convolve(A, stencil, mode='wrap')
>
> array([[ 300,  140,  150,  150,  150,  150,  150,  150,  160,    0],
>        [ 140,  -20,  -10,  -10,  -10,  -10,  -10,  -10,    0, -160],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150],
>        [ 160,    0,   10,   10,   10,   10,   10,   10,   20, -140],
>        [   0, -160, -150, -150, -150, -150, -150, -150, -140, -300]])

I feel that I am missing something, because this seems like the expected behavior. You're convolving this matrix:

array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10],
       [ 2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
       [ 3,  4,  5,  6,  7,  8,  9, 10, 11, 12],
       [ 4,  5,  6,  7,  8,  9, 10, 11, 12, 13],
       [ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14],
       [ 6,  7,  8,  9, 10, 11, 12, 13, 14, 15],
       [ 7,  8,  9, 10, 11, 12, 13, 14, 15, 16],
       [ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17],
       [ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]])

With this kernel:

array([[  0,   0,  -1,   0,   0],
       [  0,   0,  16,   0,   0],
       [ -1,  16, -60,  16,  -1],
       [  0,   0,  16,   0,   0],
       [  0,   0,  -1,   0,   0]])

using *wraparound* boundary conditions (not mirroring!). I fail to see why the result is unexpected... For simplicity, let's look at the equivalent 1D case:

matrix: [ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
kernel: [ -1, 16, -30, 16, -1]

At the right edge, you have this:

[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] 5 6
                     * [ -1, 16, -30, 16, -1]

where the "phantom" 5 and 6 come from the wraparound boundary condition. -1*12 + 16*13 + -30*14 + 16*5 + -1*6 does not equal zero!

ndimage.convolve([ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], [ -1, 16, -30, 16, -1], mode='wrap')
array([ 150,  -10,    0,    0,    0,    0,    0,    0,   10, -150])

Zach

ps: a more straightforward way to construct the A matrix above: np.add.outer(np.arange(10), np.arange(10))

From vanforeest at gmail.com  Thu Dec 10 15:01:17 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Thu, 10 Dec 2009 21:01:17 +0100
Subject: [SciPy-User] maping a float to a function value
In-Reply-To: <4B202C6D.6060200@noaa.gov>
References: <4B202C6D.6060200@noaa.gov>
Message-ID:

Dear Chris, Peter, and Rob,

Thanks for your suggestions. All in all it seems easiest to implement my function as a subclass of a list, and in the initialization use a grid on which I define the function's values. As I don't need any other values than the ones on the predefined grid, I don't think I'll need an interpolating function---but I'll remember it for a next time.

Nicky

2009/12/10 Christopher Barker :
>> I need the (numerical) solution of the integral equation
>>
>>     gamma(x) = c(x) + \int_0^\infty gamma(x-y) G(y) dy,
>>
>> for given functions c(x) and G(x). (There are some technical conditions on G and c such that I can prove that the integral equation has a solution.) Now I like to store the values gamma(x) as keys of x, as it feels natural. Moreover, I need gamma in a second integral equation. Sure I can store gamma as an array, but then I have to convert the index i to the key x, and I dislike this, as it is less elegant, and requires extra code.
>
> However, I think it's a more correct solution -- as you've pointed out, real numbers do not map directly to floats, and, indeed, you are never going to have gamma(x) pre-calculated for all x in your range. Indeed, it's a bit absurd to try to have gamma(x) for all floats in your range, either. So you are going to have to compute and store a subset.
> As they are ordered, it sure makes sense to simply store them, and it's easy to calculate an index -- you could put a different api on it if you really want:
>
> class gamma:
>
>     def __getitem__(self, value):
>         # compute index from value here
>         return self._gamma[index]
>
> but, as you are storing only a subset of your possible values, it seems that interpolation really is the best and most natural way to go anyway.
>
> In fact, the above is simply a nearest neighbor interpolator already, as is your proposed dict with keys of rounded floats.
>
> -Chris

From vanforeest at gmail.com  Thu Dec 10 15:22:01 2009
From: vanforeest at gmail.com (nicky van foreest)
Date: Thu, 10 Dec 2009 21:22:01 +0100
Subject: [SciPy-User] maping a float to a function value
In-Reply-To:
References:
Message-ID:

Dear Anne,

Thanks for your suggestions. I guess my reaction below is a bit too off-topic for the rest of the community...

> I think you may be going about this the wrong way around.

Ok. I did not think of this point up to now, but you're right.

> Solving integral equations is generally hard, so you should be thinking about the algorithm you will use. Whatever algorithm you choose will imply a representation of gamma(x), necessarily in some finite amount of space. I would worry less about how "natural" your representation is than about whether it suits how you want to solve the problem.
>
> In this particular case, gamma(x) can be written entirely in terms of c(x) and values of gamma(z) for z from left to right (hopefully you have some sort of "initial conditions" that allow this).

Interestingly, the function G against which I integrate gamma(x-y) has some particular properties that I can take nearly any function as a start for gamma. The function G is a probability density and the factor lambda in front of the integral is smaller than 1, hence any "history" at the left of some x becomes exponentially less important (to put it heuristically).

> In any case, I think the representation you choose will depend heavily on the algorithm you plan to use. If you choose to use a representation in terms of pairs (x, gamma(x)), allow me to recommend using a pair of arrays, xi, gamma(xi), with the xi sorted. This is much more space-efficient than a dictionary and quite fast to search using searchsorted; in the very likely case that you are looking up an x value that is not already in the array, you have easy access to the neighbors if you want some form of interpolation.

This is also a good idea. However, for the moment it suffices to use a fixed grid and compute gamma on this grid. I don't need intermediate values. As a matter of fact, I just want to make a plot to show that it is possible, in principle, to carry out some computations, the rest of the paper is strictly mathematical, and nobody will worry about the numerical stability of my implementation.

Anyway, thanks again.
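The pair-of-sorted-arrays idea quoted above can be sketched as below; FunctionTable is a made-up name, and nearest-grid-point lookup stands in for whatever interpolation one actually wants:

```python
import numpy as np


class FunctionTable(object):
    """Store (x, f(x)) pairs in sorted arrays; look values up by nearest x."""

    def __init__(self, xs, fxs):
        order = np.argsort(xs)
        self.xs = np.asarray(xs)[order]
        self.fxs = np.asarray(fxs)[order]

    def __call__(self, x):
        # searchsorted gives the insertion point; pick the closer neighbor
        i = np.clip(np.searchsorted(self.xs, x), 1, len(self.xs) - 1)
        if abs(x - self.xs[i - 1]) <= abs(self.xs[i] - x):
            i -= 1
        return self.fxs[i]


xs = np.linspace(0., 1., 101)      # fixed grid with spacing 0.01
table = FunctionTable(xs, xs**2)   # pretend these values came from the solver
print(table(1./3))                 # value stored at the grid point nearest 1/3
```

Unlike a dict of rounded keys, the lookup here never raises a KeyError for in-range x, and upgrading the nearest-neighbor return to linear interpolation between fxs[i-1] and fxs[i] is a two-line change.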
Nicky

> Anne

From joebarfett at yahoo.ca  Thu Dec 10 20:39:15 2009
From: joebarfett at yahoo.ca (Joe Barfett)
Date: Thu, 10 Dec 2009 17:39:15 -0800 (PST)
Subject: [SciPy-User] f2py fortran95 won't compile
Message-ID: <876227.1119.qm@web59402.mail.ac4.yahoo.com>

Hello,
I've been putting a lot of time and googling into trying to get f2py to work. I do large matrix operations in python using scipy and numpy. There's a few small functions I need to speed up and want to use fortran or c to do this. Right now I write code that needs days to run. I'm hoping that I can really speed this up. I've also bought books to help but to no avail yet. I've found python to be incredibly good for what I do, but have had a lot of trouble extending it. Eventually I thought I would ask for help.

On my new computer running snow leopard I was able to install the fortran 95 compiler. I installed the gcc compilers from the xcode developer tools. Going into bash and typing f2py returns a long message ending in:

Version:     2
numpy Version: 1.3.0
Requires:    Python 2.3 or higher.
License:     NumPy license (see LICENSE.txt in the NumPy source code)
Copyright 1999 - 2005 Pearu Peterson all rights reserved.
http://cens.ioc.ee/projects/f2py2e/

If I run f2py -c --help-fcompiler I get a lot of output, one line of which says that the GNU Fortran 95 compiler is there. So I proceeded to make hello.f as indicated in the scipy documentation. I saved it in my documents directory. Going back to bash, I cd into the documents directory and run f2py -c -m hello hello.f. This gives me a long output that ends with the error:

building extension "hello" sources
f2py options: []
f2py: /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5/hellomodule.c
creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA
creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5
Reading fortran codes...
	Reading file 'hello.f' (format:fix,strict)
Post-processing...
Post-processing (stage 2)...
Building modules...
error: f2py target file '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5/hellomodule.c' not generated

I've been stuck trying to get up and running with extending python for months now and I'm really hoping someone can help me along. Once I can get this working, I'll start trying to send in numpy arrays and get numpy arrays back. I really appreciate your kind assistance. Cheers!
joe
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ferrell at diablotech.com  Thu Dec 10 22:12:17 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 10 Dec 2009 20:12:17 -0700
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com> <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com> <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com> <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com> <2F28D69E-0A41-448F-81C5-0733F4E34BE7@diablotech.com>
Message-ID: <4E4786D1-FC51-47E3-9812-9A2C63B49E1C@diablotech.com>

On Dec 10, 2009, at 10:35 AM, Robert Ferrell wrote:
> Thanks for the heads up. I can work around that pretty easily, but I'll listen for the fix.

Fernando reminded me that any date scalar will trigger the pickle problem. Some of my objects have Date instances stashed all over the place. Converting all those to single-element DateArrays is not feasible.

Why can't Date instances be pickled, and how hard would it be to change that?

thanks,
-robert

From pgmdevlist at gmail.com  Thu Dec 10 22:37:29 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 10 Dec 2009 22:37:29 -0500
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: <4E4786D1-FC51-47E3-9812-9A2C63B49E1C@diablotech.com>
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com> <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com> <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com> <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com> <2F28D69E-0A41-448F-81C5-0733F4E34BE7@diablotech.com> <4E4786D1-FC51-47E3-9812-9A2C63B49E1C@diablotech.com>
Message-ID:

On Dec 10, 2009, at 10:12 PM, Robert Ferrell wrote:
> Fernando reminded me that any date scalar will trigger the pickle problem. Some of my objects have Date instances stashed all over the place. Converting all those to single-element DateArrays is not feasible.

And converting them to something else, like a datetime or even just a tuple (value, freq)?

> Why can't Date instances be pickled, and how hard would it be to change that?

Because we never implemented it on the C side, and that'll depend on whether Matt Knox can do it or not.

From david_baddeley at yahoo.com.au  Thu Dec 10 23:28:17 2009
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Thu, 10 Dec 2009 20:28:17 -0800 (PST)
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To:
Message-ID: <161504.16801.qm@web33006.mail.mud.yahoo.com>

It also shouldn't be too hard to register a new pickle handler that takes care of the conversion to e.g. datetime for you. I'm not familiar with timeseries stuff, but wrote something similar to let me pickle slices, which I've included below - as you can see it's pretty trivial.
import copy_reg

def pickleSlice(slice):
    return unpickleSlice, (slice.start, slice.stop, slice.step)

def unpickleSlice(start, stop, step):
    return slice(start, stop, step)

copy_reg.pickle(slice, pickleSlice, unpickleSlice)

best wishes,
David

--- On Fri, 11/12/09, Pierre GM wrote:

> From: Pierre GM
> Subject: Re: [SciPy-User] Pickling timeseries Dates
> To: "SciPy Users List"
> Received: Friday, 11 December, 2009, 4:37 PM
>
> On Dec 10, 2009, at 10:12 PM, Robert Ferrell wrote:
> > Fernando reminded me that any date scalar will trigger the pickle
> > problem. Some of my objects have Date instances stashed all over the
> > place. Converting all those to single-element DateArrays is not
> > feasible.
>
> And converting them to something else, like a datetime or even just
> a tuple (value,freq)
>
> > Why can't Date instances be pickled, and how hard would it be to
> > change that?
>
> because we never implemented it on the C side, and that'll depend on
> whether Matt Knox can do it or not.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From ferrell at diablotech.com Fri Dec 11 00:08:24 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Thu, 10 Dec 2009 22:08:24 -0700
Subject: [SciPy-User] Pickling timeseries Dates
In-Reply-To: 
References: <1A2A910B-F2F4-42E3-B687-EC5CE0D00398@diablotech.com>
    <98CE5655-35B6-4E8E-854F-F7415567BB0A@gmail.com>
    <41DAF069-97A7-4E75-9C6F-38BE02ECD729@diablotech.com>
    <92C4CD60-8EC4-41A3-9EE5-8422EF214316@gmail.com>
    <2F28D69E-0A41-448F-81C5-0733F4E34BE7@diablotech.com>
    <4E4786D1-FC51-47E3-9812-9A2C63B49E1C@diablotech.com>
Message-ID: <29C9FD54-842B-46E1-A0E6-EA90E33E686D@diablotech.com>

On Dec 10, 2009, at 8:37 PM, Pierre GM wrote:
> On Dec 10, 2009, at 10:12 PM, Robert Ferrell wrote:
>> Fernando reminded me that any date scalar will trigger the pickle
>> problem.
>> Some of my objects have Date instances stashed all over the
>> place. Converting all those to single-element DateArrays is not
>> feasible.
>
> And converting them to something else, like a datetime or even just
> a tuple (value,freq)

I often grab a Date from a DateArray. Of course, I could try to use
datetime for every date scalar, and DateArray for arrays of dates. But
that's not nearly as convenient nor robust as using a single
representation whenever I need a date.

>> Why can't Date instances be pickled, and how hard would it be to
>> change that?
>
> because we never implemented it on the C side, and that'll depend on
> whether Matt Knox can do it or not.

Understood.

From jgomezdans at gmail.com Fri Dec 11 08:14:34 2009
From: jgomezdans at gmail.com (Jose Gomez-Dans)
Date: Fri, 11 Dec 2009 13:14:34 +0000
Subject: [SciPy-User] Saving multidimensional matlab arrays
Message-ID: <91d218430912110514w37fe4bb6oeafd6592a6155572@mail.gmail.com>

Hi!
I need to save some 3d (i, j, k) arrays in matlab format. I have used
savemat(filename, {'var_name': var}), and they work fine. However, when I
load them up either in Matlab or in scipy, the arrays are (i*j, k). I can
quickly unwrap them into 3d arrays as I know what i and j are, but I was
wondering whether I'm missing something in savemat that forces the "real"
arrays to be saved?
J

From jagan_cbe2003 at yahoo.co.in Fri Dec 11 09:39:35 2009
From: jagan_cbe2003 at yahoo.co.in (jagan prabhu)
Date: Fri, 11 Dec 2009 20:09:35 +0530 (IST)
Subject: [SciPy-User] Inequality constrains definition and violation in slsqp
Message-ID: <34323.13209.qm@web8320.mail.in.yahoo.com>

My objective function is a simple least-squares function. I have 8
parameters and 3 inequality constraints, and I used the fmin_slsqp
optimization algorithm for my problem, but my constraints are often
violated during execution.
My code looks like,

######### begin #####################################
Init = [8.0, 14.0, 16.0, 7.0, 13.0, 50.0, 9.0, 4.0]  # [a,b,c,d,e,f,g,h]

# constraints (c > d, g > h & e >= a)
bounds = [(3.0, 4000.0), (1.0, 6000.0), (2.0, 10000.0), (1.0, 4000.0),
          (4.0, 6000.0), (1.0, 5000.0), (2.0, 10000.0), (1.0, 4000.0)]

# constraints
con1 = lambda x: x[2] - x[3]
con2 = lambda x: x[6] - x[7]
con3 = lambda x: x[4] - x[0]
cons = [con1, con2, con3]

### gradient function for my problem
def grad(s, *args):
    f = func
    step = 1.e-3
    fini = f(*((s,) + args))
    grad = numpy.zeros((len(s),), float)
    st = numpy.zeros((len(s),), float)
    for i in range(len(s)):
        st[i] = step * s[i]
        grad[i] = (f(*((s + st,) + args)) - fini) / st[i]
        st[i] = 0.0
    return grad

opt = fmin_slsqp(func, Init, ieqcons=cons, bounds=bounds, fprime=grad,
                 iter=50000, iprint=2, acc=0.01)
############## end of the code #############################

Here the major problem I face is that the constraints (c > d, g > h &
e >= a) are violated frequently during execution. Is there any error in
my constraint definition method, or any better way to avoid constraint
violations?

------------------------------
The INTERNET now has a personality. YOURS! See your Yahoo! Homepage.
http://in.yahoo.com/

From josef.pktd at gmail.com Fri Dec 11 10:11:21 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 11 Dec 2009 10:11:21 -0500
Subject: [SciPy-User] Inequality constrains definition and violation in slsqp
In-Reply-To: <34323.13209.qm@web8320.mail.in.yahoo.com>
References: <34323.13209.qm@web8320.mail.in.yahoo.com>
Message-ID: <1cd32cbb0912110711r61a89823se5cd7c53800b3d63@mail.gmail.com>

On Fri, Dec 11, 2009 at 9:39 AM, jagan prabhu wrote:
> my objective function is a simple least square function, i have 8
> parameters and 3 inequality constraints, i used fmin_slsqp optimization
> algorithm for my problem, but often my constraints are violated while
> execution.
> My code looks like,
>
> [code snipped]
>
> Here the major problem i face is the constraints (c > d, g > h & e>=a)
> are violated frequently during execution, is there any error with my
> constraint definition method or any better way to avoid constraint
> violations?

What's your func?

I don't see anything obviously wrong.
With "violated frequently during execution", do you mean during
optimization but not at the final result?

If yes, your numerical gradient calculation doesn't check whether the
parameters for the function evaluations f(*((s+st,)+args)) are satisfying
the constraints, and your stepsizes are pretty big. I don't know if slsqp
itself only evaluates at points that satisfy the constraints.

If that's not the point, then I would need more information to
understand what's going on.

Josef
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jagan_cbe2003 at yahoo.co.in Fri Dec 11 11:37:26 2009
From: jagan_cbe2003 at yahoo.co.in (jagan prabhu)
Date: Fri, 11 Dec 2009 22:07:26 +0530 (IST)
Subject: [SciPy-User] Inequality constrains definition and violation in slsqp
In-Reply-To: <1cd32cbb0912110711r61a89823se5cd7c53800b3d63@mail.gmail.com>
Message-ID: <739740.1264.qm@web8319.mail.in.yahoo.com>

My function is a simple least squares; it looks like

    func = sum((observeddata[ ] - calculated[ ] )**2)

or

    func = sum(abs(observeddata[ ] - calculated[ ] ))

The slsqp optimization code is coupled to third-party software: starting
from the initial values, for each function evaluation my optimization
code passes the 8 parameters to the software. Based on these 8 parameters
the software calculates the data, which is stored in the calculated[ ]
array; observeddata[ ] is constant data throughout the process. The aim
is to minimize the error between observeddata[ ] and calculated[ ] by
choosing the proper parameters. Basically, I want to do an inverse
process identification.

observeddata[ ] and calculated[ ] will be arrays of 200 to 400 entries.

My software is very sensitive to the constraints: if a constraint is
violated, it stops execution and the entire process stops.

At the end of the process I don't know whether a constraint was violated
or not, because as soon as the constraints are violated mid-process, the
coupled software stops execution and my optimization stops with it.

So I am forced to search for a better way to define my constraints so
that they will not be violated during function evaluation.
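One way to guarantee that the coupled external software never sees an infeasible parameter set is to check the inequality constraints inside the Python wrapper before calling out, and return a large penalty value instead (this is the approach josef suggests later in the thread). A minimal sketch; `run_simulator` is a hypothetical stand-in for the coupled third-party code, not the poster's actual software:

```python
# Inequality constraints from the post (c > d, g > h, e >= a),
# written as g_i(x) >= 0 the way fmin_slsqp's ieqcons expects.
cons = [
    lambda x: x[2] - x[3],   # c - d
    lambda x: x[6] - x[7],   # g - h
    lambda x: x[4] - x[0],   # e - a
]

def run_simulator(x):
    # Hypothetical stand-in for the expensive external calculation.
    return sum((xi - 1.0) ** 2 for xi in x)

def guarded_func(x):
    # Refuse to call the external code on infeasible points; the big
    # penalty steers the optimizer back into the feasible region.
    if any(c(x) < 0.0 for c in cons):
        return 1e12
    return run_simulator(x)

print(guarded_func([8.0, 14.0, 7.0, 16.0, 13.0, 50.0, 9.0, 4.0]))  # c < d -> 1e+12
```

The same guard belongs in any hand-written gradient function, since forward-difference steps s + st can leave the feasible region even when s itself is feasible.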
--- On Fri, 11/12/09, josef.pktd at gmail.com wrote:

[full quote of the previous reply snipped]

------------------------------
The INTERNET now has a personality. YOURS! See your Yahoo! Homepage.
http://in.yahoo.com/

From josef.pktd at gmail.com Fri Dec 11 11:49:13 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 11 Dec 2009 11:49:13 -0500
Subject: [SciPy-User] Inequality constrains definition and violation in slsqp
In-Reply-To: <739740.1264.qm@web8319.mail.in.yahoo.com>
References: <1cd32cbb0912110711r61a89823se5cd7c53800b3d63@mail.gmail.com>
    <739740.1264.qm@web8319.mail.in.yahoo.com>
Message-ID: <1cd32cbb0912110849x3df74d18wfe5932b96cb3fcd9@mail.gmail.com>

On Fri, Dec 11, 2009 at 11:37 AM, jagan prabhu wrote:
> My function is a simple least square, it look like,
>
> func = sum((observeddata[ ] - calculated[ ] )**2) or
> sum(abs(observeddata[ ] - calculated[ ] ))
>
> The slsqp optimization code is coupled to third-party software:
> starting from the initial values, for each function evaluation my
> optimization code passes the 8 parameters to the software. Based on
> these 8 parameters the software calculates the data, which is stored
> in the calculated[ ] array; observeddata[ ] is constant data
> throughout the process. The aim is to minimize the error between
> observeddata[ ] and calculated[ ] by choosing the proper parameters.
> Basically, I want to do an inverse process identification.
> observeddata[ ] and calculated[ ] will be arrays of 200 to 400 entries.
>
> My software is very sensitive to the constraints: if a constraint is
> violated, it stops execution and the entire process stops.
>
> At the end of the process I don't know whether a constraint was
> violated, because as soon as the constraints are violated mid-process,
> the coupled software stops execution and my optimization stops with it.
>
> So I am forced to search for a better way to define my constraints so
> that they will not be violated during function evaluation.

In this case you also have to rewrite (or drop) your numerical
differentiation, because you violate the constraints yourself.

One simple way to make sure that your external program doesn't get
invalid parameters is to check the constraints in your wrapper func and
return e.g. inf or a very large number directly (without calling the
external program).

There might be some problems with the derivative calculations at the
boundary, and I don't know how slsqp handles it if you drop your own
numerical derivatives. Otherwise, I guess, you would need to check that
your own derivative calculations only use points that satisfy the
constraints.

Josef

> --- On Fri, 11/12/09, josef.pktd at gmail.com wrote:
>
> [full quote of the earlier messages snipped]

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From amenity at enthought.com Fri Dec 11 12:52:23 2009
From: amenity at enthought.com (Amenity Applewhite)
Date: Fri, 11 Dec 2009 11:52:23 -0600
Subject: [SciPy-User] December Webinar: SciPy India with Travis Oliphant
Message-ID: <0F07782C-2129-4FD0-BC91-C45CAEC10501@enthought.com>

Next Friday Enthought will be hosting our monthly Scientific Computing
with Python Webinar: Summary of SciPy India

Friday, December 18
1pm CST / 7pm UTC
Register at GoToMeeting

Enthought President Travis Oliphant is currently in Kerala, India, as
the keynote speaker at SciPy India 2009. Due to a training engagement,
Travis missed SciPy for the first time this summer, so he's excited for
this additional opportunity to meet and collaborate with the scientific
Python community. Speakers at the event include Jarrod Millman, David
Cournapeau, Christopher Burns, Prabhu Ramachandran, and Asokan Pichai -
a great group. We're looking forward to hearing Travis' review of the
proceedings.

Hope to see you there!
Enthought Media
-------------- next part --------------
An HTML attachment was scrubbed...
From nwagner at iam.uni-stuttgart.de Fri Dec 11 13:15:42 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 11 Dec 2009 19:15:42 +0100
Subject: [SciPy-User] Trouble when installing OpenOpt on opensuse11.2
Message-ID: 

Hi all,

After upgrading from opensuse11.1 to 11.2 I have trouble installing
OpenOpt via

    python setup.py install --prefix=$HOME/local

/home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17:
DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  self.message = message
running install
Checking .pth file support in /home/nwagner/local/lib/python2.6/site-packages
/usr/bin/python -E -c pass
TEST FAILED: /home/nwagner/local/lib/python2.6/site-packages does NOT
support .pth files
error: bad install directory or PYTHONPATH

You are attempting to install a package to a directory that is not on
PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

    /home/nwagner/local/lib/python2.6/site-packages

and your PYTHONPATH environment variable currently contains:

    '/home/nwagner/:/home/nwagner/local/lib64/python2.6/site-packages/:/home/nwagner/svn/PyDSTool/:/home/nwagner/svn/PyDSTool/PyDSTool:/home/nwagner/svn/PyDSTool/PyDSTool/tests:/home/nwagner/git/pyadolc'

Here are some of your options for correcting the problem:

* You can choose a different installation directory, i.e., one that is
  on PYTHONPATH or supports .pth files

* You can add the installation directory to the PYTHONPATH environment
  variable. (It must then also be on PYTHONPATH whenever you run Python
  and want to use the package(s) you are installing.)

* You can set up the installation directory to support ".pth" files by
  using one of the approaches described here:
  http://peak.telecommunity.com/EasyInstall.html#custom-installation-locations

Please make the appropriate changes for your system and try again.
Can I fix the problem by adding /home/nwagner/local/lib/site-packages/
to the PYTHONPATH?

Any pointer would be appreciated.

Nils

From robince at gmail.com Fri Dec 11 13:50:30 2009
From: robince at gmail.com (Robin)
Date: Fri, 11 Dec 2009 18:50:30 +0000
Subject: [SciPy-User] f2py fortran95 won't compile
In-Reply-To: <876227.1119.qm@web59402.mail.ac4.yahoo.com>
References: <876227.1119.qm@web59402.mail.ac4.yahoo.com>
Message-ID: <2d5132a50912111050y29e00f5ei5ab34c336253ae21@mail.gmail.com>

On Fri, Dec 11, 2009 at 1:39 AM, Joe Barfett wrote:
>
> Hello,
> I've been putting a lot of time and googling into trying to get f2py to
> work. I do large matrix operations in python using scipy and numpy.
> There are a few small functions I need to speed up and want to use
> fortran or c to do this.
> Right now I write code that needs days to run. I'm hoping that I can
> really speed this up. I've also bought books to help, but to no avail
> yet. I've found python to be incredibly good for what I do, but have
> had a lot of trouble extending it.
> Eventually I thought I would ask for help.
> On my new computer running snow leopard I was able to install the
> fortran 95 compiler. I installed the gcc compilers from the xcode
> developer tools.
> Going into bash and typing f2py returns a long message ending in:
>
>     Version:     2
>     numpy Version: 1.3.0
>     Requires:    Python 2.3 or higher.
>     License:     NumPy license (see LICENSE.txt in the NumPy source code)
>     Copyright 1999 - 2005 Pearu Peterson all rights reserved.
>     http://cens.ioc.ee/projects/f2py2e/
>
> If I run f2py -c --help-fcompiler
> I get a lot of output, one line of which says that the GNU Fortran 95
> compiler is there. So I proceeded to make hello.f as indicated in the
> scipy documentation. I saved it in my documents directory.
> Going back to bash, I cd into the documents directory and run
> f2py -c -m hello hello.f
> This gives me a long output that ends with the error:
>
> building extension "hello" sources
> f2py options: []
> f2py:> /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5/hellomodule.c
> creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA
> creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5
> Reading fortran codes...
> Reading file 'hello.f' (format:fix,strict)
> Post-processing...
> Post-processing (stage 2)...
> Building modules...
> error: f2py target file '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpWX5ghA/src.macosx-10.3-fat-2.5/hellomodule.c' not generated
>
> I've been stuck trying to get up and running with extending python for
> months now and I'm really hoping someone can help me along. Once I can
> get this working, I'll start trying to send in numpy arrays and get
> numpy arrays back.
> I really appreciate your kind assistance. Cheers!
> joe

Hi,

I'm not sure what your actual problem is, but perhaps it will help if I
point out the steps I followed to get it working. Also note that f2py is
part of numpy, so the numpy list (or f2py list) might be more
appropriate.

You don't specify which fortran compiler you are using. For the mac it
is generally recommended to use the gfortran build from here:
http://r.research.att.com/tools/
You will also need to install Xcode (to get c compilers).

If you are on OS X 10.6 with a Python build from python.org you may need
to do:

#export CC=/usr/bin/gcc-4.0

before calling f2py. This is because apple changed the default compiler
version to 4.2, but the Python binaries are built with the older one
(this is true for 2.5, I haven't tried a python.org 2.6 build).

With those steps, the following hello.f95 file compiles for me:

      subroutine foo (a)
      integer a
      print*, "Hello from Fortran!"
      print*, "a=",a
      end

with

    f2py -c -m hello hello.f95

Cheers

Robin

From sccolbert at gmail.com Fri Dec 11 18:17:48 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Sat, 12 Dec 2009 00:17:48 +0100
Subject: [SciPy-User] ubuntu's PYTHONPATH and installing scipy
In-Reply-To: 
References: <185761440912081610o76183166mdf137010646443ce@mail.gmail.com>
Message-ID: <7f014ea60912111517v8d081f9q75e3133998843a4d@mail.gmail.com>

I had this same problem when trying to use my own built-from-scratch
numpy while still needing the system numpy as a dependency.

In debian, /usr/lib/pythonX.Y/dist-packages will appear on sys.path
BEFORE /usr/local/lib/pythonX.Y/dist-packages (which is where I install
my source-built stuff).

I solved this by creating a $HOME/.local/lib/pythonX.Y/site-packages
directory (which WILL be on sys.path before the others) and inserting a
.pth file with my local directory. Since this dir is searched by python
before /usr/lib/... it reads the .pth file and thus adds my local dir
to sys.path before anything else.

I found this a cleaner solution than editing .bashrc or any other
variant, because those only work when you're in a shell environment.

Cheers,

Chris

On Wed, Dec 9, 2009 at 5:41 PM, Darren Dale wrote:
> On Wed, Dec 9, 2009 at 2:06 AM, Charles R Harris wrote:
>>
>> On Tue, Dec 8, 2009 at 6:31 PM, Darren Dale wrote:
>>>
>>> On Tue, Dec 8, 2009 at 7:10 PM, John Schulman wrote:
>>> > Hi,
>>> > There's an issue with ubuntu PYTHONPATH that causes problems for
>>> > scipy installation. I eventually found a workaround, but I want to
>>> > point out the problem.
>>> >
>>> > PYTHONPATH is different when sudoing, so that packages installed by
>>> > aptitude appear first in the path.*
>>> >
>>> > The svn version of scipy imports numpy in the setup script.
>>> > "python setup.py build" works as it should, but then "sudo python
>>> > setup.py install" gives the error "ImportError: cannot import name
>>> > get_info", since it's trying to use numpy 1.3 (which was a
>>> > dependency for some package I installed).
>>> >
>>> > The workaround is simple:
>>> > sudo PYTHONPATH=/usr/local/lib/python2.6/dist-packages: python setup.py install
>>> >
>>> > *In ubuntu, packages that you install through the apt package
>>> > manager go into /usr/lib/python2.6/dist-packages, but python
>>> > setup.py install puts stuff in /usr/local/lib/python2.6/dist-packages.
>>> > There's a line in /etc/profile (I didn't put it there - it must be
>>> > Ubuntu):
>>> > export PYTHONPATH=/usr/local/lib/python2.6/dist-packages:$PYTHONPATH.
>>> > I guess this is a kludge to fix the problem.
>>>
>>> You can also use visudo to edit your sudo options, so that your
>>> PYTHONPATH is carried over.
>>
>> Oh, that's too clever. I used locate to find all the numpy
>> installations and deleted those with the wrong date ;) There are
>> actually *three* different directories in which python stuff gets
>> installed. I haven't a clue why.
>
> Debian's python policy is described at
> http://www.debian.org/doc/packaging-manuals/python-policy/ , but I
> don't think that page reflects changes in debian unstable/ubuntu. I
> don't understand the reason for using dist-packages instead of
> site-packages. And now manually installed packages go into /usr/local
> instead of /usr, but recently they changed things so /usr appears
> higher in the python search path, so it is difficult to manually
> upgrade software provided by the package manager. Maybe the
> debian/ubuntu maintainers were trying to make it more difficult for
> users to break packages installed by the package manager? In my
> opinion, the policy is unintuitive and in the way.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From agile.aspect at gmail.com Fri Dec 11 18:21:53 2009
From: agile.aspect at gmail.com (Agile Aspect)
Date: Fri, 11 Dec 2009 15:21:53 -0800
Subject: [SciPy-User] f2py fortran95 won't compile]
Message-ID: 

On Fri, Dec 11, 2009 at 3:15 PM, Agile Aspect wrote:
[Joe's original message quoted in full - snipped]

In the following, try losing the 'hello':

    f2py -c -m hello hello.f

so it reads

    f2py -c -m hello.f

--
Enjoy global warming while it lasts.
-------------- next part --------------
An HTML attachment was scrubbed...

From agile.aspect at gmail.com Fri Dec 11 18:33:47 2009
From: agile.aspect at gmail.com (Agile Aspect)
Date: Fri, 11 Dec 2009 15:33:47 -0800
Subject: [SciPy-User] f2py fortran95 won't compile]
In-Reply-To: 
References: 
Message-ID: 

On Fri, Dec 11, 2009 at 3:21 PM, Agile Aspect wrote:
[snip]

If this is a second post, I apologize - I'm using a web browser and I
think I'm hitting the wrong buttons.
In the following, try losing the 'hello':

    f2py -c -m hello hello.f

so it reads

    f2py -c -m hello.f

--
Enjoy global warming while it lasts.

From joebarfett at yahoo.ca Fri Dec 11 19:31:46 2009
From: joebarfett at yahoo.ca (theWonder)
Date: Fri, 11 Dec 2009 16:31:46 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] f2py fortran95 won't compile
In-Reply-To: <876227.1119.qm@web59402.mail.ac4.yahoo.com>
References: <876227.1119.qm@web59402.mail.ac4.yahoo.com>
Message-ID: <26753350.post@talk.nabble.com>

Thanks for the tips!
I installed the compiler, ran the command and wrote that function
exactly. Compiling went further, but now ends in this error:
joe -- View this message in context: http://old.nabble.com/f2py-fortran95-won%27t-compile-tp26737506p26753350.html Sent from the Scipy-User mailing list archive at Nabble.com. From robince at gmail.com Sat Dec 12 06:02:06 2009 From: robince at gmail.com (Robin) Date: Sat, 12 Dec 2009 11:02:06 +0000 Subject: [SciPy-User] [SciPy-user] f2py fortran95 won't compile In-Reply-To: <26753350.post@talk.nabble.com> References: <876227.1119.qm@web59402.mail.ac4.yahoo.com> <26753350.post@talk.nabble.com> Message-ID: <2d5132a50912120302o5a9d5edcmab4b660a81292482@mail.gmail.com> On Sat, Dec 12, 2009 at 12:31 AM, theWonder wrote: > > Thanks for the tips! > I installed the compiler, ran the command and wrote that function exactly. > Compiling went further, but now ends in this error: > > 2.5/hellomodule.c:246: error: called object ?*(struct PyMethodDef > *)&? is not a function > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmp61G3Ch/src.macosx-10.3-fat-2.5/hellomodule.c:247: > warning: comparison of distinct pointer types lacks a cast > lipo: can't figure out the architecture type of: > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-//ccpxvBfj.out > error: Command "gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -no-cpp-precomp > -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > -I/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmp61G3Ch/src.macosx-10.3-fat-2.5 > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmp61G3Ch/src.macosx-10.3-fat-2.5/hellomodule.c > -o > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmp61G3Ch/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmp61G3Ch/src.macosx-10.3-fat-2.5/hellomodule.o" > failed with exit status 1 > > I found the f2py users group and will seek help there. I do really > appreciate it. 
> joe Could you post all the output? It is usually the first errors rather than the last that are most revealing. Robin From cool-rr at cool-rr.com Sat Dec 12 08:13:11 2009 From: cool-rr at cool-rr.com (cool-RR) Date: Sat, 12 Dec 2009 15:13:11 +0200 Subject: [SciPy-User] Mayavi for physics simulations Message-ID: Hello, I would like to use Mayavi to simulate physical bodies, (which for now will be only spheres,) moving around in 3-D space. Will it be possible? (I'm asking because I've seen mostly plots with Mayavi and not discrete objects.) Bonus points if I can draw velocity and acceleration arrows from the bodies. Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cool-rr at cool-rr.com Sat Dec 12 08:15:05 2009 From: cool-rr at cool-rr.com (cool-RR) Date: Sat, 12 Dec 2009 15:15:05 +0200 Subject: [SciPy-User] Mayavi for physics simulations In-Reply-To: References: Message-ID: Oops, I'm sorry, I got mixed up with the mailing lists again, I'll send it to enthought-dev, sorry. On Sat, Dec 12, 2009 at 3:13 PM, cool-RR wrote: > Hello, > > I would like to use Mayavi to simulate physical bodies, (which for now will > be only spheres,) moving around in 3-D space. Will it be possible? (I'm > asking because I've seen mostly plots with Mayavi and not discrete objects.) > > Bonus points if I can draw velocity and acceleration arrows from the > bodies. > > Ram. > -- Sincerely, Ram Rachum -------------- next part -------------- An HTML attachment was scrubbed... URL: From gruben at bigpond.net.au Sat Dec 12 08:27:16 2009 From: gruben at bigpond.net.au (Gary Ruben) Date: Sun, 13 Dec 2009 00:27:16 +1100 Subject: [SciPy-User] Mayavi for physics simulations In-Reply-To: References: Message-ID: <4B239A34.7000805@bigpond.net.au> You should be able to do this with Mayavi's visual compatibility module. 
See this example but it sounds like you'd be better off using the visual module directly http://vpython.org cool-RR wrote: > Oops, I'm sorry, I got mixed up with the mailing lists again, I'll send > it to enthought-dev, sorry. > > On Sat, Dec 12, 2009 at 3:13 PM, cool-RR > wrote: > > Hello, > > I would like to use Mayavi to simulate physical bodies, (which for > now will be only spheres,) moving around in 3-D space. Will it be > possible? (I'm asking because I've seen mostly plots with Mayavi and > not discrete objects.) > > Bonus points if I can draw velocity and acceleration arrows from the > bodies. > > Ram. From robince at gmail.com Sat Dec 12 10:28:34 2009 From: robince at gmail.com (Robin) Date: Sat, 12 Dec 2009 15:28:34 +0000 Subject: [SciPy-User] mpmath test hangs with latest svn Message-ID: <2d5132a50912120728s64afd221m947bae43e3ab319b@mail.gmail.com> Hi, With up to date SVN (6136) on OS X 10.6.2, python 2.5.4 (python.org), gcc-4.0, g++-4.0, test_mpmath.test_hyp2f1_some_points_2 seems to hang with Python process consuming 100% cpu for several minutes. test_mpmath.test_expi_complex ... ERROR test_mpmath.test_hyp2f1_strange_points ... KNOWNFAIL test_mpmath.test_hyp2f1_real_some_points ... ERROR test_mpmath.test_hyp2f1_some_points_2 ... I've given it a while and will leave it a while longer but presumably it shouldn't take this long. Cheers Robin From gael.varoquaux at normalesup.org Sat Dec 12 11:05:20 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 12 Dec 2009 17:05:20 +0100 Subject: [SciPy-User] Mayavi for physics simulations In-Reply-To: <4B239A34.7000805@bigpond.net.au> References: <4B239A34.7000805@bigpond.net.au> Message-ID: <20091212160520.GC19832@phare.normalesup.org> On Sun, Dec 13, 2009 at 12:27:16AM +1100, Gary Ruben wrote: > You should be able to do this with Mayavi's visual compatibility module. 
> See this example > > but it sounds like you'd be better off using the visual module directly > http://vpython.org If the only thing that you want to do is moving objects, Mayavi will indeed give an overhead compared to vpython. However, if you want to mix scientific data visualization and these moving objects, then it is much easier in Mayavi. Gaël From robince at gmail.com Sat Dec 12 11:18:24 2009 From: robince at gmail.com (Robin) Date: Sat, 12 Dec 2009 16:18:24 +0000 Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem Message-ID: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> Hi, I have a problem with sparse matrices that have been roundtripped through savemat/loadmat and scikits.umf: In [29]: scipy.__version__ Out[29]: '0.8.0.dev6136' scikits.umfpack doesn't have a version but it is the latest from scikits svn (r2239). Here is the simplest way I could recreate the problem: from scikits.umfpack import UmfpackContext import scipy.sparse as sparse from scipy.io import loadmat, savemat a = sparse.eye(3,3,format='csc') umf = UmfpackContext() print 'Original sparse matrix:' print a.__repr__() # works fine umf.numeric(a) print 'savemat/loadmat ...'
Loaded sparse matrix: <3x3 sparse matrix of type '' with 3 stored elements in Compressed Sparse Column format> --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/robince/svn/pyentropy/pyentropy/umf.py in () 15 print 'Loaded sparse matrix:' 16 print a2.__repr__() ---> 17 umf.numeric(a2) 18 19 /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc in numeric(self, mtx) 393 394 if self._symbolic is None: --> 395 self.symbolic( mtx ) 396 397 indx = self._getIndx( mtx ) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc in symbolic(self, mtx) 364 = self.funs.symbolic( mtx.shape[0], mtx.shape[1], 365 mtx.indptr, indx, mtx.data, --> 366 self.control, self.info ) 367 else: 368 real, imag = mtx.data.real.copy(), mtx.data.imag.copy() /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/_umfpack.pyc in umfpack_di_symbolic(*args) 435 double Control, double Info) -> int 436 """ --> 437 return __umfpack.umfpack_di_symbolic(*args) 438 439 def umfpack_dl_symbolic(*args): TypeError: not a C array WARNING: Failure executing file: I can't figure out whats causing it - and whether its a bug in savemat/loadmat or scikits.umfpack. Is scikits.umfpack still supported? Is there a way to prefactor matrix with the libraries built into scipy (I need to solve the same large sparse matrix many times so I was prefactoring with umfpack). 
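On the last question above — prefactoring with what ships in SciPy — a minimal sketch using `scipy.sparse.linalg.factorized`, SciPy's bundled SuperLU wrapper (older releases exposed the same function as `scipy.sparse.linalg.dsolve.factorized`); the matrix and values below are illustration choices, not taken from the thread:

```python
# Prefactor a sparse matrix once with SciPy's built-in SuperLU wrapper,
# then reuse the factorization for many right-hand sides -- an alternative
# to scikits.umfpack's UmfpackContext for this use case.
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

# Build a small sparse system in CSC format (the format SuperLU prefers).
A = sparse.eye(3, 3, format='csc') * 2.0

# Factor once: factorized() returns a closure holding the LU factors.
solve = spla.factorized(A)

# Each subsequent solve costs only the triangular back-substitutions.
b1 = np.array([2.0, 4.0, 6.0])
b2 = np.array([1.0, 1.0, 1.0])
x1 = solve(b1)   # solves A x = b1, i.e. [1., 2., 3.]
x2 = solve(b2)   # solves A x = b2, i.e. [0.5, 0.5, 0.5]
```

Unlike the UmfpackContext route, the returned closure keeps the factorization alive for as long as it is referenced, so repeated solves against the same matrix stay cheap.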
Thanks, Robin From robince at gmail.com Sat Dec 12 12:28:29 2009 From: robince at gmail.com (Robin) Date: Sat, 12 Dec 2009 17:28:29 +0000 Subject: [SciPy-User] mpmath test hangs with latest svn In-Reply-To: <2d5132a50912120728s64afd221m947bae43e3ab319b@mail.gmail.com> References: <2d5132a50912120728s64afd221m947bae43e3ab319b@mail.gmail.com> Message-ID: <2d5132a50912120928l76690562ub1425568ef434313@mail.gmail.com> On Sat, Dec 12, 2009 at 3:28 PM, Robin wrote: > Hi, > > With up to date SVN (6136) on OS X 10.6.2, python 2.5.4 (python.org), > gcc-4.0, g++-4.0, test_mpmath.test_hyp2f1_some_points_2 seems to hang > with Python process consuming 100% cpu for several minutes. > > test_mpmath.test_expi_complex ... ERROR > test_mpmath.test_hyp2f1_strange_points ... KNOWNFAIL > test_mpmath.test_hyp2f1_real_some_points ... ERROR > test_mpmath.test_hyp2f1_some_points_2 ... > > I've given it a while and will leave it a while longer but presumably > it shouldn't take this long. > > Cheers > > Robin With help from Pauli on IRC I found out this is because of having an old version of sympy installed. I think he is going to update the version check in the test. Cheers Robin From seb.haase at gmail.com Sun Dec 13 06:03:22 2009 From: seb.haase at gmail.com (Sebastian Haase) Date: Sun, 13 Dec 2009 12:03:22 +0100 Subject: [SciPy-User] mpmath test hangs with latest svn In-Reply-To: <2d5132a50912120928l76690562ub1425568ef434313@mail.gmail.com> References: <2d5132a50912120728s64afd221m947bae43e3ab319b@mail.gmail.com> <2d5132a50912120928l76690562ub1425568ef434313@mail.gmail.com> Message-ID: On Sat, Dec 12, 2009 at 6:28 PM, Robin wrote: > On Sat, Dec 12, 2009 at 3:28 PM, Robin wrote: >> Hi, >> >> With up to date SVN (6136) on OS X 10.6.2, python 2.5.4 (python.org), >> gcc-4.0, g++-4.0, test_mpmath.test_hyp2f1_some_points_2 seems to hang >> with Python process consuming 100% cpu for several minutes. >> >> test_mpmath.test_expi_complex ... ERROR >> test_mpmath.test_hyp2f1_strange_points ... 
KNOWNFAIL >> test_mpmath.test_hyp2f1_real_some_points ... ERROR >> test_mpmath.test_hyp2f1_some_points_2 ... >> >> I've given it a while and will leave it a while longer but presumably >> it shouldn't take this long. >> >> Cheers >> >> Robin > > With help from Pauli on IRC I found out this is because of having an > old version of sympy installed. I think he is going to update the > version check in the test. > > Cheers > > Robin How does ScipPy depend on Sympy !? --Sebastian Haase From pav at iki.fi Sun Dec 13 06:43:40 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 13 Dec 2009 13:43:40 +0200 Subject: [SciPy-User] mpmath test hangs with latest svn In-Reply-To: References: <2d5132a50912120728s64afd221m947bae43e3ab319b@mail.gmail.com> <2d5132a50912120928l76690562ub1425568ef434313@mail.gmail.com> Message-ID: <1260704619.5228.32.camel@idol> su, 2009-12-13 kello 12:03 +0100, Sebastian Haase kirjoitti: [clip] > How does ScipPy depend on Sympy !? It does not. Only the test depends on mpmath, and the test is skipped if mpmath is not available. -- Pauli Virtanen From athanastasiou at googlemail.com Sun Dec 13 08:13:54 2009 From: athanastasiou at googlemail.com (Athanasios Anastasiou) Date: Sun, 13 Dec 2009 13:13:54 +0000 Subject: [SciPy-User] Odeint Integration Method and its time step Message-ID: Hello everyone I am trying to run some simple examples with odeint in order to get a better understanding of it but amongst other things i am a little bit confused by the time parameter (t) that is passed to the function (f). The specific questions that i have are: 1) What exactly does t represent in the derivatives function f? Is it absolute time? 2) Which method is odeint using to integrate the derivatives function and why does it call it more than one time for the same time instance? 
(t) 3) Considering the following odeint scipy examples: http://www.scipy.org/Cookbook/CoupledSpringMassSystem and http://www.scipy.org/LoktaVolterraTutorial Why isn't time (or the time step) taken into account within the function that returns the derivatives at each time instant? For more information about these please see below. Looking forward to hearing from you Athanasios Anastasiou Here are some more pieces of information regarding what i have done and the results that generated those questions above. Please note that i have not previously used the Fortran code that the scipy functions are based on. The first thing i tried was a spring-mass system with something like this: def dS(y,t,K,M,dT,c): dY=[0.0,0.0] F = (-K*y[1])-(c*y[0]) #Forces acting on the spring... (linear returning force, friction damping) a = F/M #...cause an acceleration... dY[0] = a * dT #...which sets the body in motion at some velocity over the specific time step... dY[1] = y[0] * dT #...which displaces the body by dS at every dT step return dY Fs=512 dT = 1.0/Fs y0 = [0.0,2.0] tSim = 4 #This is 4 seconds... nSim = tSim*Fs #...which translates to so many discrete samples theTime=range(0,nSim) S = SI.odeint(dS,y0,theTime, args=(32.0,4.0,dT,0.0)) pylab.plot(theTime,S[:,1]); #This plot is of course in "samples" rather than "seconds" This works as expected. The next step was to "drive" the spring-mass system with a sinusoid like this: (Only the dS function is given here the rest of the code from above remains almost the same) def dS(y,t,K,M,dT,c): dY=[0.0,0.0,0.0] F = (-K*y[1])-(c*y[0]) + scipy.sin(y[2]) * 8 #Forces acting on the spring... (linear returning force, friction damping) a = F/M #...cause an acceleration... dY[0] = a * dT #...which sets the body in motion at some velocity over the specific time step... 
dY[1] = y[0] * dT #...which displaces the body at every dT step dY[2] = 2.0*scipy.pi*4.0*dT #A completely independent integration of the phase component return dY (In this case the spring mass system is excited with a sinusoidal force running at a frequency of 4 Hz having an amplitude of 8 N) This again, seems to be working as expected but the mean amplitude does not have ONE resonant frequency. It seems to be "oscillating" erratically with a change in the driving frequency. If i drive the system with really low frequencies (0.5 Hz) the mean amplitude is greater than when i am driving the system at its resonating frequency of (1/2*pi)*sqrt(k/m) :-( At this point i started suspecting that there is something i am ignoring or not doing right in the whole simulation and decided to examine the values of t more closely. This is when i discovered that t(n),t(n+1) is not constant as i am assuming with that dT over there. The first thing i did was to obtain the dT from the t vector simply by dT=t(n)-t(n-1). Unfortunately this turns out to be 0.0 for every other t(n) which of course produces no change for dY which leads to ever decreasing time steps and eventually a fatal odeint error. The time variable (t) seems to be *absolute time*. Therefore i modified the code so that odeint runs for only one step and used the time variable directly for the calculation of dY as follows: def dS(y,t,K,M,dT,c): dY=[0.0, 0.0, 0.0] F = (-K*y[1]) - (c*y[0]) + (scipy.sin(y[2]) * 1.0) #Forces acting on the spring... a = F/M #- 9.81 #...cause an acceleration... dY[0] = a * t #...which sets the body in motion at some velocity.. 
dY[1] = y[0] * t #...which displaces the body at every dT step dY[2] = 2.0*scipy.pi*4.0*t #Simple integration of a phase component return dY Fs=512 dT = 1.0/Fs y0 = [0.0,0.0,0.0] #y0 = [0.0,0.0,0.0,0.0] tSim = 4 nSim = tSim*Fs theTime=range(0,nSim) S = scipy.zeros((nSim,len(y0))) k = 1 S[0,:] = y0 while k hi all, suppose I have a scipy.sparse matrix A and positions of elements to be extracted I = (2,3,4), J = (0,1,5). how can I get vector val (Python list or numpy array) that contains the elements A[2,0], A[3,1], A[4,5]? I haven't found it in scipy.sparse doc. And, if anyone knows, how can I get it from dense numpy array A? Thank you in advance, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ferrell at diablotech.com Sun Dec 13 09:30:44 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Sun, 13 Dec 2009 07:30:44 -0700 Subject: [SciPy-User] Pickling timeseries Dates In-Reply-To: <161504.16801.qm@web33006.mail.mud.yahoo.com> References: <161504.16801.qm@web33006.mail.mud.yahoo.com> Message-ID: <2DDD6B6B-1ED6-4949-9C35-BCACD2450045@diablotech.com> Thanks for the suggestion. I may try that. Seems pretty straight forward. -r On Dec 10, 2009, at 9:28 PM, David Baddeley wrote: > It also shouldn't be to hard to register a new pickle handler that > takes care of the conversion to e.g. datetime for you. I'm not > familiar with timeseries stuff, but wrote something similar to let > me pickle slices, which I've included below - as you can see it's > pretty trivial. After that you can carry on treating your date > (slice) objects as though they were picklable. 
> > import copy_reg > > def pickleSlice(slice): > return unpickleSlice, (slice.start, slice.stop, slice.step) > > def unpickleSlice(start, stop, step): > return slice(start, stop, step) > > copy_reg.pickle(slice, pickleSlice, unpickleSlice) > > best wishes, > David > > > > --- On Fri, 11/12/09, Pierre GM wrote: > >> From: Pierre GM >> Subject: Re: [SciPy-User] Pickling timeseries Dates >> To: "SciPy Users List" >> Received: Friday, 11 December, 2009, 4:37 PM >> On Dec 10, 2009, at 10:12 PM, Robert >> Ferrell wrote: >>>> >>> Fernando reminded me that any date scalar will trigger >> the pickle >>> problem. Some of my objects have Date instances >> stashed all over the >>> place. Converting all those to single-element >> DateArrays is not >>> feasible. >> >> And converting them to something else, like a datetime or >> even just a tuple (value,freq) >> >> >>> >>> Why can't Date instances be pickled, and how hard >> would it be to >>> change that? >> >> >> because we never implemented it on the C side, and that'll >> depend on whether Matt Knox can do it or not. >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Sun Dec 13 12:07:34 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 13 Dec 2009 12:07:34 -0500 Subject: [SciPy-User] Odeint Integration Method and its time step In-Reply-To: References: Message-ID: 2009/12/13 Athanasios Anastasiou : > Hello everyone > > I am trying to run some simple examples with odeint in order to get a > better understanding of it but amongst other things i am a little bit > confused by the time parameter (t) that is passed to the function (f). 
> > The specific questions that i have are: > > 1) What exactly does t represent in the derivatives function f? Is it > absolute time? Yes, t is absolute time. > 2) Which method is odeint using to integrate the derivatives function > and why does it call it more than one time for the same time instance? > (t) In general, in a system of ODEs, you have Y' = f(Y,t) That is, Y is a vector, and the time derivative of Y is given by a function of Y and t. That is, the derivative you calculate is allowed to depend not just on time but also on the position. Thus you can solve something like dy/dt = y to get an exponential, or dy/dt = t to get t**2/2, or some combination of the above. In order to solve such a thing, odeint needs to evaluate f(Y,t) at various values of Y and t. It may well occur that it tries f(Y1,t) and f(Y2,t) or f(Y,t1) and f(Y, t2). This is just the integrator trying to map out the derivative function. It may also occur that it evaluates f(Y,t) more than once. This is redundant, and in an ideal world odeint would simply remember the value. But odeint is a complex piece of code, and it may not have been simple to arrange that it never repeats a value. > 3) Considering the following odeint scipy examples: > http://www.scipy.org/Cookbook/CoupledSpringMassSystem > and > http://www.scipy.org/LoktaVolterraTutorial > Why isn't time (or the time step) taken into account within the > function that returns the derivatives at each time instant? Because in those examples, the derivative only depends on time through the function value. This is like the y' = y equation, where there is no explicit dependence on t. As for depending on the time step, that is a different question. Your right-hand-side function should not know anything about the time steps, since they are a detail of how the solver works. All knowledge of time steps is hidden inside the solver itself, and your code should never need to know what they are. 
This is important because odeint is an adaptive solver that adjusts its time steps as necessary to obtain the desired accuracy. In regions where the solution has simple behaviour, it will take big steps, while any place the solution has complex behaviour it will (should) take much smaller steps. Incidentally, if you want to evaluate a second-order ODE, you should convert it into a system of first-order ODEs. Anne From emmanuelle.gouillart at normalesup.org Sun Dec 13 12:34:02 2009 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Sun, 13 Dec 2009 18:34:02 +0100 Subject: [SciPy-User] indexation of sparse matrices In-Reply-To: References: Message-ID: <20091213173402.GA15408@phare.normalesup.org> Hi Dmitrey, what you want to do is called *fancy indexing* in numpy. Fancy indexing consists in indexing an array with (a) sequence(s) of indices, see http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing for more details. It works on numpy arrays as well as sparse matrices for the lil_matrix format. Here is an example: >>> import numpy as np >>> from scipy import sparse >>> a = np.zeros((10,10)) >>> a[2, 0] = a[3, 1] = a[4, 2] = 1 >>> a[4, 5] = 2 >>> a_sparse = sparse.lil_matrix(a) >>> a_sparse <10x10 sparse matrix of type '' with 4 stored elements in LInked List format> >>> I = (2, 3, 4) >>> J = (0, 1, 5) >>> a[I, J] array([ 1., 1., 2.]) >>> a_sparse[I, J] <1x3 sparse matrix of type '' with 3 stored elements in LInked List format> >>> a_sparse[I, J].todense() matrix([[ 1., 1., 2.]]) The lil_matrix is meant for supporting fancy indexing, but it is not efficient for matrices operations such as inversion or multiplication; you should transform your matrix to another format for performing such operations. Cheers, Emmanuelle On Sun, Dec 13, 2009 at 03:44:52PM +0200, Dmitrey wrote: > hi all, > suppose I have a scipy.sparse matrix A and positions of elements to be > extracted I = (2,3,4), J = (0,1,5). 
> how can I get vector val (Python list or numpy array) that contains the > elements A[2,0], A[3,1], A[4,5]? > I haven't found it in scipy.sparse doc. > And, if anyone knows, how can I get it from dense numpy array A? > Thank you in advance, D. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Sun Dec 13 18:55:50 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 Dec 2009 00:55:50 +0100 Subject: [SciPy-User] [Ann] EuroScipy 2010 Message-ID: <20091213235550.GB27356@phare.normalesup.org> ========================== Announcing EuroScipy 2010 ========================== --------------------------------------------------- The 3rd European meeting on Python in Science --------------------------------------------------- **Paris, Ecole Normale Supérieure, July 8-11 2010** We are happy to announce the 3rd EuroScipy meeting, in Paris, July 2010. The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic research and state-of-the-art industry.
Important dates ================== ====================================== =================================== **Registration opens** Sunday March 29 **Paper submission deadline** Sunday May 9 **Program announced** Sunday May 22 **Tutorials tracks** Thursday July 8 - Friday July 9 **Conference track** Saturday July 10 - Sunday July 11 ====================================== =================================== Tutorial ========= There will be two tutorial tracks at the conference, an introductory one, to bring attendees up to speed with the Python language as a scientific tool, and an advanced track, during which experts of the field will lecture on specific advanced topics such as advanced use of numpy, scientific visualization, software engineering... Main conference topics ======================== We will be soliciting talks on the following topics: - Presentations of scientific tools and libraries using the Python language, including but not limited to: - Vector and array manipulation - Parallel computing - Scientific visualization - Scientific data flow and persistence - Algorithms implemented or exposed in Python - Web applications and portals for science and engineering - Reports on the use of Python in scientific achievements or ongoing projects. - General-purpose Python tools that can be of special interest to the scientific community. Keynote Speaker: Hans Petter Langtangen ========================================== We are excited to welcome Hans Petter Langtangen as our keynote speaker. - Director of scientific computing and bio-medical research at Simula labs, Oslo - Author of the famous book Python scripting for computational science http://www.springer.com/math/cse/book/978-3-540-73915-9 -- Gaël Varoquaux, conference co-chair Nicolas Chauvat, conference co-chair Program committee .................
Romain Brette (ENS Paris, DEC) Mike Müller (Python Academy) Christophe Pradal (CIRAD/INRIA, DigiPlantes team) Pierre Raybault (CEA, DAM) Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute) From joebarfett at yahoo.ca Sun Dec 13 23:14:26 2009 From: joebarfett at yahoo.ca (theWonder) Date: Sun, 13 Dec 2009 20:14:26 -0800 (PST) Subject: [SciPy-User] [SciPy-user] f2py fortran95 won't compile] In-Reply-To: References: Message-ID: <26772806.post@talk.nabble.com> Well that compiles. Now I have hello.f sitting in my documents directory so the compile worked! Yay! But going to python and saying import hello yields "no module named hello". Ideas? Again many thanks! -- View this message in context: http://old.nabble.com/Re%3A-f2py-fortran95-won%27t-compile--tp26752997p26772806.html Sent from the Scipy-User mailing list archive at Nabble.com. From wnbell at gmail.com Mon Dec 14 01:39:03 2009 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 14 Dec 2009 01:39:03 -0500 Subject: [SciPy-User] indexation of sparse matrices In-Reply-To: <20091213173402.GA15408@phare.normalesup.org> References: <20091213173402.GA15408@phare.normalesup.org> Message-ID: On Sun, Dec 13, 2009 at 12:34 PM, Emmanuelle Gouillart wrote: > Hi Dmitrey, > > what you want to do is called *fancy indexing* in numpy.
?Fancy > indexing consists in indexing an array with (a) sequence(s) of indices, > see > http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing > for more details. It works on numpy arrays as well as sparse matrices for > the lil_matrix format. Here is an example: > > ? ? ? ?The lil_matrix is meant for supporting fancy indexing, but it is > not efficient for matrices operations such as inversion or > multiplication; you should transform your matrix to another format for > performing such operations. > The CSR and CSC formats also support fast fancy indexing. Something like: >>> A = ... some sparse matrix >>> csr_matrix(A)[I,J] ought to work well. -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ From cimrman3 at ntc.zcu.cz Mon Dec 14 03:29:50 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 14 Dec 2009 09:29:50 +0100 Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem In-Reply-To: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> Message-ID: <4B25F77E.4060607@ntc.zcu.cz> Hi Robin, Robin wrote: > Hi, > > I have a problem with sparse matrices that have been roundtripped > through savemat/loadmat and scikits.umf: > In [29]: scipy.__version__ > Out[29]: '0.8.0.dev6136' > > scikits.umfpack doesn't have a version but it is latest from scikits > svn (r2239). > > Here is the simplest way I could recreate the problem: > > from scikits.umfpack import UmfpackContext > import scipy.sparse as sparse > from scipy.io import loadmat, savemat > > a = sparse.eye(3,3,format='csc') > > umf = UmfpackContext() > print 'Original sparse matrix:' > print a.__repr__() > # works fine > umf.numeric(a) > print 'savemat/loadmat ...' 
> savemat('test',{'a':a}) > a2 = loadmat('test')['a'] > print 'Loaded sparse matrix:' > print a2.__repr__() > # doesnt work > umf.numeric(a2) > > which outputs: > > Original sparse matrix: > <3x3 sparse matrix of type '' > with 3 stored elements in Compressed Sparse Column format> > savemat/loadmat ... > Loaded sparse matrix: > <3x3 sparse matrix of type '' > with 3 stored elements in Compressed Sparse Column format> > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > > /Users/robince/svn/pyentropy/pyentropy/umf.py in () > 15 print 'Loaded sparse matrix:' > 16 print a2.__repr__() > ---> 17 umf.numeric(a2) > 18 > 19 > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc > in numeric(self, mtx) > 393 > 394 if self._symbolic is None: > --> 395 self.symbolic( mtx ) > 396 > 397 indx = self._getIndx( mtx ) > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc > in symbolic(self, mtx) > 364 = self.funs.symbolic( mtx.shape[0], mtx.shape[1], > 365 mtx.indptr, indx, mtx.data, > --> 366 self.control, self.info ) > 367 else: > 368 real, imag = mtx.data.real.copy(), mtx.data.imag.copy() > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/_umfpack.pyc > in umfpack_di_symbolic(*args) > 435 double Control, double Info) -> int > 436 """ > --> 437 return __umfpack.umfpack_di_symbolic(*args) > 438 > 439 def umfpack_dl_symbolic(*args): > > TypeError: not a C array > WARNING: Failure executing file: > > I can't figure out whats causing it - and whether its a bug in > savemat/loadmat or scikits.umfpack. > > Is scikits.umfpack still supported? 
Is there a way to prefactor matrix > with the libraries built into scipy (I need to solve the same large > sparse matrix many times so I was prefactoring with umfpack). I am not sure what causes this, but you can try the following: 1. umfpack requires the indices array to be sorted in ascending order, there is a function to ensure that: csc_matrix.ensure_sorted_indices() 2. the wrappers expect all the sparse matrix arrays (indptr, indices, data) to be in c-contiguous order - try a2.indices = a2.indices.copy() etc. Hope that helps, r. From athanastasiou at googlemail.com Mon Dec 14 05:03:18 2009 From: athanastasiou at googlemail.com (Athanasios Anastasiou) Date: Mon, 14 Dec 2009 10:03:18 +0000 Subject: [SciPy-User] Odeint Integration Method and its time step Message-ID: Hello Anne Thank you for your response, please see below: > In order to solve such a thing, odeint needs to evaluate f(Y,t) at > various values of Y and t. It may well occur that it tries f(Y1,t) and > f(Y2,t) or f(Y,t1) and f(Y, t2). This is just the integrator trying to > map out the derivative function. Exactly, so (t) must somehow contribute to the return value of f, otherwise the integrator would not call it at different values of (t) to "map the derivative function" > > 3) Considering the following odeint scipy examples: > > http://www.scipy.org/Cookbook/CoupledSpringMassSystem > > and > > http://www.scipy.org/LoktaVolterraTutorial > > Why isn't time (or the time step) taken into account within the > > function that returns the derivatives at each time instant? > > Because in those examples, the derivative only depends on time through > the function value. This is like the y' = y equation, where there is > no explicit dependence on t. > > As for depending on the time step, that is a different question. Your > right-hand-side function should not know anything about the time > steps, since they are a detail of how the solver works.
All knowledge > of time steps is hidden inside the solver itself, and your code should > never need to know what they are. This is important because odeint is > an adaptive solver that adjusts its time steps as necessary to obtain > the desired accuracy. In regions where the solution has simple > behaviour, it will take big steps, while any place the solution has > complex behaviour it will (should) take much smaller steps. This is all understood but doesn't answer the question (or somehow, I don't get it). You need to know how long an object was accelerating for to derive its velocity, and also how long the object travelled at that velocity to derive its displacement. This can only be known to (f) through the variable (t). So, if there is time dependence (as in the case of a spring-mass system), it should somehow be taken into account within (f). > Incidentally, if you want to evaluate a second-order ODE, you should > convert it into a system of first-order ODEs. Which is what I have done in the code following my previous message. There is one first-order equation for the velocity of the mass which is then fed to the second first-order equation which returns displacement. All the best Athanasios From pav+sp at iki.fi Mon Dec 14 05:33:31 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Mon, 14 Dec 2009 10:33:31 +0000 (UTC) Subject: [SciPy-User] Odeint Integration Method and its time step References: Message-ID: Mon, 14 Dec 2009 10:03:18 +0000, Athanasios Anastasiou wrote: [clip] > This is all understood but doesn't answer the question (or somehow, i > don't get it). > > You need to know how long was an object accelerating for to derive its > velocity and also how long did the object travel at that velocity to
In a form where you can use odeint, they read dx/dt = v(t) dv/dt = F(x(t)) / m Check your a math textbook or Wikipedia for more information about ODEs. From njs at pobox.com Mon Dec 14 06:18:38 2009 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 14 Dec 2009 03:18:38 -0800 Subject: [SciPy-User] [announce] scikits.sparse v0.1, wrapper for CHOLMOD (and hopefully more) Message-ID: <961fa2b40912140318k6a0e2c02na7fe6a7612e716cb@mail.gmail.com> As mentioned previously[0], I've written a scipy.sparse-compatible wrapper for the CHOLMOD sparse Cholesky routines. I considered making it 'scikits.cholmod' (cf. scikits.umfpack), but creating a new scikit every time someone needs a sparse linear algebra routine seems like it will become very silly very quickly, so instead I hereby declare the existence of 'scikits.sparse' as a home for all such routines. (Of course, it currently only contains scikits.sparse.cholmod). Manual: http://packages.python.org/scikits.sparse/ Source: hg clone https://scikits-sparse.googlecode.com/hg/ scikits.sparse Homepage: http://code.google.com/p/scikits-sparse Bug tracker: http://code.google.com/p/scikits-sparse/issues/list Mailing list: scikits-sparse-discuss at lists.vorpus.org http://lists.vorpus.org/cgi-bin/mailman/listinfo/scikits-sparse-discuss I would have sucked scikits.umfpack in, except that it uses SWIG, which I don't understand and am not really inspired to learn, at least for a v0.1 release. Also, there appear to still be some sort of complicated entanglements with scipy.sparse (e.g. in at least part of the test suite). Anyone feeling inspired? It's not a very complicated interface; just rewrapping it might be as easy as anything else. SuiteSparseQR would also be a natural fit, since it uses the (already wrapped) CHOLMOD matrix interfaces. 
[0] http://mail.scipy.org/pipermail/scipy-dev/2009-November/013244.html Share and enjoy, -- Nathaniel From eg at fft.be Mon Dec 14 07:14:12 2009 From: eg at fft.be (Eloi Gaudry) Date: Mon, 14 Dec 2009 13:14:12 +0100 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler Message-ID: <4B262C14.8010102@fft.be> Hi, I'm having some trouble building scipy using Visual Studio 2008 and Intel Fortran 11.1. It seems that the objects built by the Intel Fortran compiler are not output in the 'right' directory. As you can see below, the built objects are in the current directory (with *.obj suffix instead of *.o). I think I'm missing something somewhere in the distutils module, but I cannot find what. Could the IntelFortran detection module be broken for version 11.1? Note that numpy-1.3.0 was built using the same compilers (and without any issue). Aren't we using the same module to build scipy? Any help would be appreciated, Thanks, Eloi C:\fft\external\scipy-0.7.1>call "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86 Setting environment for using Microsoft Visual Studio 2008 x86 tools. C:\fft\external\scipy-0.7.1>set FC_VENDOR=Intel C:\fft\external\scipy-0.7.1>SET IFORT_COMPILER11=C:\Program Files\Intel\Compiler\11.1\051 C:\fft\external\scipy-0.7.1>SET INTEL_LICENSE_FILE=C:\Program Files\Common Files\Intel\Licenses;%INTEL_LICENSE_FILE% C:\fft\external\scipy-0.7.1>SET PATH=%IFORT_COMPILER11%\Bin\ia32;%PATH% C:\fft\external\scipy-0.7.1>SET LIB=%IFORT_COMPILER11%\Lib\ia32;%LIB% C:\fft\external\scipy-0.7.1>SET INCLUDE=%IFORT_COMPILER11%\Include;%IFORT_COMPILER11%\Include\ia32;%INCLUDE C:\fft\external\scipy-0.7.1>python setup.py build --compiler=msvc --fcompiler=intel --verbose Warning: No configuration returned, assuming unavailable.
blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] library_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] FOUND: libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] library_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] lapack_opt_info: lapack_mkl_info: mkl_info: FOUND: libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] library_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] FOUND: libraries = ['mkl_scalapack_core_dll', 'mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] library_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] FOUND: libraries = ['mkl_scalapack_core_dll', 'mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] library_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] umfpack_info: libraries umfpack not found in c:\fft\bin\python-2.6.4\lib libraries umfpack not found in C:\ libraries umfpack not found in c:\fft\bin\python-2.6.4\libs c:\fft\bin\python-2.6.4\lib\site-packages\numpy\distutils\system_info.py:452: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. 
building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\scipy\interpolate\src\dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\build\src.win32-2.6\scipy\lib\blas\fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build\src.win32-2.6\scipy\lib\blas\cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build\src.win32-2.6\scipy\lib\lapack\clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build\src.win32-2.6\scipy\linalg\fblas.pyf' to sources. f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\build\src.win32-2.6\scipy\linalg\fblas-f2pywrappers.f' to sources. 
building extension "scipy.linalg.cblas" sources adding 'build\src.win32-2.6\scipy\linalg\cblas.pyf' to sources. f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build\src.win32-2.6\scipy\linalg\flapack.pyf' to sources. f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\build\src.win32-2.6\scipy\linalg\flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build\src.win32-2.6\scipy\linalg\clapack.pyf' to sources. f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. 
building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources building extension "scipy.sparse.linalg.dsolve._csuperlu" sources building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\build\src.win32-2.6\scipy\sparse\linalg\eigen\arpack\_arpack-f2pywrappers.f' to sources building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.spatial.ckdtree" sources building extension "scipy.spatial._distance_wrap" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. 
building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.stats.vonmises_cython" sources building extension "scipy.stats.futil" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build\src.win32-2.6\fortranobject.c' to sources. adding 'build\src.win32-2.6' to include_dirs. adding 'build\src.win32-2.6\scipy\stats\mvn-f2pywrappers.f' to sources. building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying scipy\version.py -> build\lib.win32-2.6\scipy copying build\src.win32-2.6\scipy\__config__.py -> build\lib.win32-2.6\scipy running build_clib No module named msvccompiler in numpy.distutils; trying from distutils customize MSVCCompiler customize MSVCCompiler using build_clib customize IntelFCompiler debutg compiler_f90 debugc Found executable C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe debutg {'linker_exe': None, 'archiver': ['ar', '-cr'], 'compiler_f77': [None, '-72', '-w90', '-w95'], 'version_cmd': F77>', '-FI', '-V', '-c', 'c:\\docume~1\\qa\\locals~1\\temp\\tmp4uc_dl\\frbyvf.f', '-o', 'c:\\docume~1\\qa\\locals~1\ mp\\tmp4uc_dl\\frbyvf.o'], 'compiler_fix': [None, '-FI'], 'linker_so': ['', '-shared'], 'ranlib': ['ranlib'], 'c iler_f90': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe']} debutg compiler_f77 debugc debutg compiler_fix debugc debutg linker_so debugc debutg linker_exe debugc debutg version_cmd debugc debutg archiver debugc debutg ranlib debugc debutg compiler_f90 debugc debutg {'linker_exe': None, 'archiver': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-cr' 'compiler_f77': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', 
'-72', '-w90', '-w95'], 'vers _cmd': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-FI', '-V', '-c', 'c:\\docume~1\\qa\\ als~1\\temp\\tmp4uc_dl\\frbyvf.f', '-o', 'c:\\docume~1\\qa\\locals~1\\temp\\tmp4uc_dl\\frbyvf.o'], 'compiler_fix': [' \Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-FI'], 'linker_so': ['C:\\Program Files\\Intel\\C iler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-shared'], 'ranlib': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\i \\ifort.exe'], 'compiler_f90': ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe']} debutg compiler_f77 debugc debutg compiler_fix debugc debutg linker_so debugc debutg linker_exe debugc debutg version_cmd debugc debutg archiver debugc debutg ranlib debugc Found executable C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe customize IntelFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -72 -w90 -w95 -KPIC -cm -O3 -unroll Fortran f90 compiler: C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -FR -KPIC -cm -O3 -unroll Fortran fix compiler: C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -FI -KPIC -cm -O3 -unroll creating build\temp.win32-2.6 creating build\temp.win32-2.6\scipy creating build\temp.win32-2.6\scipy\fftpack creating build\temp.win32-2.6\scipy\fftpack\src creating build\temp.win32-2.6\scipy\fftpack\src\dfftpack compile options: '-Ic:\fft\bin\python-2.6.4\lib\site-packages\numpy\core\include -c' ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqb.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqf.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqi.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dcost.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosti.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftb.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftb1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftf.f 
ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftf1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dffti.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dffti1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqb.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqf.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqi.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsint.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsint1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinti.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftb.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftb1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftf.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftf1.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zffti.f ifort.exe:f77: scipy\fftpack\src\dfftpack\zffti1.f C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\lib.exe build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o bui ld\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqi.o build\tem p.win32-2.6\scipy\fftpack\src\dfftpack\dcost.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosti.o build\temp.win32 -2.6\scipy\fftpack\src\dfftpack\dfftb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb1.o build\temp.win32-2.6\sc ipy\fftpack\src\dfftpack\dfftf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf1.o build\temp.win32-2.6\scipy\fft pack\src\dfftpack\dffti.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dffti1.o build\temp.win32-2.6\scipy\fftpack\sr c\dfftpack\dsinqb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinqf.o build\temp.win32-2.6\scipy\fftpack\src\dfft pack\dsinqi.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsint.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\ds int1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinti.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf.o build\t 
emp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti.o build\temp.win 32-2.6\scipy\fftpack\src\dfftpack\zffti1.o /OUT:build\temp.win32-2.6\dfftpack.lib Found executable C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\lib.exe Microsoft (R) Library Manager Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. LINK : fatal error LNK1181: cannot open input file 'build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o' error: Command "C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\lib.exe build\temp.win32-2.6\scipy\fftpack\src\dfftp ack\dcosqb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dc osqi.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcost.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosti.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb1.o build\t emp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf1.o build\temp.win 32-2.6\scipy\fftpack\src\dfftpack\dffti.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dffti1.o build\temp.win32-2.6\ scipy\fftpack\src\dfftpack\dsinqb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinqf.o build\temp.win32-2.6\scipy\ fftpack\src\dfftpack\dsinqi.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsint.o build\temp.win32-2.6\scipy\fftpack \src\dfftpack\dsint1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinti.o build\temp.win32-2.6\scipy\fftpack\src\d fftpack\zfftb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack \zfftf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti. 
o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti1.o /OUT:build\temp.win32-2.6\dfftpack.lib" failed with exit stat us 1181 C:\fft\external\scipy-0.7.1>dir Volume in drive C has no label. Volume Serial Number is 54FF-E262 Directory of C:\fft\external\scipy-0.7.1 12/14/2009 01:08 PM . 12/14/2009 01:08 PM .. 12/09/2009 05:58 PM build 11/27/2009 04:52 PM 585 build_scipy.x64.bat 12/09/2009 05:14 PM 790 build_scipy.x86.bat 12/09/2009 05:14 PM 848 build_scipy.x86.bat~ 12/14/2009 01:07 PM 2,811 dcosqb.obj 12/14/2009 01:07 PM 2,649 dcosqf.obj 12/14/2009 01:07 PM 1,277 dcosqi.obj 12/14/2009 01:07 PM 1,849 dcost.obj 12/14/2009 01:07 PM 1,828 dcosti.obj 12/14/2009 01:07 PM 525 dfftb.obj 12/14/2009 01:08 PM 22,309 dfftb1.obj 12/14/2009 01:08 PM 525 dfftf.obj 12/14/2009 01:08 PM 23,737 dfftf1.obj 12/14/2009 01:08 PM 525 dffti.obj 12/14/2009 01:08 PM 1,930 dffti1.obj 11/24/2009 04:01 PM doc 12/14/2009 01:08 PM 1,324 dsinqb.obj 12/14/2009 01:08 PM 1,161 dsinqf.obj 12/14/2009 01:08 PM 509 dsinqi.obj 12/14/2009 01:08 PM 541 dsint.obj 12/14/2009 01:08 PM 2,746 dsint1.obj 12/14/2009 01:08 PM 1,145 dsinti.obj 12/14/2009 01:05 PM 2,174 FFT_Install.aixgcc.fromnumpy13.python264.sh 11/24/2009 04:01 PM 7,414 FFT_install.dynblas.sh 11/24/2009 04:01 PM 759 FFT_Install.fromnumpy13.python264.sh 12/14/2009 01:07 PM 609 frbyvf.obj 11/24/2009 04:01 PM 13,619 INSTALL.txt 11/24/2009 04:01 PM 291 LATEST.txt 11/24/2009 04:01 PM 1,570 LICENSE.txt 11/24/2009 04:01 PM 695 MANIFEST.in 11/24/2009 04:01 PM 1,715 PKG-INFO 11/24/2009 04:01 PM 4,097 README.txt 12/14/2009 01:07 PM scipy 11/24/2009 04:01 PM 4,991 setup.py 11/24/2009 04:01 PM 149 setupegg.py 11/24/2009 04:01 PM 3,441 setupscons.py 12/09/2009 04:36 PM 128 site.aixxlf.cfg 11/24/2009 04:01 PM 4,041 THANKS.txt 11/24/2009 04:01 PM 1,233 TOCHANGE.txt 12/14/2009 01:08 PM 541 zfftb.obj 12/14/2009 01:08 PM 22,229 zfftb1.obj 12/14/2009 01:08 PM 541 zfftf.obj 12/14/2009 01:08 PM 22,341 zfftf1.obj 12/14/2009 01:08 PM 525 zffti.obj 12/14/2009 01:08 PM 
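For what it's worth, the failure itself is a path/extension mismatch: numpy.distutils hands lib.exe object paths ending in .o under build\temp..., while ifort actually wrote .obj files into the current directory (visible in the dir listing above). A tiny sketch of that mismatch, with file names taken from the log and nothing scipy-specific:

```python
import ntpath  # Windows path semantics, usable on any platform

# What distutils passes to lib.exe vs. what ifort actually produced
expected = r"build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o"
produced = r"dcosqb.obj"

# The stems match; only the directory and suffix differ, hence LNK1181
same_stem = ntpath.splitext(ntpath.basename(expected))[0] == ntpath.splitext(produced)[0]
print(same_stem)  # True
```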
2,046 zffti1.obj 42 File(s) 164,763 bytes From robince at gmail.com Mon Dec 14 09:15:58 2009 From: robince at gmail.com (Robin) Date: Mon, 14 Dec 2009 14:15:58 +0000 Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem In-Reply-To: <4B25F77E.4060607@ntc.zcu.cz> References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> Message-ID: <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> On Mon, Dec 14, 2009 at 8:29 AM, Robert Cimrman wrote: > Hi Robin, > > Robin wrote: >> Hi, >> >> I have a problem with sparse matrices that have been roundtripped >> through savemat/loadmat and scikits.umf: >> In [29]: scipy.__version__ >> Out[29]: '0.8.0.dev6136' >> >> scikits.umfpack doesn't have a version but it is latest from scikits >> svn (r2239). >> >> Here is the simplest way I could recreate the problem: >> >> from scikits.umfpack import UmfpackContext >> import scipy.sparse as sparse >> from scipy.io import loadmat, savemat >> >> a = sparse.eye(3,3,format='csc') >> >> umf = UmfpackContext() >> print 'Original sparse matrix:' >> print a.__repr__() >> # works fine >> umf.numeric(a) >> print 'savemat/loadmat ...' >> savemat('test',{'a':a}) >> a2 = loadmat('test')['a'] >> print 'Loaded sparse matrix:' >> print a2.__repr__() >> # doesnt work >> umf.numeric(a2) >> >> which outputs: >> >> Original sparse matrix: >> <3x3 sparse matrix of type '' >> ? ? ? ? with 3 stored elements in Compressed Sparse Column format> >> savemat/loadmat ... >> Loaded sparse matrix: >> <3x3 sparse matrix of type '' >> ? ? ? ? with 3 stored elements in Compressed Sparse Column format> >> --------------------------------------------------------------------------- >> TypeError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) >> >> /Users/robince/svn/pyentropy/pyentropy/umf.py in () >> ? ? ?15 print 'Loaded sparse matrix:' >> ? ? ?16 print a2.__repr__() >> ---> 17 umf.numeric(a2) >> ? ? ?18 >> ? ? 
19 >> >> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc >> in numeric(self, mtx) >> 393 >> 394 if self._symbolic is None: >> --> 395 self.symbolic( mtx ) >> 396 >> 397 indx = self._getIndx( mtx ) >> >> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/umfpack.pyc >> in symbolic(self, mtx) >> 364 = self.funs.symbolic( mtx.shape[0], mtx.shape[1], >> 365 mtx.indptr, indx, mtx.data, >> --> 366 self.control, self.info ) >> 367 else: >> 368 real, imag = mtx.data.real.copy(), mtx.data.imag.copy() >> >> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scikits.umfpack-5.1.0-py2.5-macosx-10.3-i386.egg/scikits/umfpack/_umfpack.pyc >> in umfpack_di_symbolic(*args) >> 435 double Control, double Info) -> int >> 436 """ >> --> 437 return __umfpack.umfpack_di_symbolic(*args) >> 438 >> 439 def umfpack_dl_symbolic(*args): >> >> TypeError: not a C array >> WARNING: Failure executing file: >> >> I can't figure out whats causing it - and whether its a bug in >> savemat/loadmat or scikits.umfpack. >> >> Is scikits.umfpack still supported? Is there a way to prefactor matrix >> with the libraries built into scipy (I need to solve the same large >> sparse matrix many times so I was prefactoring with umfpack). > > I am not sure what causes this, but you try the following: > > 1. umfpack requires the indices array to be sorted in ascending order, there is > a function to ensure that: > > csc_matrix.ensure_sorted_indices() > > 2.
the wrappers expect all the sparse matrix arrays (indptr, indices, data) to > be in c-contiguous order - try a2.indices = a2.indices.copy() etc. > > Hope that helps, > r. Hi, Thanks very much. Sorting the indices seems to fix it (although ensure_sorted_indices is deprecated in favour of sorted_indices or sort_indices). Still not sure why though - in this simple example the indices do seem to be sorted - the only change I can find from calling the sorted_indices function is that afterwards the WRITEABLE flag of .indices is True (as it is originally) but on the loadmat'ed array WRITEABLE is False. Could this be causing it? Is it a bug in loadmat - should the flag be set differently? (I'm not sure what it does). In [31]: a.indices # original Out[31]: array([0, 1, 2]) In [32]: a2.indices # loaded Out[32]: array([0, 1, 2]) In [33]: a2.sorted_indices().indices # loaded + sorted Out[33]: array([0, 1, 2]) In [34]: a.indices.flags # original Out[34]: C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False In [35]: a2.indices.flags # loaded Out[35]: C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : False ALIGNED : True UPDATEIFCOPY : False In [37]: a2.sorted_indices().indices.flags # loaded and sorted Out[37]: C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False Thanks again for the help, Cheers Robin From athanastasiou at googlemail.com Mon Dec 14 09:25:34 2009 From: athanastasiou at googlemail.com (Athanasios Anastasiou) Date: Mon, 14 Dec 2009 14:25:34 +0000 Subject: [SciPy-User] Odeint Integration Method and its time step Message-ID: Hello Pauli Thank you for your response. An earlier message by Ms Anne Archibald helped in clarifying the issues I had with odeint. I would consider this thread closed.
All the best

Athanasios Anastasiou

From matthew.brett at gmail.com Mon Dec 14 11:17:20 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 14 Dec 2009 11:17:20 -0500
Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem
In-Reply-To: <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com>
References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com>
Message-ID: <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com>

Hi,

> Still not sure why, though - in this simple example the indices do seem
> to be sorted. The only change I can find after calling sorted_indices
> is that the WRITEABLE flag of .indices is True (as it is originally),
> whereas on the loadmat'ed array WRITEABLE is False.
>
> Could this be causing it? Is it a bug in loadmat - should the flag be
> set differently? (I'm not sure what it does.)

Yes, that's strange.  Do you see the same behavior for loadmat from scipy 0.7.1?

Best,

Matthew

From robince at gmail.com Mon Dec 14 11:54:06 2009
From: robince at gmail.com (Robin)
Date: Mon, 14 Dec 2009 16:54:06 +0000
Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem
In-Reply-To: <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com>
References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com>
Message-ID: <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com>

On Mon, Dec 14, 2009 at 4:17 PM, Matthew Brett wrote:
> Hi,
>
>> Still not sure why, though - in this simple example the indices do seem
>> to be sorted. The only change I can find after calling sorted_indices
>> is that the WRITEABLE flag of .indices is True (as it is originally),
>> whereas on the loadmat'ed array WRITEABLE is False.
>>
>> Could this be causing it? Is it a bug in loadmat - should the flag be
>> set differently? (I'm not sure what it does.)
>
> Yes, that's strange.  Do you see the same behavior for loadmat from scipy 0.7.1?

No, Python 2.6 with scipy 0.7.1 (through macports) doesn't seem to have
WRITEABLE False. I don't have umfpack in that installation to check, but I
would guess it would work.

In [3]: scipy.__version__
Out[3]: '0.7.1'

In [4]: a.indices.flags
Out[4]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [6]: a2.indices.flags
Out[6]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

The environment where it happens is python 2.5.4, scipy '0.8.0.dev6136'.
But it is not limited to sparse: everything seems to be loaded with
WRITEABLE False (I'm not sure if this would be causing the problem
with umfpack or if it could cause other problems):

In [61]: a = eye(3)

In [62]: a.flags
Out[62]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [63]: savemat('test',{'a':a})

In [64]: a2 = loadmat('test')['a']

In [65]: a2.flags
Out[65]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : False
  ALIGNED : False
  UPDATEIFCOPY : False

Cheers

Robin

From matthew.brett at gmail.com Mon Dec 14 12:03:42 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 14 Dec 2009 12:03:42 -0500
Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem
In-Reply-To: <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com>
References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com> <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com>
Message-ID: <1e2af89e0912140903q368aae52t1d3f82faedf7b377@mail.gmail.com>

Hi,

> The
environment where it happens is python 2.5.4, scipy '0.8.0.dev6136'.
> But it is not limited to sparse: everything seems to be loaded with
> WRITEABLE False (I'm not sure if this would be causing the problem
> with umfpack or if it could cause other problems):

Ah - how odd - thanks - that's a bug I should fix...

Matthew

From tmp50 at ukr.net Mon Dec 14 15:51:20 2009
From: tmp50 at ukr.net (Dmitrey)
Date: Mon, 14 Dec 2009 22:51:20 +0200
Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow
In-Reply-To:
Message-ID:

From: Nathan Bell

The CSR and CSC formats also support fast fancy indexing. Something like:

>>> A = ... some sparse matrix
>>> csr_matrix(A)[I,J]

ought to work well.

I have tried all 3; lil works faster than any of the others, but casting to a
numpy ndarray and doing the fancy indexing on the dense array obtained works
several times faster still, which is extremely important for me. Thus, on my
problem with a sparse matrix of shape 151 x 1178, casting it to dense each
time before applying fancy indexing makes the ipopt numerical optimization
solver run faster: from 130 to 60 sec. Here's the code that demonstrates the
difference:

from scipy.sparse import lil_matrix
from numpy import where, zeros, prod, vstack, ones
from time import time

n, m = 1000, 20000
M = vstack((zeros((n, m)), ones((25, m))))
I, J = where(M)

# 1: time for lil
M = lil_matrix(M)
print 'sparsity of the matrix:', float(M.size)/prod(M.shape)
t = time()
M[I, J]
print('time elapsed with lil: %f' % (time()-t))

# 2: time for dense
t = time()
M = M.A  # even with this conversion included, the time elapsed is less
M[I, J]
print('time elapsed with ndarray: %f' % (time()-t))

# output:
# sparsity of the matrix: 0.0243902439024
# time elapsed with lil: 17.006631
# time elapsed with ndarray: 0.710046

# as for csr and csc, fancy indexation works several times slower

So, I have filed a ticket for it.
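[Editor's note: Nathan's `csr_matrix(A)[I, J]` suggestion can be sketched as below. This is a minimal illustration with made-up shapes, not the benchmark from the thread; whether it beats converting to dense depends on the density and access pattern, as Dmitrey's timings show.]

```python
import numpy as np
from scipy.sparse import csr_matrix

# Small dense array with a block of nonzeros, loosely mimicking the
# structure of the benchmark above (shapes here are invented)
M = np.zeros((50, 40))
M[45:, :] = 1.0
I, J = np.where(M)

S = csr_matrix(M)

# Fancy indexing on the CSR matrix returns a 1 x len(I) matrix,
# so flatten it before comparing against the dense lookup
sparse_vals = np.asarray(S[I, J]).ravel()
dense_vals = M[I, J]

assert np.array_equal(sparse_vals, dense_vals)
```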
From pfeldman at verizon.net Mon Dec 14 21:33:33 2009
From: pfeldman at verizon.net (Dr. Phillip M. Feldman)
Date: Mon, 14 Dec 2009 18:33:33 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] problem with documentation for `jn` and `jv` Bessel functions
Message-ID: <26787354.post@talk.nabble.com>

The documentation for `jv` and `jn` in
http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special
needs some fixing. I believe that `jn` evaluates the Bessel function of the
first kind for integer order n at complex z, and that `jv` evaluates the
Bessel function of the first kind for real order v at complex z. (I tried
to edit the page in question, but although I can log in, I'm unable to
actually edit anything.)

--
View this message in context: http://old.nabble.com/problem-with-documentation-for-%60jn%60-and-%60jv%60-Bessel-functions-tp26787354p26787354.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From robert.kern at gmail.com Mon Dec 14 21:36:41 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 14 Dec 2009 20:36:41 -0600
Subject: [SciPy-User] [SciPy-user] problem with documentation for `jn` and `jv` Bessel functions
In-Reply-To: <26787354.post@talk.nabble.com>
References: <26787354.post@talk.nabble.com>
Message-ID: <3d375d730912141836y4add5990m32ecbd496ef1a57@mail.gmail.com>

On Mon, Dec 14, 2009 at 20:33, Dr. Phillip M. Feldman wrote:
>
> The documentation for `jv` and `jn` in
> http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special
> needs some fixing. I believe that `jn` evaluates the Bessel function of the
> first kind for integer order n at complex z, and that `jv` evaluates the
> Bessel function of the first kind for real order v at complex z. (I tried
> to edit the page in question, but although I can log in, I'm unable to
> actually edit anything.)

jn() is currently just an alias to jv().
The jn() implementation was found to be buggy for certain combinations of
inputs, and jv()'s floating-point algorithm wasn't much slower, performance-wise,
than a dedicated integer-order algorithm.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From joebarfett at yahoo.ca Mon Dec 14 21:47:43 2009
From: joebarfett at yahoo.ca (theWonder)
Date: Mon, 14 Dec 2009 18:47:43 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] f2py fortran95 won't compile
In-Reply-To: <2d5132a50912120302o5a9d5edcmab4b660a81292482@mail.gmail.com>
References: <876227.1119.qm@web59402.mail.ac4.yahoo.com> <26753350.post@talk.nabble.com> <2d5132a50912120302o5a9d5edcmab4b660a81292482@mail.gmail.com>
Message-ID: <26788278.post@talk.nabble.com>

It's an enormous error message. I have posted the first big chunk of it.
If you really think the whole thing would be better, then that's ok.

Importantly, if I just go into the documents directory with hello.f and type
"f2py -c -m hello.f", I don't get an error message. I get this:

running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
building extension "hello.f95" sources
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
running scons

But no .so file, and nothing works from python.
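[Editor's note: a likely explanation for the silent failure above, offered as a hedged aside: in the command as typed, the argument after -m is taken as the module name, so no Fortran source file is actually passed in and nothing gets compiled. The usual invocation names the module and the source file separately:]

```shell
# Typical f2py usage (sketch): "hello" is the extension module name,
# hello.f is the Fortran source to wrap and compile.
f2py -c -m hello hello.f

# If the build succeeds, a hello.so should appear in the current
# directory and be importable:
python -c "import hello"
```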
OK here is that big error message: ***** running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "hello" sources f2py options: [] f2py:> /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/hellomodule.c creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 Reading fortran codes... Reading file 'hello.f' (format:fix,strict) Post-processing... Block: hello Block: foo Post-processing (stage 2)... Building modules... Building module "hello"... Constructing wrapper function "foo"... foo(a) Wrote C/API module "hello" to file "/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/hellomodule.c" adding '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5' to include_dirs. 
copying /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.c -> /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 copying /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.h -> /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran Using built-in specs. Target: powerpc-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=powerpc-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 Using built-in specs. Target: i686-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=i686-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/fcompiler/gnu.py:119: UserWarning: Env. variable MACOSX_DEPLOYMENT_TARGET set to 10.3 warnings.warn(s) Using built-in specs. 
Target: powerpc-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=powerpc-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 Using built-in specs. Target: i686-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=i686-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 customize Gnu95FCompiler Using built-in specs. Target: powerpc-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=powerpc-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 Using built-in specs. Target: i686-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=i686-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 Using built-in specs. Target: powerpc-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=powerpc-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 Using built-in specs. 
Target: i686-apple-darwin8 Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --build=i686-apple-darwin8 --host=i686-apple-darwin8 --target=i686-apple-darwin8 --enable-languages=fortran Thread model: posix gcc version 4.2.3 customize Gnu95FCompiler using build_ext building 'hello' extension compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp- creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 compile options: '-I/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' gcc: /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c In file included from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: 
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:18:20: error: limits.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:21:2: error: #error "Something's broken. UCHAR_MAX should be defined in limits.h."
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:25:2: error: #error "Python's source code assumes C's unsigned char is an 8-bit type."
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:32:19: error: stdio.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:34:5: error: #error "Python.h requires that stdio.h define NULL."
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:37:20: error: string.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:39:19: error: errno.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:41:20: error: stdlib.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:43:20: error: unistd.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:55:20: error: assert.h: No such file or directory
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:57, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:93: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'Py_ssize_t'
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:57, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:200:76: error: math.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:207:22: error: sys/time.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:208:18: error: time.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:226:24: error: sys/select.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:265:22: error: sys/stat.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:515:21: error: termios.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: warning: 'struct winsize' declared inside parameter list
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: warning: its scope is only this definition or declaration, which is probably not what you want
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: warning: 'struct termios' declared inside parameter list
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:517: warning: 'struct winsize' declared inside parameter list
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:517: warning: 'struct termios' declared inside parameter list
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:76, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pymem.h:50: warning: parameter names (without types) in function declaration
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pymem.h:51: error: expected declaration specifiers or '...' before 'size_t'
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:78, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:104: error: expected specifier-qualifier-list before 'Py_ssize_t'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:108: error: expected specifier-qualifier-list before 'Py_ssize_t'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:131: error: expected declaration specifiers or '...' before '*' token
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:131: error: 'Py_ssize_t' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:149: error: 'readbufferproc' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:150: error: 'writebufferproc' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:151: error: 'segcountproc' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:152: error: 'charbufferproc' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:215: error: expected specifier-qualifier-list before 'lenfunc'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:229: error: expected specifier-qualifier-list before 'lenfunc'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:244: error: expected declaration specifiers or '...' before 'FILE'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:258: warning: 'struct _typeobject' declared inside parameter list
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:259: warning: 'struct _typeobject' declared inside parameter list
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:262: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:262: error: field 'ob_size' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:264: error: field 'tp_basicsize' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:264: error: field 'tp_itemsize' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:309: error: field 'tp_weaklistoffset' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:324: error: field 'tp_dictoffset' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:389: error: expected declaration specifiers or '...' before 'FILE'
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:79, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:97: warning: parameter names (without types) in function declaration
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:98: error: expected declaration specifiers or '...' before 'size_t'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:228: error: 'PyGC_Collect' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:249: error: field 'gc_refs' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:288: warning: parameter names (without types) in function declaration
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:83, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:55:19: error: ctype.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:118:21: error: wchar.h: No such file or directory
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:83, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:384: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:385: error: field 'length' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:447: error: 'PyUnicodeUCS2_GetSize' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:521: error: expected ';', ',' or ')' before '*' token
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:539: error: expected ';', ',' or ')' before '*' token
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1102: error: 'PyUnicodeUCS2_Tailmatch' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1114: error: 'PyUnicodeUCS2_Find' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1123: error: 'PyUnicodeUCS2_Count' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:84, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:24: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:38: warning: parameter names (without types) in function declaration
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:41: error: 'PyInt_AsSsize_t' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:86, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:25: error: '_PyLong_AsSsize_t' declared as function returning a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:26: warning: parameter names (without types) in function declaration
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:69: error: expected '=', ',', ';', 'asm' or '__attribute__' before '_PyLong_NumBits'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:85: error: expected declaration specifiers or '...' before 'size_t'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:108: error: expected declaration specifiers or '...' before 'size_t'
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:87, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/floatobject.h:15: error: field 'ob_refcnt' declared as a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:89, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/complexobject.h:39: error: field 'ob_refcnt' declared as a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:92, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:10:20: error: stdarg.h: No such file or directory
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:92, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:36: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:36: error: field 'ob_size' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:63: error: expected declaration specifiers or '...' before 'va_list'
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:67: error: 'PyString_Size' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:94, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:25: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:25: error: field 'ob_size' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:40: error: 'PyTuple_Size' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:95, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:23: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:23: error: field 'ob_size' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:38: error: field 'allocated' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:47: error: 'PyList_Size' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:96, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:55: error: field 'me_hash' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:71: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:72: error: field 'ma_fill' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:73: error: field 'ma_used' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:79: error: field 'ma_mask' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:106: error: 'PyDict_Size' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:98, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:36: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:38: error: field 'fill' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:39: error: field 'used' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:45: error: field 'mask' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:79: error: 'PySet_Size' declared as function returning a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:99, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/methodobject.h:82: error: field 'ob_refcnt' declared as a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:101, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/funcobject.h:22: error: field 'ob_refcnt' declared as a function
In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:102, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2:
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:13: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:24: error: field 'ob_refcnt' declared as a function
/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:31: error: field 'ob_refcnt'
declared as a function In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:103, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:11: error: field ?ob_refcnt? declared as a function /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:12: error: expected specifier-qualifier-list before ?FILE? /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:38: error: expected ?)? before ?*? token /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:40: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?*? token /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:57: error: expected declaration specifiers or ?...? before ?FILE? /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:58: error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?Py_UniversalNewlineFread? In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:105, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, from /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: Thanks so much for your time. Cheers! joe -- View this message in context: http://old.nabble.com/f2py-fortran95-won%27t-compile-tp26737506p26788278.html Sent from the Scipy-User mailing list archive at Nabble.com. 
From robince at gmail.com Tue Dec 15 03:43:06 2009 From: robince at gmail.com (Robin) Date: Tue, 15 Dec 2009 08:43:06 +0000 Subject: [SciPy-User] [SciPy-user] f2py fortran95 won't compile In-Reply-To: <26788278.post@talk.nabble.com> References: <876227.1119.qm@web59402.mail.ac4.yahoo.com> <26753350.post@talk.nabble.com> <2d5132a50912120302o5a9d5edcmab4b660a81292482@mail.gmail.com> <26788278.post@talk.nabble.com> Message-ID: <2d5132a50912150043x1a39dcf3x279bbb09f3af0ef7@mail.gmail.com> Hi On Tue, Dec 15, 2009 at 2:47 AM, theWonder wrote: > > It's an enormous error message. I have posted the first big chunk of it. If > you really think the whole thing would be better then that's ok. > Importantly, if I just go into the documents directory with hello.f and type > "f2py -c -m hello.f" I don't get an error message. I get this Hi, No definite answers but a couple of things to look at. It looks like you are using Python 2.5. As I said in an earlier message, if you are using the build from python.org you will need to use the older gcc (4.0) rather than 4.2 (the default from 10.6 and what you are using here). So before running f2py enter the following 2 commands: export CC=/usr/bin/gcc-4.0 export CXX=/usr/bin/g++-4.0 Second, sorry to contradict the other guy, but the -m option should be used to give the name of the module, so you really want 'hello' in there as you had originally f2py -c -m hello hello.f If you do f2py with no arguments it will give usage instructions. Third, try to call your file .f95 - I think it might determine filetype by extension (but looks like you have problems before that). The first error is about missing header files - limits.h stdio.h etc. These should be installed by xcode. I'm not sure why it's not finding them but I would try reinstalling xcode with all extra options (build for 10.4 etc.) 
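The suggestions above can be collected into one shell session. This is a minimal sketch, not a guaranteed fix: it assumes the python.org Python 2.5 build, the stock Xcode locations for gcc-4.0, and the hello.f example file from earlier in the thread.

```shell
# Point the build at the older gcc that matches the python.org 2.5
# build, rather than the Snow Leopard default gcc-4.2.
export CC=/usr/bin/gcc-4.0
export CXX=/usr/bin/g++-4.0

# -c compiles, -m names the extension module; the source file follows.
# Guarded so the line is a no-op on machines without f2py or hello.f.
if command -v f2py >/dev/null && [ -f hello.f ]; then
    f2py -c -m hello hello.f
fi
```

numpy.distutils reads CC/CXX from the environment, so the exports must happen in the same shell session as the f2py call.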
On my computer there are several copies in /Developer/SDKs/MacOSX10.4u.sdk/usr/include/ /Developer/SDKs/MacOSX10.5.sdk/usr/include/limits.h /Developer/SDKs/MacOSX10.6.sdk/usr/include/limits.h /Developer/usr/lib/gcc/i686-apple-darwin10/4.0.1/include/limits.h /Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin10/4.2.1/include/limits.h etc. Another thing to try might be to set MACOSX_DEPLOYMENT_TARGET - I think it should be 10.4 to match the Python build: export MACOSX_DEPLOYMENT_TARGET=10.4 but other values might be worth trying. Sorry there's still nothing concrete, but I can tell you the problem is something wrong with your build chain that is stopping creation of Python extensions (not specifically f2py). Cheers Robin > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler > options > running build_src > building extension "hello.f95" sources > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > running scons > > But no .so file and nothing works from python. > > OK here is that big error message: > > > ***** > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler > options > running build_src > building extension "hello" sources > f2py options: [] > f2py:> > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/hellomodule.c > creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 > Reading fortran codes... >        Reading file 'hello.f' (format:fix,strict) > Post-processing... >        Block: hello >
?Block: foo > Post-processing (stage 2)... > Building modules... > ? ? ? ?Building module "hello"... > ? ? ? ? ? ? ? ?Constructing wrapper function "foo"... > ? ? ? ? ? ? ? ? ?foo(a) > ? ? ? ?Wrote C/API module "hello" to file > "/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/hellomodule.c" > ?adding > '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c' > to sources. > ?adding > '/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5' > to include_dirs. > copying > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.c > -> > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 > copying > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/f2py/src/fortranobject.h > -> > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize NAGFCompiler > Could not locate executable f95 > customize AbsoftFCompiler > Could not locate executable f90 > Could not locate executable f77 > customize IBMFCompiler > Could not locate executable xlf90 > Could not locate executable xlf > customize IntelFCompiler > Could not locate executable ifort > Could not locate executable ifc > customize GnuFCompiler > Could not locate executable g77 > customize Gnu95FCompiler > Found executable /usr/local/bin/gfortran > Using built-in specs. > Target: powerpc-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=powerpc-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > Using built-in specs. 
> Target: i686-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=i686-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/fcompiler/gnu.py:119: > UserWarning: Env. variable MACOSX_DEPLOYMENT_TARGET set to 10.3 > ?warnings.warn(s) > Using built-in specs. > Target: powerpc-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=powerpc-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > Using built-in specs. > Target: i686-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=i686-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > customize Gnu95FCompiler > Using built-in specs. > Target: powerpc-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=powerpc-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > Using built-in specs. 
> Target: i686-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=i686-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > Using built-in specs. > Target: powerpc-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=powerpc-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > Using built-in specs. > Target: i686-apple-darwin8 > Configured with: /Builds/unix/gcc/gcc-4.2/configure --prefix=/usr/local > --mandir=/share/man --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ > --build=i686-apple-darwin8 --host=i686-apple-darwin8 > --target=i686-apple-darwin8 --enable-languages=fortran > Thread model: posix > gcc version 4.2.3 > customize Gnu95FCompiler using build_ext > building 'hello' extension > compiling C sources > C compiler: gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -no-cpp-precomp > -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > > creating /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp- > creating > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T > creating > 
/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 > compile options: > '-I/var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5 > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' > gcc: > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c > In file included from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:18:20: > error: limits.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:21:2: > error: #error "Something's broken. ?UCHAR_MAX should be defined in > limits.h." > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:25:2: > error: #error "Python's source code assumes C's unsigned char is an 8-bit > type." > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:32:19: > error: stdio.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:34:5: > error: #error "Python.h requires that stdio.h define NULL." 
> /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:37:20: > error: string.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:39:19: > error: errno.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:41:20: > error: stdlib.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:43:20: > error: unistd.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:55:20: > error: assert.h: No such file or directory > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:57, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:93: > error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before ?Py_ssize_t? > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:57, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? 
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:200:76: > error: math.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:207:22: > error: sys/time.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:208:18: > error: time.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:226:24: > error: sys/select.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:265:22: > error: sys/stat.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:515:21: > error: termios.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: > warning: ?struct winsize? declared inside parameter list > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: > warning: its scope is only this definition or declaration, which is probably > not what you want > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:516: > warning: ?struct termios? declared inside parameter list > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:517: > warning: ?struct winsize? declared inside parameter list > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyport.h:517: > warning: ?struct termios? declared inside parameter list > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:76, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? 
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pymem.h:50: > warning: parameter names (without types) in function declaration > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pymem.h:51: > error: expected declaration specifiers or ?...? before ?size_t? > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:78, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:104: > error: expected specifier-qualifier-list before ?Py_ssize_t? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:108: > error: expected specifier-qualifier-list before ?Py_ssize_t? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:131: > error: expected declaration specifiers or ?...? before ?*? token > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:131: > error: ?Py_ssize_t? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:149: > error: ?readbufferproc? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:150: > error: ?writebufferproc? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:151: > error: ?segcountproc? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:152: > error: ?charbufferproc? 
declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:215: > error: expected specifier-qualifier-list before ?lenfunc? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:229: > error: expected specifier-qualifier-list before ?lenfunc? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:244: > error: expected declaration specifiers or ?...? before ?FILE? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:258: > warning: ?struct _typeobject? declared inside parameter list > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:259: > warning: ?struct _typeobject? declared inside parameter list > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:262: > error: field ?ob_refcnt? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:262: > error: field ?ob_size? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:264: > error: field ?tp_basicsize? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:264: > error: field ?tp_itemsize? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:309: > error: field ?tp_weaklistoffset? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:324: > error: field ?tp_dictoffset? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/object.h:389: > error: expected declaration specifiers or ?...? before ?FILE? > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:79, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? 
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:97: > warning: parameter names (without types) in function declaration > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:98: > error: expected declaration specifiers or ?...? before ?size_t? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:228: > error: ?PyGC_Collect? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:249: > error: field ?gc_refs? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/objimpl.h:288: > warning: parameter names (without types) in function declaration > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:83, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:55:19: > error: ctype.h: No such file or directory > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:118:21: > error: wchar.h: No such file or directory > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:83, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:384: > error: field ?ob_refcnt? 
declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:385: > error: field ?length? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:447: > error: ?PyUnicodeUCS2_GetSize? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:521: > error: expected ?;?, ?,? or ?)? before ?*? token > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:539: > error: expected ?;?, ?,? or ?)? before ?*? token > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1102: > error: ?PyUnicodeUCS2_Tailmatch? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1114: > error: ?PyUnicodeUCS2_Find? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:1123: > error: ?PyUnicodeUCS2_Count? declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:84, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:24: > error: field ?ob_refcnt? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:38: > warning: parameter names (without types) in function declaration > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/intobject.h:41: > error: ?PyInt_AsSsize_t? 
declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:86, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:25: > error: ?_PyLong_AsSsize_t? declared as function returning a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:26: > warning: parameter names (without types) in function declaration > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:69: > error: expected ?=?, ?,?, ?;?, ?asm? or ?__attribute__? before > ?_PyLong_NumBits? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:85: > error: expected declaration specifiers or ?...? before ?size_t? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/longobject.h:108: > error: expected declaration specifiers or ?...? before ?size_t? > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:87, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/floatobject.h:15: > error: field ?ob_refcnt? declared as a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:89, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? 
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/complexobject.h:39: > error: field ?ob_refcnt? declared as a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:92, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:10:20: > error: stdarg.h: No such file or directory > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:92, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:36: > error: field ?ob_refcnt? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:36: > error: field ?ob_size? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:63: > error: expected declaration specifiers or ?...? before ?va_list? > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/stringobject.h:67: > error: ?PyString_Size? declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:94, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? 
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:25: > error: field ?ob_refcnt? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:25: > error: field ?ob_size? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/tupleobject.h:40: > error: ?PyTuple_Size? declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:95, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:23: > error: field ?ob_refcnt? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:23: > error: field ?ob_size? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:38: > error: field ?allocated? declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/listobject.h:47: > error: ?PyList_Size? declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:96, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > ? ? ? ? ? ? ? ? from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:55: > error: field ?me_hash? 
declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:71: > error: field 'ob_refcnt' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:72: > error: field 'ma_fill' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:73: > error: field 'ma_used' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:79: > error: field 'ma_mask' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/dictobject.h:106: > error: 'PyDict_Size' declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:98, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:36: > error: field 'ob_refcnt' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:38: > error: field 'fill' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:39: > error: field 'used' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:45: > error: field 'mask' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/setobject.h:79: > error: 'PySet_Size' declared as function returning a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:99, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, >
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/methodobject.h:82: > error: field 'ob_refcnt' declared as a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:101, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/funcobject.h:22: > error: field 'ob_refcnt' declared as a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:102, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:13: > error: field 'ob_refcnt' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:24: > error: field 'ob_refcnt' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/classobject.h:31: > error: field 'ob_refcnt' declared as a function > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:103, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, >
from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:11: > error: field 'ob_refcnt' declared as a function > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:12: > error: expected specifier-qualifier-list before 'FILE' > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:38: > error: expected ')' before '*' token > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:40: > error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:57: > error: expected declaration specifiers or '...' before 'FILE' > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/fileobject.h:58: > error: expected '=', ',', ';', 'asm' or '__attribute__' before > 'Py_UniversalNewlineFread' > In file included from > /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:105, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.h:7, > from > /var/folders/1I/1IOFcx8xHPWOBxtsQ1BZgE+++TI/-Tmp-/tmpVxrq1T/src.macosx-10.3-fat-2.5/fortranobject.c:2: > > > Thanks so much for your time. Cheers! > joe > -- > View this message in context: http://old.nabble.com/f2py-fortran95-won%27t-compile-tp26737506p26788278.html > Sent from the Scipy-User mailing list archive at Nabble.com.
> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wnbell at gmail.com Tue Dec 15 04:31:02 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 15 Dec 2009 04:31:02 -0500 Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow In-Reply-To: References: Message-ID: 2009/12/14 Dmitrey : > > I have tried all 3, lil works faster than any others, but casting to numpy > ndarray and invoking it on the dense array obtained works several times > faster, that is extremely important for me. Thus on my problem with sparse > matrix of shape 151* 1178 > each time casting it to dense before applying fancy indexing makes numerical > optimization ipopt solver works faster from 130 to 60 sec. Here's the code > that demonstrates the difference: > > > So, I have filed a ticket for it. Hi Dmitrey, I think what you're asking for is simply not possible. The time spent looking up elements in a dense matrix (a completely trivial operation) will always be less than the lookup in sparse matrix data structures where indexing elements is inherently more expensive. If you want something that does the above operation really fast then you can do >>> M = coo_matrix(M) >>> M.data # same as M[I,J] or >>> M = csr_matrix(M) >>> M.data # same as M[I,J] Why do you want to index sparse matrices in the first place? For many operations there are better formulations that avoid the need to do indexing of any kind. 
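[Editor's illustration of Nathan's point: a COO-format matrix keeps only its explicitly stored entries as parallel (row, col, data) arrays, so reading every stored value is a plain traversal of .data, while general fancy indexing has to search the structure for each (i, j) pair. The sketch below is a hedged, pure-Python toy — the MiniCOO class is hypothetical and is not the scipy.sparse API.]

```python
class MiniCOO:
    """Toy COO-format sparse matrix: three parallel lists of
    row indices, column indices, and stored (nonzero) values."""

    def __init__(self, triples):
        self.row = [i for i, _, _ in triples]
        self.col = [j for _, j, _ in triples]
        self.data = [v for _, _, v in triples]

    def getitem(self, i, j):
        # General lookup: scan for a stored (i, j) pair; implicit zero if absent.
        for r, c, v in zip(self.row, self.col, self.data):
            if r == i and c == j:
                return v
        return 0

m = MiniCOO([(0, 1, 5.0), (2, 3, 7.0)])

# Fancy indexing at exactly the stored coordinates (I = m.row, J = m.col)...
fancy = [m.getitem(i, j) for i, j in zip(m.row, m.col)]

# ...returns the same values as simply reading .data, with no search at all.
assert fancy == m.data == [5.0, 7.0]

# But .data only covers *stored* entries: implicit zeros are invisible to it,
# which is the caveat Dmitrey raises later in the thread.
assert m.getitem(1, 1) == 0 and 0 not in m.data
```

This also shows why M.data is only "the same as M[I,J]" when (I, J) enumerates precisely the stored entries.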
-- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ From eg at fft.be Tue Dec 15 05:07:20 2009 From: eg at fft.be (Eloi Gaudry) Date: Tue, 15 Dec 2009 11:07:20 +0100 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <4B262C14.8010102@fft.be> References: <4B262C14.8010102@fft.be> Message-ID: <4B275FD8.9030207@fft.be> Hi, I think that the numpy/distutils/intel.py module is not up-to-date, as it doesn't support Intel Visual Fortran for 64-bit apps (windows). And I think that the distutils module from python-2.6.4 cannot cope with Intel Visual Fortran either: For instance, the following build command (automatically generated by distutils): C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -72 -w90 -w95 -KPIC -cm -O3 -unroll -Ic:\fft\bin\python-2.6.4\lib\site-packages\numpy\core\include -c -c scipy\fftpack\src\dfftpack\dcosqb.f -o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o is not right (the "-o" switch at least) and should look like: C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -72 -w90 -w95 -KPIC -cm -O3 -unroll -Ic:\fft\bin\python-2.6.4\lib\site-packages\numpy\core\include -c scipy\fftpack\src\dfftpack\dcosqb.f /Fobuild\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o Has anyone used the Intel Visual Fortran (11.1) compiler to build scipy (or any fortran files using distutils)? Thanks for your help, Eloi Eloi Gaudry wrote: > Hi, > > I'm having some trouble building scipy using Visual Studio 2008 and > Intel Fortran 11.1. It seems that the objects built by the Intel Fortran > compiler are not output in the 'right' directory. As you can see below, > the built objects are in the current directory (with a *.obj suffix instead > of *.o). > I think I'm missing something somewhere in the distutils module, but I > cannot find what. Could the IntelFortran detection module be broken for > version 11.1 ?
> > Note that numpy-1.30 was built using the same compilers (and without > any issue). Aren't we using the same module to build scipy ? > > Any help would be appreciated, > Thanks, > Eloi > > > > C:\fft\external\scipy-0.7.1>call "C:\Program Files\Microsoft Visual > Studio 9.0\VC\vcvarsall.bat" x86 > Setting environment for using Microsoft Visual Studio 2008 x86 tools. > > C:\fft\external\scipy-0.7.1>set FC_VENDOR=Intel > C:\fft\external\scipy-0.7.1>SET IFORT_COMPILER11=C:\Program > Files\Intel\Compiler\11.1\051 > C:\fft\external\scipy-0.7.1>SET INTEL_LICENSE_FILE=C:\Program > Files\Common Files\Intel\Licenses;%INTEL_LICENSE_FILE% > C:\fft\external\scipy-0.7.1>SET PATH=%IFORT_COMPILER11%\Bin\ia32;%PATH% > C:\fft\external\scipy-0.7.1>SET LIB=%IFORT_COMPILER11%\Lib\ia32;%LIB% > C:\fft\external\scipy-0.7.1>SET > INCLUDE=%IFORT_COMPILER11%\Include;%IFORT_COMPILER11%\Include\ia32;%INCLUDE > > C:\fft\external\scipy-0.7.1>python setup.py build --compiler=msvc > --fcompiler=intel --verbose > Warning: No configuration returned, assuming unavailable.
> blas_opt_info: > blas_mkl_info: > FOUND: > libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] > library_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] > > FOUND: > libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] > library_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] > > lapack_opt_info: > lapack_mkl_info: > mkl_info: > FOUND: > libraries = ['mkl_intel_c', 'mkl_intel_thread_dll', 'mkl_core_dll'] > library_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] > > FOUND: > libraries = ['mkl_scalapack_core_dll', 'mkl_intel_c', > 'mkl_intel_thread_dll', 'mkl_core_dll'] > library_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] > > FOUND: > libraries = ['mkl_scalapack_core_dll', 'mkl_intel_c', > 'mkl_intel_thread_dll', 'mkl_core_dll'] > library_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\ia32\\lib'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\mkl\\include'] > > umfpack_info: > libraries umfpack not found in c:\fft\bin\python-2.6.4\lib > libraries umfpack not found in C:\ > libraries umfpack not found in c:\fft\bin\python-2.6.4\libs > c:\fft\bin\python-2.6.4\lib\site-packages\numpy\distutils\system_info.py:452: > UserWarning: > UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. 
Directories to search for the libraries can be specified > in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. > warnings.warn(self.notfounderror.__doc__) > NOT AVAILABLE > > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > building py_modules sources > building library "dfftpack" sources > building library "linpack_lite" sources > building library "mach" sources > building library "quadpack" sources > building library "odepack" sources > building library "fitpack" sources > building library "odrpack" sources > building library "minpack" sources > building library "rootfind" sources > building library "superlu_src" sources > building library "arpack" sources > building library "sc_c_misc" sources > building library "sc_cephes" sources > building library "sc_mach" sources > building library "sc_toms" sources > building library "sc_amos" sources > building library "sc_cdf" sources > building library "sc_specfun" sources > building library "statlib" sources > building extension "scipy.cluster._vq" sources > building extension "scipy.cluster._hierarchy_wrap" sources > building extension "scipy.fftpack._fftpack" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.fftpack.convolve" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.integrate._quadpack" sources > building extension "scipy.integrate._odepack" sources > building extension "scipy.integrate.vode" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. 
> building extension "scipy.interpolate._fitpack" sources > building extension "scipy.interpolate.dfitpack" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > adding > 'build\src.win32-2.6\scipy\interpolate\src\dfitpack-f2pywrappers.f' to > sources. > building extension "scipy.interpolate._interpolate" sources > building extension "scipy.io.numpyio" sources > building extension "scipy.lib.blas.fblas" sources > f2py options: ['skip:', ':'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > adding > 'build\src.win32-2.6\build\src.win32-2.6\scipy\lib\blas\fblas-f2pywrappers.f' > to sources. > building extension "scipy.lib.blas.cblas" sources > adding 'build\src.win32-2.6\scipy\lib\blas\cblas.pyf' to sources. > f2py options: ['skip:', ':'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.lib.lapack.flapack" sources > f2py options: ['skip:', ':'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.lib.lapack.clapack" sources > adding 'build\src.win32-2.6\scipy\lib\lapack\clapack.pyf' to sources. > f2py options: ['skip:', ':'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.lib.lapack.calc_lwork" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.lib.lapack.atlas_version" sources > building extension "scipy.linalg.fblas" sources > adding 'build\src.win32-2.6\scipy\linalg\fblas.pyf' to sources. > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. 
> adding > 'build\src.win32-2.6\build\src.win32-2.6\scipy\linalg\fblas-f2pywrappers.f' > to sources. > building extension "scipy.linalg.cblas" sources > adding 'build\src.win32-2.6\scipy\linalg\cblas.pyf' to sources. > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.linalg.flapack" sources > adding 'build\src.win32-2.6\scipy\linalg\flapack.pyf' to sources. > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > adding > 'build\src.win32-2.6\build\src.win32-2.6\scipy\linalg\flapack-f2pywrappers.f' > to sources. > building extension "scipy.linalg.clapack" sources > adding 'build\src.win32-2.6\scipy\linalg\clapack.pyf' to sources. > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.linalg._flinalg" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.linalg.calc_lwork" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.linalg.atlas_version" sources > building extension "scipy.odr.__odrpack" sources > building extension "scipy.optimize._minpack" sources > building extension "scipy.optimize._zeros" sources > building extension "scipy.optimize._lbfgsb" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.optimize.moduleTNC" sources > building extension "scipy.optimize._cobyla" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. 
> building extension "scipy.optimize.minpack2" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.optimize._slsqp" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.optimize._nnls" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.signal.sigtools" sources > building extension "scipy.signal.spline" sources > building extension "scipy.sparse.linalg.isolve._iterative" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources > building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources > building extension "scipy.sparse.linalg.dsolve._csuperlu" sources > building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources > building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources > building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. 
> adding > 'build\src.win32-2.6\build\src.win32-2.6\scipy\sparse\linalg\eigen\arpack\_arpack-f2pywrappers.f' > to sources > building extension "scipy.sparse.sparsetools._csr" sources > building extension "scipy.sparse.sparsetools._csc" sources > building extension "scipy.sparse.sparsetools._coo" sources > building extension "scipy.sparse.sparsetools._bsr" sources > building extension "scipy.sparse.sparsetools._dia" sources > building extension "scipy.spatial.ckdtree" sources > building extension "scipy.spatial._distance_wrap" sources > building extension "scipy.special._cephes" sources > building extension "scipy.special.specfun" sources > f2py options: ['--no-wrap-functions'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.stats.statlib" sources > f2py options: ['--no-wrap-functions'] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.stats.vonmises_cython" sources > building extension "scipy.stats.futil" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > building extension "scipy.stats.mvn" sources > f2py options: [] > adding 'build\src.win32-2.6\fortranobject.c' to sources. > adding 'build\src.win32-2.6' to include_dirs. > adding 'build\src.win32-2.6\scipy\stats\mvn-f2pywrappers.f' to sources. 
> building extension "scipy.ndimage._nd_image" sources > building data_files sources > running build_py > copying scipy\version.py -> build\lib.win32-2.6\scipy > copying build\src.win32-2.6\scipy\__config__.py -> build\lib.win32-2.6\scipy > running build_clib > No module named msvccompiler in numpy.distutils; trying from distutils > customize MSVCCompiler > customize MSVCCompiler using build_clib > customize IntelFCompiler > debutg compiler_f90 > debugc > Found executable C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe > debutg {'linker_exe': None, 'archiver': ['ar', '-cr'], 'compiler_f77': > [None, '-72', '-w90', '-w95'], 'version_cmd': > F77>', '-FI', '-V', '-c', > 'c:\\docume~1\\qa\\locals~1\\temp\\tmp4uc_dl\\frbyvf.f', '-o', > 'c:\\docume~1\\qa\\locals~1\ > mp\\tmp4uc_dl\\frbyvf.o'], 'compiler_fix': [None, '-FI'], 'linker_so': > ['', '-shared'], 'ranlib': ['ranlib'], 'c > iler_f90': ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe']} > debutg compiler_f77 > debugc > debutg compiler_fix > debugc > debutg linker_so > debugc > debutg linker_exe > debugc > debutg version_cmd > debugc > debutg archiver > debugc > debutg ranlib > debugc > debutg compiler_f90 > debugc > debutg {'linker_exe': None, 'archiver': ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-cr' > 'compiler_f77': ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-72', '-w90', > '-w95'], 'vers > _cmd': ['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-FI', '-V', > '-c', 'c:\\docume~1\\qa\\ > als~1\\temp\\tmp4uc_dl\\frbyvf.f', '-o', > 'c:\\docume~1\\qa\\locals~1\\temp\\tmp4uc_dl\\frbyvf.o'], 'compiler_fix': [' > \Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe', > '-FI'], 'linker_so': ['C:\\Program Files\\Intel\\C > iler\\11.1\\051\\Bin\\ia32\\ifort.exe', '-shared'], 'ranlib': > ['C:\\Program Files\\Intel\\Compiler\\11.1\\051\\Bin\\i > \\ifort.exe'], 'compiler_f90': 
['C:\\Program > Files\\Intel\\Compiler\\11.1\\051\\Bin\\ia32\\ifort.exe']} > debutg compiler_f77 > debugc > debutg compiler_fix > debugc > debutg linker_so > debugc > debutg linker_exe > debugc > debutg version_cmd > debugc > debutg archiver > debugc > debutg ranlib > debugc > Found executable C:\Program Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe > customize IntelFCompiler using build_clib > building 'dfftpack' library > compiling Fortran sources > Fortran f77 compiler: C:\Program > Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -72 -w90 -w95 -KPIC -cm > -O3 -unroll > Fortran f90 compiler: C:\Program > Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -FR -KPIC -cm -O3 -unroll > Fortran fix compiler: C:\Program > Files\Intel\Compiler\11.1\051\Bin\ia32\ifort.exe -FI -KPIC -cm -O3 -unroll > creating build\temp.win32-2.6 > creating build\temp.win32-2.6\scipy > creating build\temp.win32-2.6\scipy\fftpack > creating build\temp.win32-2.6\scipy\fftpack\src > creating build\temp.win32-2.6\scipy\fftpack\src\dfftpack > compile options: > '-Ic:\fft\bin\python-2.6.4\lib\site-packages\numpy\core\include -c' > ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqb.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqf.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosqi.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dcost.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dcosti.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftb.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftb1.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftf.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dfftf1.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dffti.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dffti1.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqb.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqf.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dsinqi.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dsint.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\dsint1.f > ifort.exe:f77: 
scipy\fftpack\src\dfftpack\dsinti.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftb.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftb1.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftf.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zfftf1.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zffti.f > ifort.exe:f77: scipy\fftpack\src\dfftpack\zffti1.f > C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\lib.exe > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o bui > ld\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqf.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqi.o build\tem > p.win32-2.6\scipy\fftpack\src\dfftpack\dcost.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosti.o build\temp.win32 > -2.6\scipy\fftpack\src\dfftpack\dfftb.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb1.o > build\temp.win32-2.6\sc > ipy\fftpack\src\dfftpack\dfftf.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf1.o > build\temp.win32-2.6\scipy\fft > pack\src\dfftpack\dffti.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dffti1.o > build\temp.win32-2.6\scipy\fftpack\sr > c\dfftpack\dsinqb.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinqf.o > build\temp.win32-2.6\scipy\fftpack\src\dfft > pack\dsinqi.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsint.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\ds > int1.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinti.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb1.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf.o build\t > emp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf1.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti.o build\temp.win > 32-2.6\scipy\fftpack\src\dfftpack\zffti1.o > /OUT:build\temp.win32-2.6\dfftpack.lib > Found executable C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\lib.exe > Microsoft (R) Library Manager Version 9.00.30729.01 > Copyright (C) Microsoft Corporation. 
All rights reserved. > > LINK : fatal error LNK1181: cannot open input file > 'build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqb.o' > error: Command "C:\Program Files\Microsoft Visual Studio > 9.0\VC\BIN\lib.exe build\temp.win32-2.6\scipy\fftpack\src\dfftp > ack\dcosqb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosqf.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dc > osqi.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcost.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dcosti.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftb1.o build\t > emp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dfftf1.o build\temp.win > 32-2.6\scipy\fftpack\src\dfftpack\dffti.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dffti1.o > build\temp.win32-2.6\ > scipy\fftpack\src\dfftpack\dsinqb.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinqf.o > build\temp.win32-2.6\scipy\ > fftpack\src\dfftpack\dsinqi.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsint.o > build\temp.win32-2.6\scipy\fftpack > \src\dfftpack\dsint1.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\dsinti.o > build\temp.win32-2.6\scipy\fftpack\src\d > fftpack\zfftb.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftb1.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack > \zfftf.o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zfftf1.o > build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti. > o build\temp.win32-2.6\scipy\fftpack\src\dfftpack\zffti1.o > /OUT:build\temp.win32-2.6\dfftpack.lib" failed with exit stat > us 1181 > > C:\fft\external\scipy-0.7.1>dir > Volume in drive C has no label. > Volume Serial Number is 54FF-E262 > > Directory of C:\fft\external\scipy-0.7.1 > > 12/14/2009 01:08 PM . > 12/14/2009 01:08 PM .. 
> 12/09/2009 05:58 PM build > 11/27/2009 04:52 PM 585 build_scipy.x64.bat > 12/09/2009 05:14 PM 790 build_scipy.x86.bat > 12/09/2009 05:14 PM 848 build_scipy.x86.bat~ > 12/14/2009 01:07 PM 2,811 dcosqb.obj > 12/14/2009 01:07 PM 2,649 dcosqf.obj > 12/14/2009 01:07 PM 1,277 dcosqi.obj > 12/14/2009 01:07 PM 1,849 dcost.obj > 12/14/2009 01:07 PM 1,828 dcosti.obj > 12/14/2009 01:07 PM 525 dfftb.obj > 12/14/2009 01:08 PM 22,309 dfftb1.obj > 12/14/2009 01:08 PM 525 dfftf.obj > 12/14/2009 01:08 PM 23,737 dfftf1.obj > 12/14/2009 01:08 PM 525 dffti.obj > 12/14/2009 01:08 PM 1,930 dffti1.obj > 11/24/2009 04:01 PM doc > 12/14/2009 01:08 PM 1,324 dsinqb.obj > 12/14/2009 01:08 PM 1,161 dsinqf.obj > 12/14/2009 01:08 PM 509 dsinqi.obj > 12/14/2009 01:08 PM 541 dsint.obj > 12/14/2009 01:08 PM 2,746 dsint1.obj > 12/14/2009 01:08 PM 1,145 dsinti.obj > 12/14/2009 01:05 PM 2,174 > FFT_Install.aixgcc.fromnumpy13.python264.sh > 11/24/2009 04:01 PM 7,414 FFT_install.dynblas.sh > 11/24/2009 04:01 PM 759 FFT_Install.fromnumpy13.python264.sh > 12/14/2009 01:07 PM 609 frbyvf.obj > 11/24/2009 04:01 PM 13,619 INSTALL.txt > 11/24/2009 04:01 PM 291 LATEST.txt > 11/24/2009 04:01 PM 1,570 LICENSE.txt > 11/24/2009 04:01 PM 695 MANIFEST.in > 11/24/2009 04:01 PM 1,715 PKG-INFO > 11/24/2009 04:01 PM 4,097 README.txt > 12/14/2009 01:07 PM scipy > 11/24/2009 04:01 PM 4,991 setup.py > 11/24/2009 04:01 PM 149 setupegg.py > 11/24/2009 04:01 PM 3,441 setupscons.py > 12/09/2009 04:36 PM 128 site.aixxlf.cfg > 11/24/2009 04:01 PM 4,041 THANKS.txt > 11/24/2009 04:01 PM 1,233 TOCHANGE.txt > 12/14/2009 01:08 PM 541 zfftb.obj > 12/14/2009 01:08 PM 22,229 zfftb1.obj > 12/14/2009 01:08 PM 541 zfftf.obj > 12/14/2009 01:08 PM 22,341 zfftf1.obj > 12/14/2009 01:08 PM 525 zffti.obj > 12/14/2009 01:08 PM 2,046 zffti1.obj > 42 File(s) 164,763 bytes > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Eloi Gaudry Free 
Field Technologies Axis Park Louvain-la-Neuve Rue Emile Francqui, 1 B-1435 Mont-Saint Guibert BELGIUM Company Phone: +32 10 487 959 Company Fax: +32 10 454 626 From cournape at gmail.com Tue Dec 15 05:37:45 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 15 Dec 2009 16:07:45 +0530 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <4B275FD8.9030207@fft.be> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> Message-ID: <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> Hi Eloi, On Tue, Dec 15, 2009 at 3:37 PM, Eloi Gaudry wrote: > Hi, > > I think that the numpy/distutils/intel.py module is not up-to-date as it > doesn't support Intel Visual Fortran for 64-bits apps (windows). If you have IFort and the MKL, I advise you to use numscons to build and install numpy and scipy on windows 64. This is much easier, and it should work today (if it doesn't, I will fix the problems, whereas I have no interest in fixing distutils at this point). You only need to first install numscons (pure python, so just do python setup.py install): http://github.com/cournape/numscons.git Then, you build numpy and scipy as follows (inside their respective source tree): python setupscons.py scons -b --fcompiler=ifort --compiler=msvc install - setupscons.py instead of setup.py tells distutils to use numscons to build extensions - -b tells numscons to bypass distutils compiler detection, so that numscons does its own detection internally - in bypass mode, you need to say which C and Fortran compiler you want to use. To tell numscons where to look for the MKL, you need to create a numscons.cfg file inside the source trees, something like: [mkl] include_dirs = C:\Program Files (x86)\Intel\compilers\....\MKL\include library_dirs = C:\Program Files (x86)\Intel......\MKL\ia32\lib Both numpy and scipy test suites pass almost every test on windows 64 bits.
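[Editor's illustration: Eloi's diagnosis earlier in the thread — distutils emitting a gcc-style "-o <path>" pair where ifort.exe on Windows expects "/Fo<path>" — is exactly the kind of command-line rewrite a build front end has to perform. The sketch below is a hedged, hypothetical helper; gcc_to_ifort_win is not actual numscons or numpy.distutils code.]

```python
def gcc_to_ifort_win(argv):
    """Rewrite each gcc-style '-o <path>' pair into the single
    '/Fo<path>' argument that ifort.exe expects on Windows.
    All other arguments pass through unchanged."""
    out = []
    args = iter(argv)
    for arg in args:
        if arg == "-o":
            # Consume the path that follows '-o' and fuse it onto '/Fo'.
            out.append("/Fo" + next(args))
        else:
            out.append(arg)
    return out

cmd = ["ifort.exe", "-72", "-c", "dcosqb.f", "-o", "build/dcosqb.o"]
assert gcc_to_ifort_win(cmd) == ["ifort.exe", "-72", "-c", "dcosqb.f", "/Fobuild/dcosqb.o"]
```

This is why the objects ended up as *.obj in the current directory in Eloi's log: ifort silently ignored the unrecognized output flag and fell back to its default output location.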
cheers,

David

From tmp50 at ukr.net Tue Dec 15 06:03:08 2009
From: tmp50 at ukr.net (Dmitrey)
Date: Tue, 15 Dec 2009 13:03:08 +0200
Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow
In-Reply-To: Message-ID:

From: Nathan Bell

2009/12/14 Dmitrey :
> I have tried all 3, lil works faster than any others, but casting to numpy
> ndarray and invoking it on the dense array obtained works several times
> faster, which is extremely important for me. Thus on my problem with a
> sparse matrix of shape 151*1178,
> each time casting it to dense before applying fancy indexing makes the
> ipopt numerical optimization solver run faster, from 130 to 60 sec.
> Here's the code that demonstrates the difference:
>
> So, I have filed a ticket for it.

Hi Dmitrey,

I think what you're asking for is simply not possible. The time spent looking up elements in a dense matrix (a completely trivial operation) will always be less than the lookup in sparse matrix data structures, where indexing elements is inherently more expensive.

But what about the dok format? I guess it shouldn't be too expensive - it is O(numberOfRequiredElements*log2(nonZerosNumber)), and there will be no peak memory jump as for lil.

If you want something that does the above operation really fast then you can do
>>> M = coo_matrix(M)
>>> M.data # same as M[I,J]
or

But I want to have M[I, J], not just M. Some (or lots of) required elements can be zeros and thus absent in M.data.

Why do you want to index sparse matrices in the first place? For many operations there are better formulations that avoid the need to do indexing of any kind.

Because some optimization problems can have a constraints gradient of size nVars*nConstraints, which is very huge. nVars are up to ~100'000, and nConstraints are up to 200'000'000. I will just have a memory overflow if I try casting it to dense each time.

BTW, why is the return type of numpy.where int64, but the type of scipy.sparse.find numpy.int32?
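The dtype mismatch Dmitrey asks about can be checked directly. A small sketch (numpy only; the int64 result is what you see on a 64-bit build, since np.where returns platform-sized indices):

```python
import numpy as np

# np.where returns platform-sized (intp) indices -- int64 on a 64-bit
# build -- while scipy.sparse stores its row/column indices as int32.
I, J = np.where(np.eye(3))
print(I.dtype)

# When handing indices to wrapped C/C++ code that expects 32-bit
# integers (as with the ipopt issue described here), cast explicitly:
I32, J32 = I.astype(np.int32), J.astype(np.int32)
print(I32.dtype)  # int32
```

The explicit astype copy is cheap relative to the solver time and sidesteps the silent type mismatch at the C++ boundary.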
I had spent lots of time searching for why ipopt hadn't worked with that data, and it turned out it was due to this type difference (maybe some issues related to connecting Python code to C++). Also, does it mean that I cannot create sparse matrices with one dimension greater than max(int32) instead of max(int64)?

Thank you in advance, D.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dagss at student.matnat.uio.no Tue Dec 15 06:15:06 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Tue, 15 Dec 2009 12:15:06 +0100
Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow
In-Reply-To: References: Message-ID: <4B276FBA.5060208@student.matnat.uio.no>

Dmitrey wrote:
> From: Nathan Bell
>
> 2009/12/14 Dmitrey :
> > I have tried all 3, lil works faster than any others, but casting to numpy
> > ndarray and invoking it on the dense array obtained works several times
> > faster, that is extremely important for me. Thus on my problem with sparse
> > matrix of shape 151*1178
> > each time casting it to dense before applying fancy indexing makes numerical
> > optimization ipopt solver work faster, from 130 to 60 sec. Here's the code
> > that demonstrates the difference:
> >
> > So, I have filed a ticket for it.
>
> Hi Dmitrey,
>
> I think what you're asking for is simply not possible. The time spent
> looking up elements in a dense matrix (a completely trivial operation)
> will always be less than the lookup in sparse matrix data structures
> where indexing elements is inherently more expensive.
>
> but what about dok format? I guess it shouldn't be too expensive - it
> is O(numberOfRequiredElements*log2(nonZerosNumber)), and there will
> be no peak memory jump as for lil.
>
> If you want something that does the above operation really fast
> then you can do
> >>> M = coo_matrix(M)
> >>> M.data # same as M[I,J]
> or
>
> But I want to have M[I, J], not just M.
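The point about explicit zeros can be seen in a small sketch (matrix values hypothetical): fancy indexing must return a value for every requested position, including unstored zeros, which coo_matrix(M).data alone cannot provide.

```python
import numpy as np
from scipy.sparse import lil_matrix  # scipy assumed available, as in the thread

M = lil_matrix((4, 6))
M[0, 1] = 2.0
M[3, 5] = 7.0

I = np.array([0, 3, 2])
J = np.array([1, 5, 4])

# Indexing the dense cast keeps the zero at (2, 4), which has no stored
# entry in the sparse structure and so never appears in coo_matrix(M).data.
vals = M.toarray()[I, J]
print(vals)  # [2. 7. 0.]
```

This is the workaround Dmitrey describes: for a matrix small enough to densify, `toarray()` plus dense fancy indexing beats per-element sparse lookup.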
Some (or lots of) required > elements can be zeros and thus absent in M.data. > > Why do you want to index sparse matrices in the first place? For many > operations there are better formulations that avoid the need to do > indexing of any kind. > > > Because some optimization problems can have constraints gradient of > size nVars*nConstraints very huge. nVars are up to ~100'000, and > nConstraints are up to 200'000'000. I will just have memory overflow > if I will try casting it to dense each time. > BTW, why return type of numpy.where is int64, but type of > scipy.sparse.find is numpy.int32? I had spent lots of time to search > why ipopt hadn't work with that data, and it turned out it is due to > this type difference (maybe some issues related to connecting Python > code to C++). Also, does it mean that I cannot create sparse matrices > with one dimension greater than max(int32) instead of max(int64)? I can confirm this, scipy.sparse uses int32 indices only. Search for "intc" in the sources for the relevant lines. Dag Sverre From eg at fft.be Tue Dec 15 06:53:37 2009 From: eg at fft.be (Eloi Gaudry) Date: Tue, 15 Dec 2009 12:53:37 +0100 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> Message-ID: <4B2778C1.2070100@fft.be> David Cournapeau wrote: > HI Eloi, > > On Tue, Dec 15, 2009 at 3:37 PM, Eloi Gaudry wrote: > >> Hi, >> >> I think that the numpy/distutils/intel.py module is not up-to-date as it >> doesn't support Intel Visual Fortran for 64-bits apps (windows). >> > > If you have IFort and the MKL, I advise you to use numscons to build > and install numpy and scipy on windows 64. This is much easier, and it > should work today (if it doesn't, I will fix the problems, whereas I > have no interest in fixing distutils at this point). 
> You only need to first install numscons (pure python, so just do
> python setup.py install):
>
> http://github.com/cournape/numscons.git
>
> Then, you build numpy and scipy as follows (inside their respective
> source tree):
>
> python setupscons.py scons -b --fcompiler=ifort --compiler=msvc install
>
> - setupscons.py instead of setup.py tells distutils to use numscons
> to build extensions
> - -b tells numscons to bypass distutils compiler detection, so that
> numscons does its own detection internally
> - in bypass mode, you need to say which C and Fortran compiler you want to use.
>
> To tell numscons where to look for the MKL, you need to create a
> numscons.cfg file inside the source trees, something like:
>
> [mkl]
> include_dirs = C:\Program Files (x86)\Intel\compilers\...\MKL\include
> library_dirs = C:\Program Files (x86)\Intel\...\MKL\ia32\lib
>
> Both numpy and scipy test suites pass almost every test on windows 64 bits.
>
> cheers,
>
> David

Hi David,

thanks for this advice. Can I use numscons to build scipy-0.7.1 and numpy-1.3.0 or should I use their current svn versions?

Another question: using numscons to build scipy implicitly means that numpy should have been built using numscons too (cf. compiler options that might be different, etc.)?
Eloi -- Eloi Gaudry Free Field Technologies Axis Park Louvain-la-Neuve Rue Emile Francqui, 1 B-1435 Mont-Saint Guibert BELGIUM Company Phone: +32 10 487 959 Company Fax: +32 10 454 626 From cournape at gmail.com Tue Dec 15 07:14:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 15 Dec 2009 17:44:36 +0530 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <4B2778C1.2070100@fft.be> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> Message-ID: <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> On Tue, Dec 15, 2009 at 5:23 PM, Eloi Gaudry wrote: > thanks for this advise. > Can I use numscons to build scipy-0.7.1 and numpy-1.3.0 or should I use > their current svn version ? You should use the svn version. Numpy 1.3.0 can be built under VS 2008 64 bits, but scipy 0.7.1 cannot (because of fortran, mostly), and svn scipy requires svn numpy. For numpy, you should use the 1.4.x branch, this one is the stable one for now. > > Another question, using numscons to build scipy implicitly means that > numpy should have been built using numscons too (cf. compilers options > that might be different, etc.) ? It is not strictly required, but that's strongly recommended, in part for the reasons you mentioned. There is no advantage in using distutils to build compared to numscons if your toolchain is supported that I can see anyway, cheers, David From wnbell at gmail.com Tue Dec 15 08:32:47 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 15 Dec 2009 08:32:47 -0500 Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow In-Reply-To: References: Message-ID: 2009/12/15 Dmitrey : > > Because some optimization problems can have constraints gradient of size > nVars*nConstraints very huge.? nVars are up to ~100'000, and nConstraints > are up to 200'000'000. 
I will just have memory overflow if I will try
> casting it to dense each time.

I looked at the CSR fancy indexing code and realized it was much worse than I originally thought, so I implemented a faster path in C++ for the operation. With SciPy r6139 ( http://projects.scipy.org/scipy/changeset/6139 ) I get a considerable improvement with a variation of your example:

from scipy.sparse import csr_matrix, find
from numpy import where, zeros, prod, vstack, ones
from time import time

n, m = 1000, 20000
M = vstack((zeros((n, m)), ones((25, m))))
I, J = where(M)

# 1: time for lil
M = csr_matrix(M)
print 'sparsity of the matrix:', float(M.size)/prod(M.shape)
t = time()
M[I, J]
print('time elapsed with csr: %f' % (time()-t))

# 2: time for dense
t = time()
M = M.A # even with this time elapsed is less
M[I, J]
print('time elapsed with ndarray: %f' % (time()-t))

# output:
# sparsity of the matrix: 0.0243902439024
# time elapsed with csr: 0.044210
# time elapsed with ndarray: 0.197938

> BTW, why return type of numpy.where is int64, but type of scipy.sparse.find
> is numpy.int32? I had spent lots of time to search why ipopt hadn't worked
> with that data, and it turned out it is due to this type difference (maybe
> some issues related to connecting Python code to C++). Also, does it mean
> that I cannot create sparse matrices with one dimension greater than
> max(int32) instead of max(int64)?

That's correct, max(int32) is the largest size you can use. There are a few reasons for this limitation. The first is that few people will have matrices with more than 2B rows or columns, since most meaningful matrices of that size would require 100GB+ of memory. The second is that external libraries (SuperLU, UMFPACK, ARPACK, etc.) don't necessarily support larger indices, or our wrappers for those libraries don't necessarily support it right now. Other SciPy components, like io.savemat(), would also have to be checked as well.
The C++ backend of the sparse module (sparsetools) is fully templated so adding support for 64-bit integers is fairly trivial there. However, it would have the downside of doubling the compilation time and executable size. -- Nathan Bell wnbell at gmail.com http://www.wnbell.com/ From tmp50 at ukr.net Tue Dec 15 09:05:46 2009 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Dec 2009 16:05:46 +0200 Subject: [SciPy-User] fancy indexation of sparse matrices is terrible slow In-Reply-To: Message-ID: thank you, it work very well now for both M and M.T Doesn't anyone know when next scipy/numpy releases are planned (approximately, of course)? Thank you in advance, D. I looked at the CSR fancy indexing code and realized it was much worse than I originally thought so I implemented a faster path in C++ for the operation. With SciPy r6139 ( http://projects.scipy.org/scipy/changeset/6139 ) I get a considerable improvement with a variation of your example -------------- next part -------------- An HTML attachment was scrubbed... URL: From afraser at lanl.gov Tue Dec 15 14:48:19 2009 From: afraser at lanl.gov (Andy Fraser) Date: Tue, 15 Dec 2009 12:48:19 -0700 Subject: [SciPy-User] How to draw an image in fltk from numpy array? Message-ID: <87pr6g9fy4.fsf@lanl.gov> Users, I have used scipy for a few years. Now I want to write a GUI in fltk to look at intermediate results of image manipulations, and I'm having trouble getting started. I get segmentation faults. I started with the "hello world" example at http://pyfltk.sourceforge.net/examples.php As a next step, I tried to display a small random image. Following the code for the fltk backend in matplotlib, I wrote the code I'll include below. When I run the code, it produces: ->python GUI.py before fltk.fl_draw_image Segmentation fault I suspect that argument "bits.data" is an inappropriate argument of fltk.fl_draw_image. Does anyone have a different idea, or advice on how to construct a correct argument? 
Andy

#file GUI.py
import sys, fltk, numpy

W = 180
H = 90
D = 3
X = 5
Y = 200
bits = numpy.empty( (W,H,3), numpy.uint8)

# class derived from /usr/share/pyshared/matplotlib/backends/backend_fltkagg.py
class FltkCanvas(fltk.Fl_Widget):

    def __init__(self,x,y,w,h):
        fltk.Fl_Widget.__init__(self, 0, 0, w, h, "canvas")
        self._oldsize=(None,None)
        self._draw_overlay = False
        self._button = None
        self._key = None

    def draw(self):
        newsize=(self.w(),self.h())
        if(self._oldsize !=newsize):
            self._oldsize =newsize
        print 'before fltk.fl_draw_image'
        fltk.fl_draw_image(bits.data,0,0,int(W),int(H),3,0)
        print 'did fltk.fl_draw_image'
        self.redraw()

def theCancelButtonCallback(ptr):
    sys.exit(0)

window = fltk.Fl_Window(100,100,200,400)
canvas = FltkCanvas(X,Y,W,H)
canvas.draw()
window.end()
window.show(len(sys.argv), sys.argv)
fltk.Fl.run()

From matthew.brett at gmail.com Tue Dec 15 14:56:41 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 15 Dec 2009 14:56:41 -0500
Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem
In-Reply-To: <1e2af89e0912140903q368aae52t1d3f82faedf7b377@mail.gmail.com>
References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com> <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com> <1e2af89e0912140903q368aae52t1d3f82faedf7b377@mail.gmail.com>
Message-ID: <1e2af89e0912151156s791ba3aft5676b05fad2a94cc@mail.gmail.com>

Hi,

On Mon, Dec 14, 2009 at 12:03 PM, Matthew Brett wrote:
> Hi,
>
>> The environment where it happens is python 2.5.4, scipy '0.8.0.dev6136'
>> But it is not limited to sparse, everything seems to be loaded with
>> writeable false (I'm not sure if this would be causing the problem
>> with umfpack or if it could cause other problems):
>
> Ah - how odd - thanks - that's a bug I should fix...
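One thing worth checking in the GUI code above (a guess, not a confirmed diagnosis): fl_draw_image expects one contiguous run of W*H*D bytes laid out row by row, while bits is allocated as (W, H, 3) uninitialized memory and passed as a raw .data buffer. A numpy-only sketch of building a byte buffer with the expected size and layout (tobytes() is the modern spelling; 2009-era numpy called it tostring()):

```python
import numpy as np

W, H, D = 180, 90, 3

# Allocate rows first (H x W x D) and zero-fill instead of numpy.empty,
# so the buffer contents are defined.
bits = np.zeros((H, W, D), np.uint8)

# A C image API typically wants a single contiguous block of W*H*D bytes;
# ascontiguousarray + tobytes guarantees that, unlike .data on a
# possibly non-contiguous view.
buf = np.ascontiguousarray(bits).tobytes()
print(len(buf) == W * H * D)  # True
```

Whether pyfltk accepts a bytes object rather than a buffer is version-dependent, so treat this as a direction to investigate rather than a fix.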
There's a very general fix in trunk r6141 - could you check whether it solves your problem with sparse? Thanks a lot, Matthew From tmp50 at ukr.net Tue Dec 15 16:09:36 2009 From: tmp50 at ukr.net (Dmitrey) Date: Tue, 15 Dec 2009 23:09:36 +0200 Subject: [SciPy-User] [ANN] OpenOpt 0.27 (optimization), FuncDesigner 0.17 (auto differentiation) Message-ID: Hi all, I'm glad to inform you about release of OpenOpt 0.27 (universal numerical optimization framework), FuncDesigner 0.17 (CAS with automatic differentiation, convenient modelling of linear/nonlinear functions, can use convenient modelling for some OpenOpt optimization problems and systems of linear/nonlinear equations, possibly sparse or overdetermined), DerApproximator 0.17 (finite-differences derivatives approximation, get or check user-supplied). license: BSD See also Changelog for details: http://openopt.org/Changelog Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Tue Dec 15 17:27:22 2009 From: robince at gmail.com (Robin) Date: Tue, 15 Dec 2009 22:27:22 +0000 Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem In-Reply-To: <1e2af89e0912151156s791ba3aft5676b05fad2a94cc@mail.gmail.com> References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com> <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com> <1e2af89e0912140903q368aae52t1d3f82faedf7b377@mail.gmail.com> <1e2af89e0912151156s791ba3aft5676b05fad2a94cc@mail.gmail.com> Message-ID: <2d5132a50912151427j1342e806tfca6f31628a65c84@mail.gmail.com> On Tue, Dec 15, 2009 at 7:56 PM, Matthew Brett wrote: > Hi, > > On Mon, Dec 14, 2009 at 12:03 PM, Matthew Brett wrote: >> Hi, >> >>> The environment where is happens is python 2.5.4, scipy '0.8.0.dev6136' >>> But it is not limited to sparse, everything seems to be 
loaded with >>> writeable false (I'm not sure if this would be causing the problem >>> with umfpack or if it could cause other problems): >> >> Ah - how odd - thanks - that's a bug I should fix... > > There's a very general fix in trunk r6141 - could you check whether it > solves your problem with sparse? Yep that seems to have done it. Can spsolve saved/loaded sparse with umfpack and writeable flag is set. Thanks a lot for the speedy fix! Presumably it was something in the loading rather than saving routine so previously saved files will load correctly? Cheers Robin From matthew.brett at gmail.com Tue Dec 15 17:31:13 2009 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 15 Dec 2009 17:31:13 -0500 Subject: [SciPy-User] loadmat/savemat sparse with umfpack problem In-Reply-To: <2d5132a50912151427j1342e806tfca6f31628a65c84@mail.gmail.com> References: <2d5132a50912120818n5d555321j2497aa3edeb3525b@mail.gmail.com> <4B25F77E.4060607@ntc.zcu.cz> <2d5132a50912140615t282c1708m5ca2b1c34d200433@mail.gmail.com> <1e2af89e0912140817x6c63e828nad30288cc6a28e48@mail.gmail.com> <2d5132a50912140854n698148e7o838617d6398dc38d@mail.gmail.com> <1e2af89e0912140903q368aae52t1d3f82faedf7b377@mail.gmail.com> <1e2af89e0912151156s791ba3aft5676b05fad2a94cc@mail.gmail.com> <2d5132a50912151427j1342e806tfca6f31628a65c84@mail.gmail.com> Message-ID: <1e2af89e0912151431u1ca8ee66y6adb41353ee88307@mail.gmail.com> Hi, >> There's a very general fix in trunk r6141 - could you check whether it >> solves your problem with sparse? > > Yep that seems to have done it. Can spsolve saved/loaded sparse with > umfpack and writeable flag is set. Thanks a lot for the speedy fix! No problem - I did a huge refactor in the loading routines, and I was expecting some new bugs, thanks for tracking it down. > Presumably it was something in the loading rather than saving routine > so previously saved files will load correctly? 
Yes - I had previously been copying all the loaded arrays, but now I am returning them directly, and I hadn't noticed that np.ndarray constructed arrays are writeable -> False by default. Best, Matthew From eg at fft.be Wed Dec 16 03:21:33 2009 From: eg at fft.be (Eloi Gaudry) Date: Wed, 16 Dec 2009 09:21:33 +0100 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> Message-ID: <4B28988D.8030401@fft.be> David Cournapeau wrote: > On Tue, Dec 15, 2009 at 5:23 PM, Eloi Gaudry wrote >> thanks for this advise. >> Can I use numscons to build scipy-0.7.1 and numpy-1.3.0 or should I use >> their current svn version ? > You should use the svn version. Numpy 1.3.0 can be built under VS 2008 > 64 bits, but scipy 0.7.1 cannot (because of fortran, mostly), and svn > scipy requires svn numpy. For numpy, you should use the 1.4.x branch, > this one is the stable one for now. I cannot afford to use the current svn (i.e. non-release) versions, but I will switch to numscons when the next releases will be made. An alternative to distutils is a good thing, especially if it's easily maintainable and configurable (i.e. adding support for newer compilers or changing whatever compilers flags, macro, etc. should be something easy). >> Another question, using numscons to build scipy implicitly means that >> numpy should have been built using numscons too (cf. compilers options >> that might be different, etc.) ? > It is not strictly required, but that's strongly recommended, in part > for the reasons you mentioned. 
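Returning to the writeable-flag bug fixed above in r6141: the behavior Matthew describes is easy to reproduce with plain numpy, and flipping the flag (or copying) was the workaround before the fix. A minimal sketch:

```python
import numpy as np

a = np.arange(5)
a.flags.writeable = False  # what the pre-r6141 loader effectively returned

try:
    a[0] = 10
except ValueError:
    print('array is read-only')

# Workaround: re-enable writes (a.copy() would also do)
a.setflags(write=True)
a[0] = 10
print(a[0])  # 10
```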
There is no advantage in using > distutils to build compared to numscons if your toolchain is supported > that I can see anyway, Does this means that all future versions of numpy and scipy will be built (default behavior) using numscons ? > > cheers, > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Thanks for your help David, Eloi From cournape at gmail.com Wed Dec 16 06:23:25 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 16 Dec 2009 16:53:25 +0530 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <4B28988D.8030401@fft.be> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> Message-ID: <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> On Wed, Dec 16, 2009 at 1:51 PM, Eloi Gaudry wrote: > An > alternative to distutils is a good thing, especially if it's easily > maintainable and configurable (i.e. adding support for newer compilers > or changing whatever compilers flags, macro, etc. should be something easy). Yes, indeed. Numscons was a first tentative in that direction. > Does this means that all future versions of numpy and scipy will be > built (default behavior) using numscons ? I myself use numscons pretty much exclusively, but I still use distutils to build the official binaries. I think numscons works well for developers, but it often fails for end-users who want to build, and this cannot be easily fixed if at all while being used within distutils (numscons still depends on distutils for packaging, etc...). Currently, I am working on a full, written from scratch packaging solution. 
I have officially announced it at Scipy India: git://github.com/cournape/toydist.git The main idea is to have a declarative file which can fully describe all that's needed for most packages, and have a stable and specified interface with real build systems like make/scons/waf for the others. David From bruce at clearscienceinc.com Wed Dec 16 09:57:40 2009 From: bruce at clearscienceinc.com (Bruce Ford) Date: Wed, 16 Dec 2009 09:57:40 -0500 Subject: [SciPy-User] suptitle interfering with figure Message-ID: Using the code below, I'm placing a second title on a figure. However this title is placed over the figure and does not adjust the figure placement for the title. Is there a way to adjust the placement of the figure further down? I'm not seeing any such setting. pyplot.suptitle(MakeTitle(param, hour, day, year, month, model, figure_type,date_time_type,mei_span,composite_year,composite_years,year_array,year1,day1,month1), fontsize=10) if plot_another == 1: pyplot.suptitle(MakeTitle(param1, hour1, day1, year1, month1, model1, figure_type1,date_time_type1,mei_span1,composite_year1,composite_years1,year_array1,year11,day11,month11), fontsize=10, y=.90) Any help would be appreciated. Bruce --------------------------------------- Bruce W. Ford Clear Science, Inc. bruce at clearscienceinc.com http://www.ClearScienceInc.com Phone/Fax: 904-379-9704 Skype: bruce.w.ford Google Talk: fordbw at gmail.com From nmb at wartburg.edu Wed Dec 16 10:31:00 2009 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Wed, 16 Dec 2009 09:31:00 -0600 Subject: [SciPy-User] suptitle interfering with figure In-Reply-To: References: Message-ID: <4B28FD34.7010105@wartburg.edu> On 2009-12-16 08:57 , Bruce Ford wrote: > Using the code below, I'm placing a second title on a figure. However > this title is placed over the figure and does not adjust the figure > placement for the title. Is there a way to adjust the placement of > the figure further down? I'm not seeing any such setting. 
> > pyplot.suptitle(MakeTitle(param, hour, day, year, month, > model, figure_type,date_time_type,mei_span,composite_year,composite_years,year_array,year1,day1,month1), > fontsize=10) > if plot_another == 1: > pyplot.suptitle(MakeTitle(param1, hour1, day1, year1, > month1, model1, > figure_type1,date_time_type1,mei_span1,composite_year1,composite_years1,year_array1,year11,day11,month11), > fontsize=10, y=.90) Matplotlib questions should be directed to the matplotlib-users list: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -Neil From eg at fft.be Wed Dec 16 14:29:46 2009 From: eg at fft.be (Eloi Gaudry) Date: Wed, 16 Dec 2009 20:29:46 +0100 Subject: [SciPy-User] [build] visual studio 2008 and intel fortran 11.1 compiler In-Reply-To: <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> Message-ID: <4B29352A.8090307@fft.be> On 16/12/2009 12:23, David Cournapeau wrote: > On Wed, Dec 16, 2009 at 1:51 PM, Eloi Gaudry wrote: > > >> An >> alternative to distutils is a good thing, especially if it's easily >> maintainable and configurable (i.e. adding support for newer compilers >> or changing whatever compilers flags, macro, etc. should be something easy). >> > Yes, indeed. Numscons was a first tentative in that direction. > > >> Does this means that all future versions of numpy and scipy will be >> built (default behavior) using numscons ? >> > I myself use numscons pretty much exclusively, but I still use > distutils to build the official binaries. 
I think numscons works well > for developers, but it often fails for end-users who want to build, > and this cannot be easily fixed if at all while being used within > distutils (numscons still depends on distutils for packaging, etc...). > > Currently, I am working on a full, written from scratch packaging > solution. I have officially announced it at Scipy India: > > git://github.com/cournape/toydist.git > > The main idea is to have a declarative file which can fully describe > all that's needed for most packages, and have a stable and specified > interface with real build systems like make/scons/waf for the others. > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Thanks for this information David. I just finished building scipy on both our 32-bit & 64-bit windows platforms, using disutils (patched python and numpy modules). I'll (hopefully) switch to scons for building the next official releases. Eloi From amg at iri.columbia.edu Wed Dec 16 15:21:12 2009 From: amg at iri.columbia.edu (Arthur M. Greene) Date: Wed, 16 Dec 2009 15:21:12 -0500 Subject: [SciPy-User] Sort b according to histogram(a) In-Reply-To: <4B29352A.8090307@fft.be> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> <4B29352A.8090307@fft.be> Message-ID: <4B294138.70608@iri.columbia.edu> This can be accomplished in a loop, but I'm hoping there is a more efficient way: Starting with two 1-D arrays indexed the same, e.g., x0,x1,x2... y0,y1,y2... the x's are first binned normally, i.e., a set of edges is defined and each x that falls in a particular bin generates a count. (This can be accomplished using histogram.) 
What I then need to do though, is find the average value of the corresponding y's. Example: x = (1,4,7), y = (200,100,1000), edges = (0,5,10) Then counts = (2,1), ydata = (150,1000) Size of x or y is only about 500, but the procedure needs to be repeated many times and looping makes the execution quite slow. I've been looking at np.digitize, but haven't quite figured out how this (or some other call I don't know) might be used to "vectorize" the process. Suggestions appreciated! Thx, Arthur *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* Arthur M. Greene The International Research Institute for Climate and Society Monell Building, 61 Route 9W, Palisades, NY 10964-8000 USA amg at iri . columbia . edu *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* -------------- next part -------------- A non-text attachment was scrubbed... Name: amg.vcf Type: text/x-vcard Size: 174 bytes Desc: not available URL: From kwgoodman at gmail.com Wed Dec 16 15:52:52 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 16 Dec 2009 12:52:52 -0800 Subject: [SciPy-User] Sort b according to histogram(a) In-Reply-To: <4B294138.70608@iri.columbia.edu> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> <4B29352A.8090307@fft.be> <4B294138.70608@iri.columbia.edu> Message-ID: On Wed, Dec 16, 2009 at 12:21 PM, Arthur M. Greene wrote: > This can be accomplished in a loop, but I'm hoping there is a more efficient > way: Starting with two 1-D arrays indexed the same, e.g., > > x0,x1,x2... y0,y1,y2... > > the x's are first binned normally, i.e., a set of edges is defined and each > x that falls in a particular bin generates a count. (This can be > accomplished using histogram.) 
What I then need to do though, is find the > average value of the corresponding y's. Example: > > x = (1,4,7), y = (200,100,1000), edges = (0,5,10) > > Then > > counts = (2,1), ydata = (150,1000) > > Size of x or y is only about 500, but the procedure needs to be repeated > many times and looping makes the execution quite slow. I've been looking at > np.digitize, but haven't quite figured out how this (or some other call I > don't know) might be used to "vectorize" the process. Suggestions > appreciated! If you don't have many edges then looping might be faster than something like this: >> x = (1,4,7) >> y = (200,100,1000) >> edges = (0,5,10) >> idx = np.digitize(x, edges) >> idx array([1, 1, 2]) >> jdx = np.equal.outer(idx, np.unique1d(idx)) >> jdx array([[ True, False], [ True, False], [False, True]], dtype=bool) Now, how to use jdx to find the means.... From aisaac at american.edu Wed Dec 16 15:59:22 2009 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Dec 2009 15:59:22 -0500 Subject: [SciPy-User] Sort b according to histogram(a) In-Reply-To: <4B294138.70608@iri.columbia.edu> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> <4B29352A.8090307@fft.be> <4B294138.70608@iri.columbia.edu> Message-ID: <4B294A2A.4050101@american.edu> On 12/16/2009 3:21 PM, Arthur M. 
Greene wrote: > x = (1,4,7), y = (200,100,1000), edges = (0,5,10) > counts = (2,1), ydata = (150,1000) >>> gety = np.where(np.equal.outer(np.arange(1,len(edges)),np.digitize(x,edges)),y,0) >>> counts = (gety>0).sum(axis=1) >>> gety.sum(axis=1) / counts array([ 150, 1000]) fwiw, Alan Isaac From ferrell at diablotech.com Wed Dec 16 16:45:31 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Wed, 16 Dec 2009 14:45:31 -0700 Subject: [SciPy-User] Pickling timeseries Dates In-Reply-To: <161504.16801.qm@web33006.mail.mud.yahoo.com> References: <161504.16801.qm@web33006.mail.mud.yahoo.com> Message-ID: On Dec 10, 2009, at 9:28 PM, David Baddeley wrote: > It also shouldn't be to hard to register a new pickle handler that > takes care of the conversion to e.g. datetime for you. I'm not > familiar with timeseries stuff, but wrote something similar to let > me pickle slices, which I've included below - as you can see it's > pretty trivial. After that you can carry on treating your date > (slice) objects as though they were picklable. > > import copy_reg > > def pickleSlice(slice): > return unpickleSlice, (slice.start, slice.stop, slice.step) > > def unpickleSlice(start, stop, step): > return slice(start, stop, step) > > copy_reg.pickle(slice, pickleSlice, unpickleSlice) > > best wishes, > David > That was a great suggestion. Seems to work for me. If I import this at the top of my package (or as soon as I can) will it be available whenever I need it? In case anybody else needs to pickle Dates, here's the code that seems to work for me. = = = = = = = = = = = = = = = = = ======================================================================== # Attempt to pickle timeseries date instances. # This converts a ts.Date instance to a string # Frequency is lost, but I'm only using 'd', so that's okay for me. 
import pickle
import copy_reg
import scikits.timeseries as ts

def pickleDT(dt):
    """Pickle __reduce__ method for a date instance dt by converting to a string, then pickling."""
    dA = str(dt)
    ds = pickle.dumps(dA)
    # This is what __reduce__ is supposed to return
    return unpickleDT, (ds,)

def unpickleDT(dA):
    """Unpickle a date instance with frequency 'd'."""
    ds = pickle.loads(dA)
    dt = ts.Date('d', ds)
    return dt

# Register these methods
copy_reg.pickle(ts.Date, pickleDT, unpickleDT)

if __name__ == '__main__':
    sD = ts.Date('d', '2009-11-30')
    ds = pickle.dumps(sD)
    dt = pickle.loads(ds)
    print 'Started with %s, unpickled %s.' % (sD, dt)
    if dt == sD:
        print 'Yeah, flawless victory!'
    else:
        print 'Boo, shameful failure.'

From amg at iri.columbia.edu Wed Dec 16 16:53:18 2009 From: amg at iri.columbia.edu (Arthur M. Greene) Date: Wed, 16 Dec 2009 16:53:18 -0500 Subject: [SciPy-User] Sort b according to histogram(a) In-Reply-To: <4B294A2A.4050101@american.edu> References: <4B262C14.8010102@fft.be> <4B275FD8.9030207@fft.be> <5b8d13220912150237q5c91dccfk2322d481b4e13eb8@mail.gmail.com> <4B2778C1.2070100@fft.be> <5b8d13220912150414l5f2da4e4o15967fd735e9202f@mail.gmail.com> <4B28988D.8030401@fft.be> <5b8d13220912160323o18c923f7r27269ae52e58188c@mail.gmail.com> <4B29352A.8090307@fft.be> <4B294138.70608@iri.columbia.edu> <4B294A2A.4050101@american.edu> Message-ID: <4B2956CE.5040807@iri.columbia.edu> Alan G Isaac wrote: > On 12/16/2009 3:21 PM, Arthur M. Greene wrote: >> x = (1,4,7), y = (200,100,1000), edges = (0,5,10) >> counts = (2,1), ydata = (150,1000) > > >>>> gety = >>>>np.where(np.equal.outer(np.arange(1,len(edges)),np.digitize(x,edges)),y,0) >>>> counts = (gety>0).sum(axis=1) >>>> gety.sum(axis=1) / counts > array([ 150, 1000]) > The above looks like it will work (took a minute to figure out what "outer" is doing), with one modification: y can be of either sign, so counts becomes (gety!=0).sum(axis=1).
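For reference, the binned-mean computation discussed in this thread can also be written with np.digitize plus np.bincount, which avoids the outer-product intermediate and sidesteps the zero-y and sign issues entirely; a minimal sketch using the example values from the thread:

```python
import numpy as np

x = np.array([1, 4, 7])
y = np.array([200.0, 100.0, 1000.0])
edges = np.array([0, 5, 10])

# For each element of x, digitize returns the index of the bin it falls in
# (edges[i-1] <= x < edges[i] with the default right=False).
idx = np.digitize(x, edges)

# bincount with weights sums y per bin index; dividing those sums by the
# plain counts gives the per-bin means, with empty bins dropped.
counts = np.bincount(idx)
sums = np.bincount(idx, weights=y)
nonempty = counts > 0
means = sums[nonempty] / counts[nonempty]
```

This reproduces the expected counts = (2, 1) and ydata = (150, 1000) without a Python loop over bins, regardless of the sign of y.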
y could also conceivably be exactly zero, in which case the count would be missed even with the above modification. But this is not too likely -- the raw values are provided to many decimal places. Thanks for the quick and helpful replies! Arthur *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* Arthur M. Greene The International Research Institute for Climate and Society The Earth Institute, Columbia University amg at iri . columbia . edu *^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~*^*~* -------------- next part -------------- A non-text attachment was scrubbed... Name: amg.vcf Type: text/x-vcard Size: 174 bytes Desc: not available URL: From rjchacko at gmail.com Thu Dec 17 10:23:40 2009 From: rjchacko at gmail.com (Ranjit Chacko) Date: Thu, 17 Dec 2009 10:23:40 -0500 Subject: [SciPy-User] fftpack problem Message-ID: Hi, I have a loop in my program which repeatedly calls ifft2 and fft2 in order to do a convolution. After many iterations the program crashed because of a memory error. Is there a memory leak in these functions? 
Python(8811) malloc: *** mmap(size=4194304) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Traceback (most recent call last): File "/Users/rjchacko/Python/pythonsims/src/gagne.py", line 92, in nonLocalTerm = ifft2(fft2(phi) * Ktilde) File "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", line 415, in ifft2 return ifftn(x,shape,axes,overwrite_x) File "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", line 393, in ifftn return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) File "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", line 312, in _raw_fftnd x = _fix_shape(x, s[i], waxes[i]) File "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", line 38, in _fix_shape z = zeros(s,x.dtype.char) MemoryError -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Dec 17 10:35:06 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 17 Dec 2009 08:35:06 -0700 Subject: [SciPy-User] fftpack problem In-Reply-To: References: Message-ID: On Thu, Dec 17, 2009 at 8:23 AM, Ranjit Chacko wrote: > Hi, > > I have a loop in my program which repeatedly calls ifft2 and fft2 in order > to do a convolution. After many iterations the program crashed because of a > memory error. Is there a memory leak in these functions? 
> > Python(8811) malloc: *** mmap(size=4194304) failed (error code=12) > *** error: can't allocate region > *** set a breakpoint in malloc_error_break to debug > Traceback (most recent call last): > File "/Users/rjchacko/Python/pythonsims/src/gagne.py", line 92, in > > nonLocalTerm = ifft2(fft2(phi) * Ktilde) > File > "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", > line 415, in ifft2 > return ifftn(x,shape,axes,overwrite_x) > File > "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", > line 393, in ifftn > return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) > File > "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", > line 312, in _raw_fftnd > x = _fix_shape(x, s[i], waxes[i]) > File > "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", > line 38, in _fix_shape > z = zeros(s,x.dtype.char) > MemoryError > > Maybe. But we need a code sample that reproduces the problem to track it down. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Dec 17 10:58:30 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 17 Dec 2009 10:58:30 -0500 Subject: [SciPy-User] fftpack problem In-Reply-To: References: Message-ID: <1cd32cbb0912170758x41e7ead5j5f779c7dcf3b1530@mail.gmail.com> On Thu, Dec 17, 2009 at 10:35 AM, Charles R Harris wrote: > > > On Thu, Dec 17, 2009 at 8:23 AM, Ranjit Chacko wrote: >> >> Hi, >> I have a loop in my program which repeatedly calls ifft2 and fft2 in order >> to do a convolution. After many iterations the program crashed because of a >> memory error. Is there a memory leak in these functions? 
>> Python(8811) malloc: *** mmap(size=4194304) failed (error code=12) >> *** error: can't allocate region >> *** set a breakpoint in malloc_error_break to debug >> Traceback (most recent call last): >> File "/Users/rjchacko/Python/pythonsims/src/gagne.py", line 92, in >> >> nonLocalTerm = ifft2(fft2(phi) * Ktilde) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 415, in ifft2 >> return ifftn(x,shape,axes,overwrite_x) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 393, in ifftn >> return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 312, in _raw_fftnd >> x = _fix_shape(x, s[i], waxes[i]) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 38, in _fix_shape >> z = zeros(s,x.dtype.char) >> MemoryError > > Maybe. But we need a code sample that reproduces the problem to track it > down. Isn't there a warning somewhere that fft is caching? I cannot find it anymore. (or maybe it's for something else) Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ferrell at diablotech.com Thu Dec 17 16:47:05 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Thu, 17 Dec 2009 14:47:05 -0700 Subject: [SciPy-User] SciKits.TimeSeries, Error unpickling time series of dicts Message-ID: I get an error unpickling a time series of dicts. I'm on OS X 10.6. -robert Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) Type "copyright", "credits" or "license" for more information. IPython 0.10 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features.
%quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import scikits.timeseries as ts In [2]: import pickle In [3]: sD = ts.Date('d', '2009-01-01') In [4]: dictTS = ts.time_series(dates=[sD], data=[{'a':1}]) In [5]: dp = pickle.dumps(dictTS) In [6]: pickle.loads(dp) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/TandG/ in () /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pickle.pyc in loads(str) 1372 def loads(str): 1373 file = StringIO(str) -> 1374 return Unpickler(file).load() 1375 1376 # Doctest /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pickle.pyc in load(self) 856 while 1: 857 key = read(1) --> 858 dispatch[key](self) 859 except _Stop, stopinst: 860 return stopinst.value /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pickle.pyc in load_build(self) 1215 setstate = getattr(inst, "__setstate__", None) 1216 if setstate: -> 1217 setstate(state) 1218 return 1219 slotstate = None /Library/Python/2.6/site-packages/scikits.timeseries-0.91.3-py2.6- macosx-10.6-universal.egg/scikits/timeseries/tseries.pyc in __setstate__(self, state) 1303 """ 1304 (ver, shp, typ, isf, raw, msk, flv, dsh, dtm, frq, infodict) = state -> 1305 MaskedArray.__setstate__(self, (ver, shp, typ, isf, raw, msk, flv)) 1306 _dates = self._dates 1307 _dates.__setstate__((dsh, dtype(int_), isf, dtm)) /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/ma/core.pyc in __setstate__(self, state) 5321 """ 5322 (ver, shp, typ, isf, raw, msk, flv) = state -> 5323 ndarray.__setstate__(self, (shp, typ, isf, raw)) 5324 self._mask.__setstate__((shp, make_mask_descr(typ), isf, msk)) 5325 self.fill_value = flv TypeError: object pickle not returning list In [7]: From pgmdevlist at gmail.com Thu Dec 17 17:01:22 2009 From: pgmdevlist 
at gmail.com (Pierre GM) Date: Thu, 17 Dec 2009 17:01:22 -0500 Subject: [SciPy-User] SciKits.TimeSeries, Error unpickling time series of dicts In-Reply-To: References: Message-ID: <1F22835F-BA5D-4C0E-A208-1DFB5ECEF180@gmail.com> On Dec 17, 2009, at 4:47 PM, Robert Ferrell wrote: > I get an error unpickling a time series of dicts. I'm on OS X 10.6. > Robert, I gonna check that. For future reference, please create a ticket as well, so that we can keep track of what must be done Cheers P. From ferrell at diablotech.com Thu Dec 17 17:09:55 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Thu, 17 Dec 2009 15:09:55 -0700 Subject: [SciPy-User] SciKits.TimeSeries, Error unpickling time series of dicts In-Reply-To: <1F22835F-BA5D-4C0E-A208-1DFB5ECEF180@gmail.com> References: <1F22835F-BA5D-4C0E-A208-1DFB5ECEF180@gmail.com> Message-ID: <430A413E-1255-4BC9-AEAC-C10784AA0CA8@diablotech.com> On Dec 17, 2009, at 3:01 PM, Pierre GM wrote: > On Dec 17, 2009, at 4:47 PM, Robert Ferrell wrote: >> I get an error unpickling a time series of dicts. I'm on OS X 10.6. >> > > Robert, I gonna check that. > For future reference, please create a ticket as well, so that we can > keep track of what must be done I'll certainly do that. Where/how do I file a ticket to a SciKit? Actually, I think I just found it at http://projects.scipy.org/scikits/query?component=timeseries&order=status . 
-robert From pgmdevlist at gmail.com Thu Dec 17 17:12:24 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 17 Dec 2009 17:12:24 -0500 Subject: [SciPy-User] SciKits.TimeSeries, Error unpickling time series of dicts In-Reply-To: <430A413E-1255-4BC9-AEAC-C10784AA0CA8@diablotech.com> References: <1F22835F-BA5D-4C0E-A208-1DFB5ECEF180@gmail.com> <430A413E-1255-4BC9-AEAC-C10784AA0CA8@diablotech.com> Message-ID: <9B1AAD07-B172-49B7-83B7-5717B0DDBD8B@gmail.com> On Dec 17, 2009, at 5:09 PM, Robert Ferrell wrote: > > On Dec 17, 2009, at 3:01 PM, Pierre GM wrote: > >> On Dec 17, 2009, at 4:47 PM, Robert Ferrell wrote: >>> I get an error unpickling a time series of dicts. I'm on OS X 10.6. >>> >> >> Robert, I gonna check that. >> For future reference, please create a ticket as well, so that we can >> keep track of what must be done > > I'll certainly do that. Where/how do I file a ticket to a SciKit? > Actually, I think I just found it at http://projects.scipy.org/scikits/query?component=timeseries&order=status You got it. You can assign the ticket to me directly (that way I get notified...) From ndbecker2 at gmail.com Fri Dec 18 08:01:01 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 18 Dec 2009 08:01:01 -0500 Subject: [SciPy-User] [ANN] OpenOpt 0.27 (optimization), FuncDesigner 0.17 (auto differentiation) References: Message-ID: Dmitrey wrote: > > > Hi all, > I'm glad to inform you about release of OpenOpt 0.27 (universal > numerical optimization framework), FuncDesigner 0.17 (CAS with > automatic differentiation, convenient modelling of linear/nonlinear > functions, can use convenient modelling for some OpenOpt optimization > problems and systems of linear/nonlinear equations, possibly sparse or > overdetermined), DerApproximator 0.17 (finite-differences derivatives > approximation, get or check user-supplied). > > > > license: BSD > > > > See also Changelog for details: > > > http://openopt.org/Changelog > > Regards, D. 
This was also announced on pypi, but it doesn't work: sudo easy_install FuncDesigner Searching for FuncDesigner Reading http://pypi.python.org/simple/FuncDesigner/ Reading http://openopt.org No local packages or download links found for FuncDesigner error: Could not find suitable distribution for Requirement.parse('FuncDesigner') From rjchacko at gmail.com Fri Dec 18 10:50:54 2009 From: rjchacko at gmail.com (Ranjit Chacko) Date: Fri, 18 Dec 2009 10:50:54 -0500 Subject: [SciPy-User] fftpack problem In-Reply-To: References: Message-ID: The problem was with matplotlib not fftpack. The memory error just happened to occur during the call to ifft. Thanks, -Ranjit On Thu, Dec 17, 2009 at 10:35 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Thu, Dec 17, 2009 at 8:23 AM, Ranjit Chacko wrote: > >> Hi, >> >> I have a loop in my program which repeatedly calls ifft2 and fft2 in order >> to do a convolution. After many iterations the program crashed because of a >> memory error. Is there a memory leak in these functions? 
>> >> Python(8811) malloc: *** mmap(size=4194304) failed (error code=12) >> *** error: can't allocate region >> *** set a breakpoint in malloc_error_break to debug >> Traceback (most recent call last): >> File "/Users/rjchacko/Python/pythonsims/src/gagne.py", line 92, in >> >> nonLocalTerm = ifft2(fft2(phi) * Ktilde) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 415, in ifft2 >> return ifftn(x,shape,axes,overwrite_x) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 393, in ifftn >> return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 312, in _raw_fftnd >> x = _fix_shape(x, s[i], waxes[i]) >> File >> "/Library/Frameworks/Python.framework/Versions/5.1.1/lib/python2.5/site-packages/scipy/fftpack/basic.py", >> line 38, in _fix_shape >> z = zeros(s,x.dtype.char) >> MemoryError >> >> > Maybe. But we need a code sample that reproduces the problem to track it > down. > > Chuck > -------------- next part -------------- An HTML attachment was scrubbed... URL: From afraser at lanl.gov Fri Dec 18 12:57:58 2009 From: afraser at lanl.gov (Andy Fraser) Date: Fri, 18 Dec 2009 10:57:58 -0700 Subject: [SciPy-User] How to draw an image in fltk from numpy array? In-Reply-To: <87pr6g9fy4.fsf@lanl.gov> (Andy Fraser's message of "Tue\, 15 Dec 2009 12\:48\:19 -0700") References: <87pr6g9fy4.fsf@lanl.gov> Message-ID: <87oclw88rd.fsf@lanl.gov> I've done some more looking around and found answers to my own questions. The key was to fetch the source for pyfltk from http://pyfltk.sourceforge.net/index.php#download and look at the examples (in particular draw.py) in pyFltk-1.1.4/fltk/test. I don't understand why, but after removing the line: canvas.draw() from my code, it ran. 
The code also runs if the first argument to fl_draw_image is bits rather than bits.data. I include some of my question for context. >>>>> "AF" == Andy Fraser writes: AF> [...] I want to write a GUI in fltk to look at intermediate AF> results of image manipulations, [...] I get segmentation AF> faults. AF> [...] code [...] below. AF> #file GUI.py AF> import sys, fltk, numpy AF> W = 180 AF> H = 90 AF> D = 3 AF> X = 5 AF> Y = 200 AF> bits = numpy.empty( (W,H,3), numpy.uint8) AF> # class derived from /usr/share/pyshared/matplotlib/backends/backend_fltkagg.py AF> AF> class FltkCanvas(fltk.Fl_Widget): AF> AF> def __init__(self,x,y,w,h): AF> fltk.Fl_Widget.__init__(self, 0, 0, w, h, "canvas") AF> self._oldsize=(None,None) AF> self._draw_overlay = False AF> self._button = None AF> self._key = None AF> AF> AF> def draw(self): AF> newsize=(self.w(),self.h()) AF> if(self._oldsize !=newsize): AF> self._oldsize =newsize AF> print 'before fltk.fl_draw_image' AF> fltk.fl_draw_image(bits.data,0,0,int(W),int(H),3,0) AF> print 'did fltk.fl_draw_image' AF> self.redraw() AF> AF> def theCancelButtonCallback(ptr): AF> sys.exit(0) AF> AF> window = fltk.Fl_Window(100,100,200,400) AF> canvas = FltkCanvas(X,Y,W,H) AF> canvas.draw() AF> window.end() AF> window.show(len(sys.argv), sys.argv) AF> fltk.Fl.run() Andy From Chris.Barker at noaa.gov Fri Dec 18 15:05:46 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 18 Dec 2009 12:05:46 -0800 Subject: [SciPy-User] How to draw an image in fltk from numpy array? In-Reply-To: <87oclw88rd.fsf@lanl.gov> References: <87pr6g9fy4.fsf@lanl.gov> <87oclw88rd.fsf@lanl.gov> Message-ID: <4B2BE09A.9090607@noaa.gov> Andy Fraser wrote: > I've done some more looking around and found answers to my own > questions. The key was to fetch the source for pyfltk from > http://pyfltk.sourceforge.net/index.php#download and look at the > examples yup -- I actually poked around a little looking for an answer for you, but docs are very sparse! 
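The observation above, that fl_draw_image accepts the array itself rather than bits.data, works because NumPy arrays expose the buffer protocol; a minimal sketch that checks this without pyfltk at all (memoryview here stands in for any C-level consumer that expects a raw buffer, and the array shape simply mirrors the GUI example):

```python
import numpy as np

# NumPy arrays expose the buffer protocol, so any consumer that accepts a
# raw byte buffer can take the ndarray directly; the explicit .data
# attribute is not required.
bits = np.zeros((90, 180, 3), np.uint8)

view = memoryview(bits)           # succeeds only because ndarray is a buffer
assert view.nbytes == bits.size   # one byte per uint8 element: 90 * 180 * 3
```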
> The code also runs if the first argument to > fl_draw_image is bits rather than bits.data. I figured -- numpy arrays expose the buffer interface, so you should be able to pass one into anything expecting a buffer. Glad you got it figured out. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From peter.shepard at gmail.com Sun Dec 20 12:26:30 2009 From: peter.shepard at gmail.com (Pete Shepard) Date: Sun, 20 Dec 2009 09:26:30 -0800 Subject: [SciPy-User] fisherexact.py returns None Message-ID: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> Hello, I am using a fisher exact test I got from http://projects.scipy.org/scipy/attachment/ticket/956/fisher.py. This program takes in two tuples and returns an odds ratio and a p-value. Most tuples are handled nicely by the script but certain tuples eg "[[6, 1], [1, 6]]" return "None". I am wondering if anyone knows why this is true? TIA -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Dec 20 13:37:38 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 20 Dec 2009 13:37:38 -0500 Subject: [SciPy-User] fisherexact.py returns None In-Reply-To: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> Message-ID: <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> On Sun, Dec 20, 2009 at 12:26 PM, Pete Shepard wrote: > Hello, > > I am using a fisher exact test I got from > http://projects.scipy.org/scipy/attachment/ticket/956/fisher.py. This > program takes in two tuples and returns an odds ratio and a? p-value. Most > tuples are handled nicely by the script but certain tuples eg "[[6, 1], [1, > 6]]" return "None". I am wondering if anyone knows why this is true? 
I get this, with and without the patch of thomas in the comments

>>> fisherExact([[6, 1], [1, 6]])
(36.0, 0.029137529137528768)

Do you have an exact example? Which version of numpy and scipy? Looking at the code, I don't see any reason why it should return None. Josef > > TIA > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From peter.shepard at gmail.com Sun Dec 20 15:50:33 2009 From: peter.shepard at gmail.com (Pete Shepard) Date: Sun, 20 Dec 2009 12:50:33 -0800 Subject: [SciPy-User] fisherexact.py returns None In-Reply-To: <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> Message-ID: <5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com> Thanks for the reply Josef, Numpy version =1:1.3.0-3 SciPy version=0.7.0-2 Here is the code I am using, interestingly line 99 chokes in the testFisherExact() example, don't know if these problems are related

# -*- coding: utf-8 -*-
#!/usr/bin/python

import re
import sys, os
from scipy.stats import *
#from scipy import array,searchsorted, histogram
import re
import operator
import matplotlib
matplotlib.use('PDF')
import pylab

def fisherExact(c) :
    """Performs a Fisher exact test on a 2x2 contingency table in list of lists
    format.  Returns a tuple of (odds ratio, two-tailed P-value).

    Examples:
    >>> fisherExact([[100, 2], [1000, 5]])
    (0.25, 0.13007593634330314)
    """
    oddsRatio = c[0][0] * c[1][1] / float(c[1][0] * c[0][1])
    n1 = c[0][0] + c[0][1]
    n2 = c[1][0] + c[1][1]
    n  = c[0][0] + c[1][0]

    mode = int(float((n + 1) * (n1 + 1)) / (n1 + n2 + 2))
    pExact = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
    pMode = hypergeom.pmf(c[0][0], n1 + n2, n1, n)

    if c[0][0] == mode :
        return oddsRatio, 1.0
    elif c[0][0] < mode :
        pLower = hypergeom.cdf(c[0][0], n1 + n2, n1, n)

        # Binary search for where to begin upper half.
        min = mode
        max = n
        guess = -1
        while min != max :
            guess = max if (max == min + 1 and guess == min) else \
                        (max + min) / 2
            pGuess = hypergeom.pmf(guess, n1 + n2, n1, n)
            if pGuess <= pExact and hypergeom.pmf(guess - 1, n1 + n2, n1, n) > pExact :
                break
            elif pGuess < pExact :
                max = guess
            else :
                min = guess

        if guess == -1 and min == max :
            guess = min

        return oddsRatio, pLower + hypergeom.sf(guess - 1, n1 + n2, n1, n)
    else :
        pUpper = hypergeom.sf(c[0][0] - 1, n1 + n2, n1, n);

        # Special case to prevent binary search from getting stuck.
        if hypergeom.pmf(0, n1 + n2, n1, n) > pExact :
            return oddsRatio, pUpper

        # Binary search for where to begin lower half.
        min = 0
        max = mode
        guess = -1
        while min != max :
            guess = max if (max == min + 1 and guess == min) else \
                        (max + min) / 2
            pGuess = hypergeom.pmf(guess, n1 + n2, n1, n);
            if pGuess <= pExact and hypergeom.pmf(guess + 1, n1 + n2, n1, n) > pExact :
                break;
            elif pGuess <= pExact :
                min = guess
            else :
                max = guess

        if guess == -1 and min == max :
            guess = min
            return oddsRatio, pUpper + hypergeom.cdf(guess, n1 + n2, n1, n)

def testFisherExact() :
    """Just some tests to show that fisherExact() works correctly."""
    def approxEqual(n1, n2) :
        return abs(n1 - n2) < 0.01

    res = fisherExact([[100, 2], [1000, 5]])
    assert(approxEqual(res[1], 0.1301))
    assert(approxEqual(res[0], 0.25))
    res = fisherExact([[2, 7], [8, 2]])
    assert(approxEqual(res[1], 0.0230141))
    assert(approxEqual(res[0], 4.0 / 56))
    res = fisherExact([[100, 2], [1000, 5]])
    #assert(approxEqual(res[1], 0.1973244))
        res = fisherExact([[5, 15], [20, 20]])
    assert(approxEqual(res[1], 0.0958044))
    res = fisherExact([[5, 16], [20, 25]])
    assert(approxEqual(res[1], 0.1725862))
    res = fisherExact([[10, 5], [10, 1]])
    assert(approxEqual(res[1], 0.1973244))

D=[[6, 1], [1, 6]]
testFisherExact()
p=fisherExact(D)
print p

On Sun, Dec 20, 2009 at 10:37 AM, wrote: > On Sun, Dec 20, 2009 at 12:26 PM, Pete Shepard > wrote: > > Hello, > > > > I am using a fisher exact test I got from > > http://projects.scipy.org/scipy/attachment/ticket/956/fisher.py. This > > program takes in two tuples and returns an odds ratio and a p-value. > Most > > tuples are handled nicely by the script but certain tuples eg "[[6, 1], > [1, > > 6]]" return "None". I am wondering if anyone knows why this is true? > > I get this, with and without the patch of thomas in the comments > > >>> fisherExact([[6, 1], [1, 6]]) > (36.0, 0.029137529137528768) > > Do you have an exact example? Which version of numpy and scipy? > Looking at the code, I don't see any reason why it should return None.
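As a sanity check on the expected answer for the table in question, the two-sided Fisher exact p-value can be computed from first principles with nothing but binomial coefficients; a self-contained sketch (the helper name fisher_exact_2x2 and the tie tolerance are our choices, not part of the patch; essentially this ticket's code later landed in SciPy as scipy.stats.fisher_exact):

```python
from math import comb  # Python 3.8+

def fisher_exact_2x2(table):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    (a, b), (c, d) = table
    n1, n2, n = a + b, c + d, a + c
    total = comb(n1 + n2, n)

    def pmf(k):
        return comb(n1, k) * comb(n2, n - k) / total

    p_obs = pmf(a)
    # Small relative tolerance so symmetric tables count their mirror image.
    p = sum(pmf(k) for k in range(max(0, n - n2), min(n1, n) + 1)
            if pmf(k) <= p_obs * (1 + 1e-9))
    odds = a * d / float(b * c) if b * c else float('inf')
    return odds, p

print(fisher_exact_2x2([[6, 1], [1, 6]]))  # (36.0, 0.0291375...)
```

For [[6, 1], [1, 6]] this gives odds ratio 36 and p = 100/3432, agreeing with the (36.0, 0.029137529...) value quoted in the thread.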
> > Josef > > > > TIA > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Dec 20 16:05:04 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 20 Dec 2009 16:05:04 -0500 Subject: [SciPy-User] fisherexact.py returns None In-Reply-To: <5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com> References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> <5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com> Message-ID: <1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com> On Sun, Dec 20, 2009 at 3:50 PM, Pete Shepard wrote: > Thanks for the reply Josef, > > Numpy version =1:1.3.0-3 > SciPy version=0.7.0-2 > > > Here is the code I am using, interstingly line 99 chokes in the > testFisherExact() : example, don't know if these problems are related > > > # -*- coding: utf-8 -*- > > #!/usr/bin/python > > > import re > import sys, os > from scipy.stats import * > #from scipy import array,searchsorted, histogram > import re > import operator > import matplotlib > matplotlib.use('PDF') > import pylab > " > > def fisherExact(c) : > ??? ??????? """Performs a Fisher exact test on a 2x2 contingency table in > list of lists > ??? ?????????? format.? Returns a tuple of (odds ratio, two-tailed P-value). > > ??? ?????????? Examples: > ??? ?????????? >>> fisherExact([[100, 2], [1000, 5]]) > ??? ?????????? (0.25, 0.13007593634330314) > ??? ?????????? """ > ??? ??????? oddsRatio = c[0][0] * c[1][1] / float(c[1][0] * c[0][1]) > ??? ??????? n1 = c[0][0] + c[0][1] > ??? ??????? 
n2 = c[1][0] + c[1][1]
>     n  = c[0][0] + c[1][0]
>
>     mode = int(float((n + 1) * (n1 + 1)) / (n1 + n2 + 2))
>     pExact = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
>     pMode = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
>
>     if c[0][0] == mode :
>         return oddsRatio, 1.0
>     elif c[0][0] < mode :
>         pLower = hypergeom.cdf(c[0][0], n1 + n2, n1, n)
>
>         # Binary search for where to begin upper half.
>         min = mode
>         max = n
>         guess = -1
>         while min != max :
>             guess = max if (max == min + 1 and guess == min)
> else \
>                         (max + min) / 2
>
>             pGuess = hypergeom.pmf(guess, n1 + n2, n1, n)
>             if pGuess <= pExact and hypergeom.pmf(guess - 1,
> n1 + n2, n1, n) > pExact :
>                 break
>             elif pGuess < pExact :
>                 max = guess
>             else :
>                 min = guess
>
>         if guess == -1 and min == max :
>             guess = min
>
>         return oddsRatio, pLower + hypergeom.sf(guess - 1, n1 +
> n2, n1, n)
>     else :
>         pUpper = hypergeom.sf(c[0][0] - 1, n1 + n2, n1, n);
>
>         # Special case to prevent binary search from getting
> stuck.
>         if hypergeom.pmf(0, n1 + n2, n1, n) > pExact :
>             return oddsRatio, pUpper
>
>         # Binary search for where to begin lower half.
>         min = 0
>         max = mode
>         guess = -1
>         while min != max :
>
guess = max if (max == min + 1 and guess == min)
> else \
>                         (max + min) / 2
>             pGuess = hypergeom.pmf(guess, n1 + n2, n1, n);
>
>             if pGuess <= pExact and hypergeom.pmf(guess + 1,
> n1 + n2, n1, n) > pExact :
>                 break;
>             elif pGuess <= pExact :
>                 min = guess
>             else :
>                 max = guess
>
>         if guess == -1 and min == max :
>             guess = min
>           return oddsRatio, pUpper + hypergeom.cdf(guess, n1 +
> n2, n1, n)

this return line has the wrong indent, it should be at the same column as the if guess == -1 and min == max : Note: I had to reformat the indent to 4 spaces because IDLE complained. In these cases, it would be better to also attach the file, so that I could be sure I have the same file as with copy and paste from the mail reader. Tell me if that solves the problem; I would think it's unlikely that your version of scipy is too old for this. Josef

> > def testFisherExact() :
>     """Just some tests to show that fisherExact() works correctly."""
>     def approxEqual(n1, n2) :
>         return abs(n1 - n2) < 0.01
>
>     res = fisherExact([[100, 2], [1000, 5]])
>     assert(approxEqual(res[1], 0.1301))
>     assert(approxEqual(res[0], 0.25))
>     res = fisherExact([[2, 7], [8, 2]])
>     assert(approxEqual(res[1], 0.0230141))
>     assert(approxEqual(res[0], 4.0 / 56))
>     res = fisherExact([[100, 2], [1000, 5]])
>     #assert(approxEqual(res[1], 0.1973244))
>         res = fisherExact([[5, 15], [20, 20]])
>     assert(approxEqual(res[1], 0.0958044))
>     res = fisherExact([[5, 16], [20, 25]])
>     assert(approxEqual(res[1], 0.1725862))
>     res = fisherExact([[10, 5], [10, 1]])
>
assert(approxEqual(res[1], 0.1973244)) > > D=[[6, 1], [1, 6]] > testFisherExact() > p=fisherExact(D) > print p > On Sun, Dec 20, 2009 at 10:37 AM, wrote: >> >> On Sun, Dec 20, 2009 at 12:26 PM, Pete Shepard >> wrote: >> > Hello, >> > >> > I am using a fisher exact test I got from >> > http://projects.scipy.org/scipy/attachment/ticket/956/fisher.py. This >> > program takes in two tuples and returns an odds ratio and a? p-value. >> > Most >> > tuples are handled nicely by the script but certain tuples eg "[[6, 1], >> > [1, >> > 6]]" return "None". I am wondering if anyone knows why this is true? >> >> I get this, with and without the patch of thomas in the comments >> >> >>> fisherExact([[6, 1], [1, 6]]) >> (36.0, 0.029137529137528768) >> >> Do you have an exact example? Which version of numpy and scipy? >> Looking at the code, I don't see any reason why it should return None. >> >> Josef >> > >> > TIA >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From peter.shepard at gmail.com Sun Dec 20 16:44:57 2009 From: peter.shepard at gmail.com (Pete Shepard) Date: Sun, 20 Dec 2009 13:44:57 -0800 Subject: [SciPy-User] fisherexact.py returns None In-Reply-To: <1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com> References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> <5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com> <1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com> Message-ID: 
<5c2c43620912201344i4bb6c739t22d159e287c3e631@mail.gmail.com>

I think the indent issue was due to copying and pasting also, attached is
the code.

On Sun, Dec 20, 2009 at 1:05 PM, wrote:
> On Sun, Dec 20, 2009 at 3:50 PM, Pete Shepard
> wrote:
> > Thanks for the reply Josef,
> >
> > Numpy version =1:1.3.0-3
> > SciPy version=0.7.0-2
> >
> > Here is the code I am using, interestingly line 99 chokes in the
> > testFisherExact() example, don't know if these problems are related
> >
> > [...]
>
> this return line has the wrong indent, it should be at the same column
> as the  if guess == -1 and min == max :
>
> Note: I had to reformat the indent to 4 spaces because IDLE
> complained. In these cases, it would be better to also attach the
> file; then I would be sure I have the same file as with copy and
> paste from the mail reader.
>
> Tell me if that solves the problem. I would think it's unlikely that
> your version of scipy is too old for this.
>
> Josef
>
> > [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fisher_exact.py
Type: text/x-python
Size: 3926 bytes
Desc: not available
URL: 

From josef.pktd at gmail.com Sun Dec 20 17:04:51 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 20 Dec 2009 17:04:51 -0500
Subject: [SciPy-User] fisherexact.py returns None
In-Reply-To: <5c2c43620912201344i4bb6c739t22d159e287c3e631@mail.gmail.com>
References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com>
	<1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com>
	<5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com>
	<1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com>
	<5c2c43620912201344i4bb6c739t22d159e287c3e631@mail.gmail.com>
Message-ID: <1cd32cbb0912201404n437a0e8dgfeb9effa35518650@mail.gmail.com>

On Sun, Dec 20, 2009 at 4:44 PM, Pete Shepard wrote:
> I think the indent issue was due to copying and pasting also, attached is
> the code.
your attached file has inconsistent indentation all over the place
(according to IDLE, which refuses to run the file, or maybe there are
additional tabs instead of white space)
your return statement is indented relative to the if, as mentioned in
my previous message. If you unindent it then the None disappears.

I attach my indent-corrected version.

What editor are you using? It is possible that you are seeing
different formatting than I do on Windows.

Josef

> On Sun, Dec 20, 2009 at 1:05 PM, wrote:
>> On Sun, Dec 20, 2009 at 3:50 PM, Pete Shepard
>> wrote:
>> > [...]
-------------- next part --------------
# -*- coding: utf-8 -*-
#!/usr/bin/python

import re
import sys, os
from scipy.stats import *
#from scipy import array,searchsorted, histogram
import re
import operator
import matplotlib
matplotlib.use('PDF')
import pylab

def fisherExact(c) :
    """Performs a Fisher exact test on a 2x2 contingency table in list of lists
       format.  Returns a tuple of (odds ratio, two-tailed P-value).

       Examples:
       >>> fisherExact([[100, 2], [1000, 5]])
       (0.25, 0.13007593634330314)
    """
    oddsRatio = c[0][0] * c[1][1] / float(c[1][0] * c[0][1])
    n1 = c[0][0] + c[0][1]
    n2 = c[1][0] + c[1][1]
    n  = c[0][0] + c[1][0]

    mode = int(float((n + 1) * (n1 + 1)) / (n1 + n2 + 2))
    pExact = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
    pMode = hypergeom.pmf(c[0][0], n1 + n2, n1, n)

    if c[0][0] == mode :
        return oddsRatio, 1.0
    elif c[0][0] < mode :
        pLower = hypergeom.cdf(c[0][0], n1 + n2, n1, n)

        # Binary search for where to begin upper half.
        min = mode
        max = n
        guess = -1
        while min != max :
            guess = max if (max == min + 1 and guess == min) else \
                    (max + min) / 2

            pGuess = hypergeom.pmf(guess, n1 + n2, n1, n)
            if pGuess <= pExact and hypergeom.pmf(guess - 1, n1 + n2, n1, n) > pExact :
                break
            elif pGuess < pExact :
                max = guess
            else :
                min = guess

        if guess == -1 and min == max :
            guess = min

        return oddsRatio, pLower + hypergeom.sf(guess - 1, n1 + n2, n1, n)
    else :
        pUpper = hypergeom.sf(c[0][0] - 1, n1 + n2, n1, n);

        # Special case to prevent binary search from getting stuck.
        if hypergeom.pmf(0, n1 + n2, n1, n) > pExact :
            return oddsRatio, pUpper

        # Binary search for where to begin lower half.
        min = 0
        max = mode
        guess = -1
        while min != max :
            guess = max if (max == min + 1 and guess == min) else \
                    (max + min) / 2
            pGuess = hypergeom.pmf(guess, n1 + n2, n1, n);

            if pGuess <= pExact and hypergeom.pmf(guess + 1, n1 + n2, n1, n) > pExact :
                break;
            elif pGuess <= pExact :
                min = guess
            else :
                max = guess

        if guess == -1 and min == max :
            guess = min
        return oddsRatio, pUpper + hypergeom.cdf(guess, n1 + n2, n1, n)


def testFisherExact() :
    """Just some tests to show that fisherExact() works correctly."""
    def approxEqual(n1, n2) :
        return abs(n1 - n2) < 0.01

    res = fisherExact([[100, 2], [1000, 5]])
    assert(approxEqual(res[1], 0.1301))
    assert(approxEqual(res[0], 0.25))
    res = fisherExact([[2, 7], [8, 2]])
    assert(approxEqual(res[1], 0.0230141))
    assert(approxEqual(res[0], 4.0 / 56))
    res = fisherExact([[100, 2], [1000, 5]])
    #assert(approxEqual(res[1],  0.1973244))
    res = fisherExact([[5, 15], [20, 20]])
    assert(approxEqual(res[1], 0.0958044))
    res = fisherExact([[5, 16], [20, 25]])
    assert(approxEqual(res[1], 0.1725862))
    res = fisherExact([[10, 5], [10, 1]])
    assert(approxEqual(res[1], 0.1973244))

D=[[6, 1], [1, 6]]
testFisherExact()
p=fisherExact(D)
print p

From peter.shepard at gmail.com Sun Dec 20 20:07:28 2009
From: peter.shepard at gmail.com (Pete Shepard)
Date: Sun, 20 Dec 2009 17:07:28 -0800
Subject: [SciPy-User] fisherexact.py returns None
In-Reply-To: <1cd32cbb0912201404n437a0e8dgfeb9effa35518650@mail.gmail.com>
References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com>
	<1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com>
	<5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com>
	<1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com>
	<5c2c43620912201344i4bb6c739t22d159e287c3e631@mail.gmail.com>
Message-ID: <5c2c43620912201707l5ff5e3aauc4947540bc1053e3@mail.gmail.com>

Your corrected version seems to work. Strangely though, when I uncomment line
99 it fails?
98    res = fisherExact([[100, 2], [1000, 5]])
99    assert(approxEqual(res[1], 0.1973244))

On Sun, Dec 20, 2009 at 2:04 PM, wrote:
> On Sun, Dec 20, 2009 at 4:44 PM, Pete Shepard
> wrote:
> > I think the indent issue was due to copying and pasting also, attached is
> > the code.
>
> your attached file has inconsistent indentation all over the place
> (according to IDLE, which refuses to run the file, or maybe there are
> additional tabs instead of white space)
> your return statement is indented relative to the if, as mentioned in
> my previous message. If you unindent it then the None disappears.
>
> I attach my indent-corrected version.
>
> What editor are you using? It is possible that you are seeing
> different formatting than I do on Windows.
>
> Josef
>
> > [...]
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sungsuha at gmail.com Sun Dec 20 20:19:17 2009
From: sungsuha at gmail.com (Ewan Maxwell)
Date: Mon, 21 Dec 2009 01:19:17 +0000 (UTC)
Subject: [SciPy-User] Numpy Building(AMD64)
Message-ID: 

Hi!
I cannot build Numpy, which I want for PyCuda.

I have Windows 7 (x64) and PSDK 6.1 (tried 7, didn't work either)

Compilers:
VS 2008 (Not Express)
gfortran (http://gcc.gnu.org/wiki/GFortranBinaries;
http://quatramaran.ens.fr/~coudert/gfortran/gfortran-windows.exe)
Python 2.6.4 (AMD64)

I downloaded the numpy-1.3.0 source.
I try
python.exe setup.py install
I get this

C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe /DLL /nologo
/INCREMENTAL:NO /LIBPATH:C:\Apps\Python26\libs /LIBPATH:C:\Apps\Python26\PCbuild\amd64
/LIBPATH:build\temp.win-amd64-2.6 /EXPORT:init_sort
build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sortmodule.obj
/OUT:build\lib.win-amd64-2.6\numpy\core\_sort.pyd
/IMPLIB:build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.lib
/MANIFESTFILE:build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.pyd.manifest
   Creating library build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.lib
and object build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.exp
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyErr_Format
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyExc_RuntimeError
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyCObject_AsVoidPtr
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyCObject_Type
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyObject_GetAttrString
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyImport_ImportModule
referenced in function __import_array
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyErr_NoMemory
referenced in function _BOOL_mergesort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyMem_Free
referenced in function _BOOL_amergesort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyMem_Malloc
referenced in function _BOOL_amergesort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyErr_SetString
referenced in function _init_sort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyExc_ImportError
referenced in function _init_sort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__PyErr_Print
referenced in function _init_sort
_sortmodule.obj : error LNK2019: unresolved external symbol __imp__Py_InitModule4
referenced in function _init_sort
build\lib.win-amd64-2.6\numpy\core\_sort.pyd : fatal error LNK1120: 13 unresolved externals
error: Command "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.exe
/DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Apps\Python26\libs
/LIBPATH:C:\Apps\Python26\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.6
/EXPORT:init_sort
build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sortmodule.obj
/OUT:build\lib.win-amd64-2.6\numpy\core\_sort.pyd
/IMPLIB:build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.lib
/MANIFESTFILE:build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.pyd.manifest"
failed with exit status 1120

What am I doing wrong? What can I do better?

From sungsuha at gmail.com Sun Dec 20 22:16:08 2009
From: sungsuha at gmail.com (Ewan Maxwell)
Date: Mon, 21 Dec 2009 03:16:08 +0000 (UTC)
Subject: [SciPy-User] Numpy Building(AMD64)
References: 
Message-ID: 

Ewan Maxwell gmail.com> writes:

> build\temp.win-amd64-2.6\Release\build\src.win-amd64-2.6\numpy\core\src\_sort.pyd.manifest"
> failed with exit status 1120
>
> What am I doing wrong? What can I do better?

Build ended (with running install_egg_info writing~~ egg-info being the
last lines).
I will follow up on whether this is, in fact, successful.
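For a build that appears to have finished, a quick way to check it actually works is to import the freshly installed package and exercise a little of the compiled core. This is only a sketch, not the project's own verification procedure; run it from outside the numpy source directory, or the interpreter may pick up the unbuilt source tree instead of the installed package:

```python
# Minimal smoke test for a fresh numpy build (a sketch, not the full suite).
import numpy

print(numpy.__version__)      # should report the version that was just built

# Exercise a bit of the compiled core: array creation, a ufunc reduction,
# and the sort machinery that the _sort extension provides.
a = numpy.arange(10)
assert a.sum() == 45
assert numpy.sort(numpy.array([3, 1, 2])).tolist() == [1, 2, 3]
```

The full self-test would be `numpy.test()`, which at the time required the nose package to be installed.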
From josef.pktd at gmail.com Sun Dec 20 23:20:16 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 20 Dec 2009 23:20:16 -0500 Subject: [SciPy-User] fisherexact.py returns None In-Reply-To: <5c2c43620912201707l5ff5e3aauc4947540bc1053e3@mail.gmail.com> References: <5c2c43620912200926p2e1e959fv17dda36d2aedfa62@mail.gmail.com> <1cd32cbb0912201037n95e9eagbab15e5f2ac2beee@mail.gmail.com> <5c2c43620912201250n4b9a6c26k9e8c3d7fb2f9c987@mail.gmail.com> <1cd32cbb0912201305t592b92del87c4b702b60a8be5@mail.gmail.com> <5c2c43620912201344i4bb6c739t22d159e287c3e631@mail.gmail.com> <1cd32cbb0912201404n437a0e8dgfeb9effa35518650@mail.gmail.com> <5c2c43620912201707l5ff5e3aauc4947540bc1053e3@mail.gmail.com> Message-ID: <1cd32cbb0912202020t97c2714q259430e1d963c81b@mail.gmail.com> On Sun, Dec 20, 2009 at 8:07 PM, Pete Shepard wrote: > Your corrected version seems to work. Strangely though when I uncomment line > 99 it fails? > > 98 ?? res = fisherExact([[100, 2], [1000, 5]]) > 99 ?? assert(approxEqual(res[1],? 0.1973244)) doesn't seem strange to me. 0.1973244 is the answer for a different matrix in the original file of the ticket >>> def approxEqual(n1, n2) : return abs(n1 - n2) < 0.01 >>> fisherExact([[100, 2], [1000, 5]]) (0.25, 0.13007593634330314) >>> res = fisherExact([[5, 1], [10, 10]]) >>> assert(approxEqual(res[1], 0.1973244)) >>> res (5.0, 0.19732441471571543) Josef > > On Sun, Dec 20, 2009 at 2:04 PM, wrote: >> >> On Sun, Dec 20, 2009 at 4:44 PM, Pete Shepard >> wrote: >> > I think the indent issue was due to copying and pasting also, attached >> > is >> > the code. >> >> your attached file has inconsistent indentation all over the place >> (according to IDLE, which refuses to run the file, or maybe there are >> additional tabs instead of white space) >> your return statement is indented relative to the if , as mentioned in >> my previous message. If you unindent then the None disappears. >> >> I attach my indent corrected version. 
>>
>> What editor are you using? It is possible that you are seeing
>> different formatting than I do on Windows.
>>
>> Josef
>>
>> > On Sun, Dec 20, 2009 at 1:05 PM, wrote:
>> >>
>> >> On Sun, Dec 20, 2009 at 3:50 PM, Pete Shepard
>> >> wrote:
>> >> > Thanks for the reply Josef,
>> >> >
>> >> > Numpy version = 1:1.3.0-3
>> >> > SciPy version = 0.7.0-2
>> >> >
>> >> > Here is the code I am using; interestingly, line 99 chokes in the
>> >> > testFisherExact() example. I don't know if these problems are related.
>> >> >
>> >> > # -*- coding: utf-8 -*-
>> >> > #!/usr/bin/python
>> >> >
>> >> > import re
>> >> > import sys, os
>> >> > from scipy.stats import *
>> >> > #from scipy import array, searchsorted, histogram
>> >> > import re
>> >> > import operator
>> >> > import matplotlib
>> >> > matplotlib.use('PDF')
>> >> > import pylab
>> >> >
>> >> > def fisherExact(c) :
>> >> >     """Performs a Fisher exact test on a 2x2 contingency table in
>> >> >        list of lists format.  Returns a tuple of (odds ratio,
>> >> >        two-tailed P-value).
>> >> >
>> >> >        Examples:
>> >> >        >>> fisherExact([[100, 2], [1000, 5]])
>> >> >        (0.25, 0.13007593634330314)
>> >> >        """
>> >> >     oddsRatio = c[0][0] * c[1][1] / float(c[1][0] * c[0][1])
>> >> >     n1 = c[0][0] + c[0][1]
>> >> >     n2 = c[1][0] + c[1][1]
>> >> >     n  = c[0][0] + c[1][0]
>> >> >
>> >> >     mode = int(float((n + 1) * (n1 + 1)) / (n1 + n2 + 2))
>> >> >     pExact = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
>> >> >     pMode = hypergeom.pmf(c[0][0], n1 + n2, n1, n)
>> >> >
>> >> >     if c[0][0] == mode :
>> >> >         return oddsRatio, 1.0
>> >> >     elif c[0][0] < mode :
>> >> >         pLower = hypergeom.cdf(c[0][0], n1 + n2, n1, n)
>> >> >
>> >> >         # Binary search for where to begin upper half.
>> >> >         min = mode
>> >> >         max = n
>> >> >         guess = -1
>> >> >         while min != max :
>> >> >             guess = max if (max == min + 1 and guess == min) else \
>> >> >                         (max + min) / 2
>> >> >
>> >> >             pGuess = hypergeom.pmf(guess, n1 + n2, n1, n)
>> >> >             if pGuess <= pExact and hypergeom.pmf(guess - 1, n1 + n2, n1, n) > pExact :
>> >> >                 break
>> >> >             elif pGuess < pExact :
>> >> >                 max = guess
>> >> >             else :
>> >> >                 min = guess
>> >> >
>> >> >         if guess == -1 and min == max :
>> >> >             guess = min
>> >> >
>> >> >         return oddsRatio, pLower + hypergeom.sf(guess - 1, n1 + n2, n1, n)
>> >> >     else :
>> >> >         pUpper = hypergeom.sf(c[0][0] - 1, n1 + n2, n1, n);
>> >> >
>> >> >         # Special case to prevent binary search from getting stuck.
>> >> >         if hypergeom.pmf(0, n1 + n2, n1, n) > pExact :
>> >> >             return oddsRatio, pUpper
>> >> >
>> >> >         # Binary search for where to begin lower half.
>> >> >         min = 0
>> >> >         max = mode
>> >> >         guess = -1
>> >> >         while min != max :
>> >> >             guess = max if (max == min + 1 and guess == min) else \
>> >> >                         (max + min) / 2
>> >> >             pGuess = hypergeom.pmf(guess, n1 + n2, n1, n);
>> >> >
>> >> >             if pGuess <= pExact and hypergeom.pmf(guess + 1, n1 + n2, n1, n) > pExact :
>> >> >                 break;
>> >> >             elif pGuess <= pExact :
>> >> >                 min = guess
>> >> >             else :
>> >> >                 max = guess
>> >> >
>> >> >         if guess == -1 and min == max :
>> >> >             guess = min
>> >> >                 return oddsRatio, pUpper + hypergeom.cdf(guess, n1 + n2, n1, n)
>> >>
>> >> This return line has the wrong indent; it should be at the same column
>> >> as the  if guess == -1 and min == max :
>> >>
>> >> Note: I had to reformat the indent to 4 spaces because IDLE
>> >> complained. In these cases, it would be better to also attach the
>> >> file; then I would be sure I have the same file as with copy and
>> >> paste from the mail reader.
>> >>
>> >> Tell me if that solves the problem. I would think it's unlikely that
>> >> your version of scipy is too old for this.
>> >>
>> >> Josef
>> >>
>> >> > def testFisherExact() :
>> >> >     """Just some tests to show that fisherExact() works correctly."""
>> >> >     def approxEqual(n1, n2) :
>> >> >         return abs(n1 - n2) < 0.01
>> >> >
>> >> >     res = fisherExact([[100, 2], [1000, 5]])
>> >> >     assert(approxEqual(res[1], 0.1301))
>> >> >     assert(approxEqual(res[0], 0.25))
>> >> >     res = fisherExact([[2, 7], [8, 2]])
>> >> >     assert(approxEqual(res[1], 0.0230141))
>> >> >     assert(approxEqual(res[0], 4.0 / 56))
>> >> >     res = fisherExact([[100, 2], [1000, 5]])
>> >> >     #assert(approxEqual(res[1],  0.1973244))
>> >> >         res = fisherExact([[5, 15], [20, 20]])
>> >> >     assert(approxEqual(res[1], 0.0958044))
>> >> >     res = fisherExact([[5, 16], [20, 25]])
>> >> >     assert(approxEqual(res[1], 0.1725862))
>> >> >     res = fisherExact([[10, 5], [10, 1]])
>> >> >     assert(approxEqual(res[1], 0.1973244))
>> >> >
>> >> > D=[[6, 1], [1, 6]]
>> >> > testFisherExact()
>> >> > p=fisherExact(D)
>> >> > print p
>> >> > On Sun, Dec 20, 2009 at 10:37 AM, wrote:
>> >> >>
>> >> >> On Sun, Dec 20, 2009 at 12:26 PM, Pete Shepard
>> >> >> wrote:
>> >> >> > Hello,
>> >> >> >
>> >> >> > I am using a fisher exact test I got from
>> >> >> > http://projects.scipy.org/scipy/attachment/ticket/956/fisher.py.
>> >> >> > This program takes in two tuples and returns an odds ratio and a
>> >> >> > p-value. Most tuples are handled nicely by the script, but certain
>> >> >> > tuples, e.g. "[[6, 1], [1, 6]]", return "None". I am wondering if
>> >> >> > anyone knows why this is true?
>> >> >>
>> >> >> I get this, with and without the patch of Thomas in the comments:
>> >> >>
>> >> >> >>> fisherExact([[6, 1], [1, 6]])
>> >> >> (36.0, 0.029137529137528768)
>> >> >>
>> >> >> Do you have an exact example? Which version of numpy and scipy?
>> >> >> Looking at the code, I don't see any reason why it should return
>> >> >> None.
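The two-sided test these messages exercise can also be written as a direct enumeration: sum the probability of every table with the same margins whose probability is no larger than the observed table's. The sketch below is editorial (pure Python with `math.comb`, not the ticket's binary-search code) and is only meant as a cross-check:

```python
from math import comb

def fisher_exact_2x2(table):
    """Two-sided Fisher exact test on a 2x2 table (enumeration sketch).

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability is no larger than the observed table's.
    """
    (a, b), (c, d) = table
    n1, n2, n = a + b, c + d, a + c          # row totals, first-column total

    def pmf(k):
        # P(k first-row items in the first column, margins fixed)
        return comb(n1, k) * comb(n2, n - k) / comb(n1 + n2, n)

    odds_ratio = a * d / (b * c) if b * c else float("inf")
    p_obs = pmf(a)
    support = range(max(0, n - n2), min(n, n1) + 1)
    p_two = sum(pmf(k) for k in support if pmf(k) <= p_obs * (1 + 1e-7))
    return odds_ratio, min(p_two, 1.0)

print(fisher_exact_2x2([[100, 2], [1000, 5]]))  # ~ (0.25, 0.1300759...)
print(fisher_exact_2x2([[6, 1], [1, 6]]))       # ~ (36.0, 0.0291375...)
```

For [[6, 1], [1, 6]] this reproduces the (36.0, 0.0291375...) pair Josef quotes, and for [[100, 2], [1000, 5]] the 0.1300759... p-value from the docstring, which supports his point that the algorithm itself does not return None.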
>> >> >> Josef
>> >> >> >
>> >> >> > TIA
>> >> >> >
>> >> >> > _______________________________________________
>> >> >> > SciPy-User mailing list
>> >> >> > SciPy-User at scipy.org
>> >> >> > http://mail.scipy.org/mailman/listinfo/scipy-user
>> >> >> >
>> >> >> _______________________________________________
>> >> >> SciPy-User mailing list
>> >> >> SciPy-User at scipy.org
>> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user
>> >> >
>> >> > _______________________________________________
>> >> > SciPy-User mailing list
>> >> > SciPy-User at scipy.org
>> >> > http://mail.scipy.org/mailman/listinfo/scipy-user
>> >> >
>> >> _______________________________________________
>> >> SciPy-User mailing list
>> >> SciPy-User at scipy.org
>> >> http://mail.scipy.org/mailman/listinfo/scipy-user
>> >
>> > _______________________________________________
>> > SciPy-User mailing list
>> > SciPy-User at scipy.org
>> > http://mail.scipy.org/mailman/listinfo/scipy-user
>> >
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From massimodisasha at yahoo.it  Mon Dec 21 07:17:18 2009
From: massimodisasha at yahoo.it (Massimo Di Stefano)
Date: Mon, 21 Dec 2009 13:17:18 +0100
Subject: [SciPy-User] scipy svn on mac osx 10.6
Message-ID: 

Hi,

I'm trying to install scipy from svn source on a Mac 10.6.

numpy installs without problems, but scipy gives me errors about a
missing "umfpack".
Thanks for any hints!
This is the error log:

#########################

cd /Users/Shared/source/scipy/
MacBook-Pro-15-di-Massimo-Di-Stefano:scipy sasha$ LDFLAGS="-lgfortran -arch x86_64 -arch i386" FFLAGS="-arch x86_64 -arch i386" python setup.py build
Warning: No configuration returned, assuming unavailable.
blas_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec']

umfpack_info:
  libraries umfpack not found in /System/Library/Frameworks/Python.framework/Versions/2.6/lib
  libraries umfpack not found in /usr/local/lib
  libraries umfpack not found in /usr/lib
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [umfpack]) or by setting the
    UMFPACK environment variable.
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE Traceback (most recent call last): File "setup.py", line 160, in setup_package() File "setup.py", line 152, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", line 150, in setup config = configuration() File "setup.py", line 118, in configuration config.add_subpackage('scipy') File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 781, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 20, in configuration config.add_subpackage('special') File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 766, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "scipy/special/setup.py", line 7, in from numpy.distutils.misc_util import get_numpy_include_dirs, get_info ImportError: cannot import name get_info ######################### From massimodisasha at yahoo.it Mon Dec 21 12:20:46 2009 From: massimodisasha at yahoo.it (Massimo Di Stefano) Date: Mon, 21 Dec 2009 18:20:46 +0100 Subject: [SciPy-User] build svn scipy on osx 10.6 Message-ID: <41D80739-5D94-4228-84C0-E4D6E92D5115@yahoo.it> Hi, 
I'm trying to install scipy from svn on Mac OS X 10.6.

To build numpy I had to remove the system numpy; then, to have 64-bit
support, I had to install gfortran from the macscience site.

The build ends without errors for both numpy and scipy, but while
numpy.test gives me no errors:

----------------------------------------------------------------------
Ran 2496 tests in 11.179s

OK (KNOWNFAIL=5, SKIP=1)

scipy ends with lots of errors:

http://www.geofemengineering.it/data/scipy_test_log.txt

Importing linalg and io gave me a segfault:

In [1]: from scipy import io, linalg
Segmentation fault

Thanks for any help!

Massimo

From dwf at cs.toronto.edu  Mon Dec 21 12:26:05 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 21 Dec 2009 12:26:05 -0500
Subject: [SciPy-User] scipy svn on mac osx 10.6
In-Reply-To: 
References: 
Message-ID: 

On 21-Dec-09, at 7:17 AM, Massimo Di Stefano wrote:

>
> Hi,
>
> I'm trying to install scipy from svn source on a Mac 10.6.
>
> numpy installs without problems, but scipy gives me errors about a
> missing "umfpack".
> Thanks for any hints!

UMFPACK isn't the problem; it's an optional dependency, and that is just
a warning. The actual error is in importing a numpy function, get_info.
It looks like NumPy didn't install correctly, and the (rather old)
version that ships with OS X is interfering.

David

From massimodisasha at yahoo.it  Mon Dec 21 13:04:35 2009
From: massimodisasha at yahoo.it (Massimo Di Stefano)
Date: Mon, 21 Dec 2009 19:04:35 +0100
Subject: [SciPy-User] SciPy-User Digest, Vol 76, Issue 37
In-Reply-To: 
References: 
Message-ID: <2BB3F2F8-5513-4C0C-96BE-747E57B2AA5E@yahoo.it>

Yes, true. I had to remove the old version; now numpy installs without
errors. I tried the test and all seems to work fine.

I sent a second mail to the list about the scipy build: I had to remove
my old gfortran in favour of a recent 64-bit version.
The build ends without errors, but unluckily the test fails and I get a
Python segfault trying to import the linear algebra solver and the io
functions.

Thanks,
Massimo.

On 21 Dec 2009, at 19:00, scipy-user-request at scipy.org wrote:

> Send SciPy-User mailing list submissions to
> 	scipy-user at scipy.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	http://mail.scipy.org/mailman/listinfo/scipy-user
> or, via email, send a message with subject or body 'help' to
> 	scipy-user-request at scipy.org
>
> You can reach the person managing the list at
> 	scipy-user-owner at scipy.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of SciPy-User digest..."
>
>
> Today's Topics:
>
>    1. Re: scipy svn on mac osx 10.6 (David Warde-Farley)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 21 Dec 2009 12:26:05 -0500
> From: David Warde-Farley
> Subject: Re: [SciPy-User] scipy svn on mac osx 10.6
> To: SciPy Users List
> Message-ID: 
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
>
>
> On 21-Dec-09, at 7:17 AM, Massimo Di Stefano wrote:
>
>>
>> Hi,
>>
>> I'm trying to install scipy from svn source on a Mac 10.6.
>>
>> numpy installs without problems, but scipy gives me errors about a
>> missing "umfpack".
>> Thanks for any hints!
>
> UMFPACK isn't the problem; it's an optional dependency, and that is just
> a warning. The actual error is in importing a numpy function, get_info.
> It looks like NumPy didn't install correctly, and the (rather old)
> version that ships with OS X is interfering.
> David
>
>
> ------------------------------
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
> End of SciPy-User Digest, Vol 76, Issue 37
> ******************************************

From denis-bz-gg at t-online.de  Mon Dec 21 12:48:47 2009
From: denis-bz-gg at t-online.de (denis)
Date: Mon, 21 Dec 2009 09:48:47 -0800 (PST)
Subject: [SciPy-User] basic spline interpolation in a few lines of numpy
Message-ID: 

Folks,
  I've put a couple of tiny spline functions in
http://advice.mechanicalkern.com; their aim is to help students get
started with splines, with the aid of numpy. (They may, though,
interpolate between two stools: unclear for beginners, trivial for
experts.) Comments welcome.

""" basic spline interpolation in a few lines of numpy.
    Keywords: interpolate, resample, B-spline, Catmull-Rom, numpy,
    getting-started

    Examples:
        t = arange( 1000+1 )
        points = f(t)  # points in 1d 2d 3d ...
        curve = dotspline1( points, times=10 )  # at 0 .1 .2 ... 999.9 1000
        surface = dotspline2( array, times=10 )  # an array of e.g. pixels
    ...

cheers
  -- denis

From nwagner at iam.uni-stuttgart.de  Mon Dec 21 14:52:05 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 21 Dec 2009 20:52:05 +0100
Subject: [SciPy-User] Flow chart generator
Message-ID: 

Hi all,

I am looking for a free flow chart generator for (C, C++, Python).
Any pointers would be appreciated.

Thanks in advance,
Nils

From timmichelsen at gmx-topmail.de  Mon Dec 21 15:03:11 2009
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 21 Dec 2009 21:03:11 +0100
Subject: [SciPy-User] Flow chart generator
In-Reply-To: 
References: 
Message-ID: 

> I am looking for a free flow chart generator for (C, C++, Python).
> Any pointers would be appreciated.

I am also interested ;-) Please share whatever you find. Thanks.
From mailinglist.honeypot at gmail.com Mon Dec 21 15:46:58 2009 From: mailinglist.honeypot at gmail.com (Steve Lianoglou) Date: Mon, 21 Dec 2009 15:46:58 -0500 Subject: [SciPy-User] scipy svn on mac osx 10.6 In-Reply-To: References: Message-ID: Btw, I just installed a fat 32/64-bit "python workbench" from source: meaning, my own 64bit python (not the one that comes w/ os x), numpy (with umfpack + fftw3 ) and scipy ... no dependencies on macports or anything on snow leopard. Here's a list of links I found very handy: * Compiling readline: http://techdebug.com/blog/2009/01/03/compiling-readline-on-an-osx-105-intel-x86_64/ * 64bit python: http://www.object2.us/python/install-64bit-python-mod_wsgi-on-mac-os-x-105-leopard * 64bit scipy/numpy: http://blog.hyperjeff.net/?p=160 The first two links tell you to set MACOSX_DEPLOYMENT_TARGET=10.5 , because they were written for Leopard, I just set that environment flag to 10.6 All of those resources were very helpful, that last one especially so. There are a lot of comments at the bottom of the post with people identifying problems with instructions, and their fix (I believe the blog text itself is updated to reflect them, but if you're stumbling on something, skim the comments is all). Hope that helps, -steve On Mon, Dec 21, 2009 at 12:26 PM, David Warde-Farley wrote: > > > On 21-Dec-09, at 7:17 AM, Massimo Di Stefano wrote: > >> >> Hi, >> >> I'm tring to install scipy from svn source on a mac 10.6 >> >> numpy install without problems, but scipy give me errors about >> missed "umfpack" >> thanks for any hints ! > > UMFPACK isn't the problem; it's an optional dependency, that is just a > warning. The actual error is in importing a numpy function get_info. > It looks like NumPy didn't install correctly, and the (rather old) > version that ships with OS X is interfering. 
> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
  | Memorial Sloan-Kettering Cancer Center
  | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

From cournape at gmail.com  Mon Dec 21 17:34:08 2009
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 22 Dec 2009 07:34:08 +0900
Subject: [SciPy-User] build svn scipy on osx 10.6
In-Reply-To: <41D80739-5D94-4228-84C0-E4D6E92D5115@yahoo.it>
References: <41D80739-5D94-4228-84C0-E4D6E92D5115@yahoo.it>
Message-ID: <5b8d13220912211434j49dd0d13m61decc520223203b@mail.gmail.com>

On Tue, Dec 22, 2009 at 2:20 AM, Massimo Di Stefano wrote:
> Hi,
>
> I'm trying to install scipy from svn on Mac OS X 10.6.
>
> To build numpy I had to remove the system numpy;
> then, to have 64-bit support, I had to install gfortran from the macscience site.

The test errors are caused by setuptools being stupid: for some reason
totally beyond me, setuptools (or easy_install) thinks it is a good
idea to make the test scripts executable, when they should not be.
Use setup.py; there is no reason to use setuptools unless you know what
you are doing, and installing as an egg has no use in a normal
configuration.

>
> Importing linalg and io gave me a segfault
>
> In [1]: from scipy import io, linalg
> Segmentation fault

What most likely happened is that you did not rebuild scipy from
scratch: remove the installed scipy (in /Library/....), remove the
build directory of scipy, and then rebuild and reinstall scipy.
cheers, David From ryanlists at gmail.com Mon Dec 21 20:53:19 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 Dec 2009 19:53:19 -0600 Subject: [SciPy-User] equivalent of tolist().index(entry) for numpy 1d array of strings Message-ID: I wrote some code to work with csv spreadsheet files by reading the columns into lists, but I need to rework the code to work with numpy 1d arrays of strings rather than lists. I need to search one of these columns/arrays. What is the best way to find the index for the element that matches a certain string (or maybe just the first element to match such a string)? With the columns as lists, I was doing index = mylist.index(entry) So, I could obviously do index = mylist.tolist().index(entry) but I don't know if that would be slower or clumsier than something like bool_vect = where(mylist==entry)[0] index = bool_vect[0] or just index = where(mylist==entry)[0][0] Any thoughts? Is there an easier way? Thanks, Ryan From ryanlists at gmail.com Mon Dec 21 21:09:17 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 21 Dec 2009 20:09:17 -0600 Subject: [SciPy-User] equivalent of tolist().index(entry) for numpy 1d array of strings In-Reply-To: References: Message-ID: I am still open to more elegant solutions, but it seems like my concerns about .tolist() being inefficient are unfounded (this may be an indicator that I don't understand the inner workings of numpy very well). Here is my test: t1 = time.time() index1 = where(self.md5sum==photo.md5sum)[0][0] t2 = time.time() index2 = mysearch(self.md5sum, photo.md5sum) t3 = time.time() index3 = self.md5sum.tolist().index(photo.md5sum) t4 = time.time() All 3 approaches lead to the same result. 
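Ryan's question above (the first index of a matching string in a NumPy string array) can be wrapped in a small helper. This is an editorial sketch; `first_match` and the sample data are illustrative, not code from the thread:

```python
import numpy as np

def first_match(arr, value):
    """Index of the first element equal to `value`, or -1 if absent.

    The comparison is evaluated over the whole array (no early exit,
    unlike list.index), but everything stays inside NumPy.
    """
    hits = np.flatnonzero(arr == value)
    return int(hits[0]) if hits.size else -1

names = np.array(["ab12", "cd34", "ef56"])
print(first_match(names, "cd34"))   # -> 1
print(first_match(names, "zz99"))   # -> -1
```

Unlike `tolist().index(entry)`, this never raises on a missing entry, which sidesteps the crash-on-absent behavior discussed below.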
Here are my timing results: t2-t1=4.81605529785e-05 t3-t2=4.98294830322e-05 t4-t3=2.00271606445e-05 def mysearch(arrayin, element): bool_vect = where(arrayin==element)[0] assert(len(bool_vect)==1), 'Did not find exactly 1 match for ' + str(element) return bool_vect[0] Now, for this test, the arrays didn't have very many elements (10 ish). FWIW, Ryan On Mon, Dec 21, 2009 at 7:53 PM, Ryan Krauss wrote: > I wrote some code to work with csv spreadsheet files by reading the > columns into lists, but I need to rework the code to work with numpy > 1d arrays of strings rather than lists. ?I need to search one of these > columns/arrays. ?What is the best way to find the index for the > element that matches a certain string (or maybe just the first element > to match such a string)? > > With the columns as lists, I was doing > index = mylist.index(entry) > > So, I could obviously do > index = mylist.tolist().index(entry) > > but I don't know if that would be slower or clumsier than something like > bool_vect = where(mylist==entry)[0] > index = bool_vect[0] > > or just > > index = where(mylist==entry)[0][0] > > Any thoughts? ?Is there an easier way? > > Thanks, > > Ryan > From kwgoodman at gmail.com Mon Dec 21 21:27:05 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 21 Dec 2009 18:27:05 -0800 Subject: [SciPy-User] equivalent of tolist().index(entry) for numpy 1d array of strings In-Reply-To: References: Message-ID: On Mon, Dec 21, 2009 at 6:09 PM, Ryan Krauss wrote: > I am still open to more elegant solutions, but it seems like my > concerns about .tolist() being inefficient are unfounded (this may be > an indicator that I don't understand the inner workings of numpy very > well). 
>
> Here is my test:
>
> t1 = time.time()
> index1 = where(self.md5sum==photo.md5sum)[0][0]
> t2 = time.time()
> index2 = mysearch(self.md5sum, photo.md5sum)
> t3 = time.time()
> index3 = self.md5sum.tolist().index(photo.md5sum)
> t4 = time.time()

If you are using ipython then it is handy, and more accurate, to use
timeit. At the ipython prompt try:

timeit where(self.md5sum==photo.md5sum)[0][0]

>
> All 3 approaches lead to the same result.  Here are my timing results:
> t2-t1=4.81605529785e-05
> t3-t2=4.98294830322e-05
> t4-t3=2.00271606445e-05
>
> def mysearch(arrayin, element):
>     bool_vect = where(arrayin==element)[0]
>     assert(len(bool_vect)==1), 'Did not find exactly 1 match for ' +
> str(element)
>     return bool_vect[0]

If element is not in arrayin then mysearch will crash. Same for .index.

>
>> Now, for this test, the arrays didn't have very many elements (10 ish).
>>
>> FWIW,
>>
>> Ryan
>>
>> On Mon, Dec 21, 2009 at 7:53 PM, Ryan Krauss wrote:
>>> I wrote some code to work with csv spreadsheet files by reading the
>>> columns into lists, but I need to rework the code to work with numpy
>>> 1d arrays of strings rather than lists.  I need to search one of these
>>> columns/arrays.  What is the best way to find the index for the
>>> element that matches a certain string (or maybe just the first element
>>> to match such a string)?
>>>
>>> With the columns as lists, I was doing
>>> index = mylist.index(entry)
>>>
>>> So, I could obviously do
>>> index = mylist.tolist().index(entry)
>>>
>>> but I don't know if that would be slower or clumsier than something like
>>> bool_vect = where(mylist==entry)[0]
>>> index = bool_vect[0]
>>>
>>> or just
>>>
>>> index = where(mylist==entry)[0][0]
>>>
>>> Any thoughts?  Is there an easier way?
>> >> Thanks, >> >> Ryan >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Dec 21 23:39:11 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 21 Dec 2009 23:39:11 -0500 Subject: [SciPy-User] equivalent of tolist().index(entry) for numpy 1d array of strings In-Reply-To: References: Message-ID: <1cd32cbb0912212039v4e18d818h5b5e97b673701dcc@mail.gmail.com> On Mon, Dec 21, 2009 at 9:09 PM, Ryan Krauss wrote: > I am still open to more elegant solutions, but it seems like my > concerns about .tolist() being inefficient are unfounded (this may be > an indicator that I don't understand the inner workings of numpy very > well). > > Here is my test: > > t1 = time.time() > index1 = where(self.md5sum==photo.md5sum)[0][0] "where" finds all matching elements, there is no early stopping. I think there was a similar argument recently that python is faster if early stopping/return is desired. > t2 = time.time() > index2 = mysearch(self.md5sum, photo.md5sum) > t3 = time.time() > index3 = self.md5sum.tolist().index(photo.md5sum) index finds only the first match, and then can return immediately. I guess, if you only care about the first element that matches the condition, then python might always be faster than numpy. The advantage of "where" would be, if you need to do something with all elements that match, e.g. replace them with something else. Josef > t4 = time.time() > > All 3 approaches lead to the same result. ?Here are my timing results: > t2-t1=4.81605529785e-05 > t3-t2=4.98294830322e-05 > t4-t3=2.00271606445e-05 > > def mysearch(arrayin, element): > ? ?bool_vect = where(arrayin==element)[0] > ? ?assert(len(bool_vect)==1), 'Did not find exactly 1 match for ' + > str(element) > ? ?return bool_vect[0] > > Now, for this test, the arrays didn't have very many elements (10 ish). 
> > FWIW, > > Ryan > > On Mon, Dec 21, 2009 at 7:53 PM, Ryan Krauss wrote: >> I wrote some code to work with csv spreadsheet files by reading the >> columns into lists, but I need to rework the code to work with numpy >> 1d arrays of strings rather than lists. ?I need to search one of these >> columns/arrays. ?What is the best way to find the index for the >> element that matches a certain string (or maybe just the first element >> to match such a string)? >> >> With the columns as lists, I was doing >> index = mylist.index(entry) >> >> So, I could obviously do >> index = mylist.tolist().index(entry) >> >> but I don't know if that would be slower or clumsier than something like >> bool_vect = where(mylist==entry)[0] >> index = bool_vect[0] >> >> or just >> >> index = where(mylist==entry)[0][0] >> >> Any thoughts? ?Is there an easier way? >> >> Thanks, >> >> Ryan >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From kwgoodman at gmail.com Mon Dec 21 23:51:51 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 21 Dec 2009 20:51:51 -0800 Subject: [SciPy-User] weave.inline version of scipy.stats.rankdata Message-ID: scipy.stats.rankdata looks like an easy function to speed up with weave.inline. Well, easy if I knew what I was doing. My attempt below sometimes segfaults and sometimes gives the wrong result on the first run (but gives the right result on subsequent runs). Can anyone spot what I am doing wrong? (A note about the code: I used "if...else if" since ||, or, wouldn't compile.) 
import numpy as np from scipy import weave from scipy.weave import converters def rankdata(a): a = np.asarray(a, dtype=float) n = len(a) svec, ivec = fastsort(a) newarray = np.zeros(n, float) code = """ int dupcount; float sumranks, averank; sumranks = 0.0; dupcount = 0; for (int i=0; i < n; ++i) { sumranks = sumranks + i; dupcount = dupcount + 1; if (i==n-1) { averank = sumranks / float(dupcount) + 1; for (int j=i-dupcount+1; j < i+2; ++j) { newarray[ivec[j]] = averank; } sumranks = 0; dupcount = 0; } else if (svec[i] != svec[i+1]) { averank = sumranks / float(dupcount) + 1; for (int j=i-dupcount+1; j < i+2; ++j) { newarray[ivec[j]] = averank; } sumranks = 0; dupcount = 0; } } """ err = weave.inline(code, ['svec', 'ivec', 'n', 'newarray'], type_converters=converters.blitz, compiler='gcc') return newarray def fastsort(a): # fixme: the wording in the docstring is nonsense. """Sort an array and provide the argsort. Parameters ---------- a : array Returns ------- (sorted array, indices into the original array, ) """ it = np.argsort(a) as_ = a[it] return as_, it From cournape at gmail.com Tue Dec 22 00:57:47 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 22 Dec 2009 14:57:47 +0900 Subject: [SciPy-User] Numpy Building(AMD64) In-Reply-To: References: Message-ID: <5b8d13220912212157n3a8ad37bw8a7fee4e50678062@mail.gmail.com> On Mon, Dec 21, 2009 at 10:19 AM, Ewan Maxwell wrote: > Hi! > > I cannot [Build Numpy] which I want for [PyCuda] > > I have [Windows 7(x64)] and PSDK 6.1(tried 7, didn't work either) > ? ?Compilers: > ? ? ? ?VS 2008(Not Express) > ? ? ? ?gfortran(http://gcc.gnu.org/wiki/GFortranBinaries; > http://quatramaran.ens.fr/~coudert/gfortran/gfortran-windows.exe) > ? ? ? ?Python 2.6.4(AMD64) Note that the gfortran on that page seems to be 32 bits only, so that cannot work. 
But even if you use gfortran which targets 64 bits (from mingw-w64), it will not work anyway, as linking VS 2008 and gfortran together on 64 bits windows is tricky to say the least. > > What I'm I doing wrong? What can I do better? Two choices at least: - you actually do not need a fortran compiler at all to build numpy (only needed for scipy) - if you still want to compile with fortran, then the only one known to work at the moment is Ifort from Intel. I recently wrote how to build numpy with VS 2008 and ifort on 64 bits with numscons on the ML, look in the archive, cheers, David From ryanlists at gmail.com Tue Dec 22 09:16:04 2009 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 22 Dec 2009 08:16:04 -0600 Subject: [SciPy-User] equivalent of tolist().index(entry) for numpy 1d array of strings In-Reply-To: References: Message-ID: > If you are using ipython then it is handly, and more accurate, to use > timeit. At the ipython prompt try: > timeit where(self.md5sum==photo.md5sum)[0][0] Thanks for the tip. I have all but given up on using timeit in scripts because I can't find my way around namespace issues. But ipython (not surprisingly) handles the namespace problems nicely. Thanks again, Ryan On Mon, Dec 21, 2009 at 8:27 PM, Keith Goodman wrote: > On Mon, Dec 21, 2009 at 6:09 PM, Ryan Krauss wrote: >> I am still open to more elegant solutions, but it seems like my >> concerns about .tolist() being inefficient are unfounded (this may be >> an indicator that I don't understand the inner workings of numpy very >> well). >> >> Here is my test: >> >> t1 = time.time() >> index1 = where(self.md5sum==photo.md5sum)[0][0] >> t2 = time.time() >> index2 = mysearch(self.md5sum, photo.md5sum) >> t3 = time.time() >> index3 = self.md5sum.tolist().index(photo.md5sum) >> t4 = time.time() > > If you are using ipython then it is handly, and more accurate, to use > timeit. 
At the ipython prompt try: > > timeit where(self.md5sum==photo.md5sum)[0][0] > >> >> All 3 approaches lead to the same result. Here are my timing results:
>> t2-t1=4.81605529785e-05
>> t3-t2=4.98294830322e-05
>> t4-t3=2.00271606445e-05
>>
>> def mysearch(arrayin, element):
>>     bool_vect = where(arrayin==element)[0]
>>     assert(len(bool_vect)==1), 'Did not find exactly 1 match for ' + str(element)
>>     return bool_vect[0]
> > If element is not in arrayin then mysearch will crash. Same for .index. > >> >> Now, for this test, the arrays didn't have very many elements (10 ish). >> >> FWIW, >> >> Ryan >> >> On Mon, Dec 21, 2009 at 7:53 PM, Ryan Krauss wrote: >>> I wrote some code to work with csv spreadsheet files by reading the >>> columns into lists, but I need to rework the code to work with numpy >>> 1d arrays of strings rather than lists. I need to search one of these >>> columns/arrays. What is the best way to find the index for the >>> element that matches a certain string (or maybe just the first element >>> to match such a string)? >>> >>> With the columns as lists, I was doing
>>> index = mylist.index(entry)
>>>
>>> So, I could obviously do
>>> index = mylist.tolist().index(entry)
>>>
>>> but I don't know if that would be slower or clumsier than something like
>>> bool_vect = where(mylist==entry)[0]
>>> index = bool_vect[0]
>>>
>>> or just
>>>
>>> index = where(mylist==entry)[0][0]
>>>
>>> Any thoughts? Is there an easier way? 
>>> >>> Thanks, >>> >>> Ryan >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From massimodisasha at yahoo.it Tue Dec 22 13:41:03 2009 From: massimodisasha at yahoo.it (Massimo Di Stefano) Date: Tue, 22 Dec 2009 19:41:03 +0100 Subject: [SciPy-User] SciPy-User Digest, Vol 76, Issue 38 In-Reply-To: References: Message-ID: Guys, thanks to all! Removing the old scipy.egg (from the superpack installation) made everything work fine; no more segfault. I also ran the tests, getting these results: ---------------------------------------------------------------------- Ran 4871 tests in 937.000s FAILED (KNOWNFAIL=8, SKIP=38, errors=5, failures=4) Out[4]: Thanks for the wonderful link too! There is a lot of useful information in it. Thanks again, and a happy end of year to all! Massimo. On 22 Dec 2009, at 05:39, scipy-user-request at scipy.org wrote: >> Hi, >> >> i'm trying to install scipy from svn on mac osx 10.6 From kwgoodman at gmail.com Tue Dec 22 16:46:00 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 22 Dec 2009 13:46:00 -0800 Subject: [SciPy-User] scipy unit test failures Message-ID: I built scipy for the first time (0.7.1). No different than building numpy, it just takes longer. Somebody did something right when designing all this. I get 16 failures. 
They all look similar to this: ====================================================================== FAIL: test_identity (test_construct.TestConstructUtils) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.6/site-packages/scipy/sparse/tests/test_construct.py", line 70, in test_identity assert_equal( I.dtype, 'int8' ) File "/usr/local/lib/python2.6/site-packages/numpy/testing/utils.py", line 284, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: dtype('int8') DESIRED: 'int8' Is this anything to worry about? Or is it just a missing .type in the unit tests? From Scott.Askey at afit.edu Wed Dec 23 17:11:37 2009 From: Scott.Askey at afit.edu (Askey, Scott A Capt USAF AETC AFIT/ENY) Date: Wed, 23 Dec 2009 17:11:37 -0500 Subject: [SciPy-User] Parallelizing a for loop Message-ID: <792700546363C941B876B9D41AF4475902D689E6@MS-AFIT-03.afit.edu> I am looking for a good way to speed up the calculation of the residuals and the Jacobian for fsolve. For a small problem with 10 elements, the Jacobian (420x420) takes .04 seconds to calculate and the residuals (420x1) take .007 seconds. The residual and Jacobian elements are independent and calculated in a for loop. Is threading or multiprocessing more appropriate? Cheers, Scott From peridot.faceted at gmail.com Wed Dec 23 17:16:35 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 23 Dec 2009 18:16:35 -0400 Subject: [SciPy-User] Parallelizing a for loop In-Reply-To: <792700546363C941B876B9D41AF4475902D689E6@MS-AFIT-03.afit.edu> References: <792700546363C941B876B9D41AF4475902D689E6@MS-AFIT-03.afit.edu> Message-ID: 2009/12/23 Askey, Scott A Capt USAF AETC AFIT/ENY : > > I am looking for a good way to speed up the calculation of the residuals and the Jacobian for fsolve. 
> > For a small problem with 10 elements, the Jacobian (420x420) takes .04 seconds to calculate and the residuals (420x1) take .007 seconds. > The residual and Jacobian elements are independent and calculated in a for loop. > > Is threading or multiprocessing more appropriate? A first step, if this is possible, would be to eliminate the for loops using numpy. Since interpreting each python statement takes a fair number of microseconds, your code will run much faster if you can rewrite it in terms of array operations. With a little luck it may also simplify your code. Anne > Cheers, > > Scott > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From charlesr.harris at gmail.com Wed Dec 23 18:57:16 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 Dec 2009 16:57:16 -0700 Subject: [SciPy-User] Parallelizing a for loop In-Reply-To: <792700546363C941B876B9D41AF4475902D689E6@MS-AFIT-03.afit.edu> References: <792700546363C941B876B9D41AF4475902D689E6@MS-AFIT-03.afit.edu> Message-ID: On Wed, Dec 23, 2009 at 3:11 PM, Askey, Scott A Capt USAF AETC AFIT/ENY < Scott.Askey at afit.edu> wrote: > > I am looking for a good way to speed up the calculation of the residuals and the Jacobian for fsolve. > > For a small problem with 10 elements, the Jacobian (420x420) takes .04 seconds to calculate and the residuals (420x1) take .007 seconds. > The residual and Jacobian elements are independent and calculated in a for loop. > > Is threading or multiprocessing more appropriate? > > What problem are you trying to solve? That is, what problem leads you to fsolve? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scotta_2002 at yahoo.com Thu Dec 24 09:50:13 2009 From: scotta_2002 at yahoo.com (Scott Askey) Date: Thu, 24 Dec 2009 06:50:13 -0800 (PST) Subject: [SciPy-User] Parallelizing a for loop Message-ID: <669636.46522.qm@web36501.mail.mud.yahoo.com> Anne: A pointer to the tools in numpy for applying array operations to my function would be appreciated. 
I found scipy.vectorize, but it did not seem applicable to my 42 inputs. As best I could tell, the time required to calculate two elements (a 2 x dof residual, 500 times) was the same as calculating a 1 x dof residual 1000 times in a for loop. Chuck: It is a time marching structural dynamics 1-d beam problem. The 100 or so parameters are the initial conditions, geometric and material properties. The 42 dof are the configuration of the element at the end of the time step. Given the array of (42 * elements) initial conditions, the residuals for each element may be independently calculated. Currently I calculate the 42*elements residuals by running "for i in range(element)" and getting 42 residuals in each iteration. My element residual function is at worst a cubic polynomial. The element-wise residual and fprime functions are lambdified sympy functions. Cheers, Scott From wesmckinn at gmail.com Fri Dec 25 18:31:37 2009 From: wesmckinn at gmail.com (Wes McKinney) Date: Fri, 25 Dec 2009 18:31:37 -0500 Subject: [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library Message-ID: <6c476c8a0912251531k6fd08e75k103a5d93b5a0359f@mail.gmail.com> Hello all, I'm very happy to announce the release of a new data analysis library that many of you will hopefully find useful. This release is the product of a long period of development and use; hence, despite the low version number, it is quite suitable for general use. The documentation is still a bit sparse but will become much more complete in the coming weeks and months.

Info / Documentation: http://pandas.sourceforge.net/
Overview slides: http://pandas.googlecode.com/files/nyfpug.pdf

What it is
==========

pandas is a library for pan-el da-ta analysis, i.e. multidimensional time series and cross-sectional data sets commonly found in statistics, econometrics, or finance. 
It provides convenient and easy-to-understand NumPy-based data structures for generic labeled data, with a focus on automatically aligning data based on its label(s) and handling missing observations. One major goal of the library is to simplify the implementation of statistical models on unreliable data.

Main Features
=============

* Data structures: for 1, 2, and 3 dimensional labeled data sets. Some of their main features include:
* Automatically aligning data
* Handling missing observations in calculations
* Convenient slicing and reshaping ("reindexing") functions
* Provide 'group by' aggregation or transformation functionality
* Tools for merging / joining together data sets
* Simple matplotlib integration for plotting
* Date tools: objects for expressing date offsets or generating date ranges; some functionality similar to scikits.timeseries
* Statistical models: convenient ordinary least squares and panel OLS implementations for in-sample or rolling time series / cross-sectional regressions. These will hopefully be the starting point for implementing other models

pandas is not necessarily intended as a standalone library but rather as something which can be used in tandem with other NumPy-based packages like scikits.statsmodels. Where possible wheel-reinvention has largely been avoided. Also, its time series manipulation capability is not as extensive as scikits.timeseries; pandas does have its own time series object which fits into the unified data model. Some other useful tools for time series data (moving average, standard deviation, etc.) 
Where to get it =============== The source code is currently hosted on googlecode at: http://pandas.googlecode.com Releases can be downloaded currently on the Python package index or using easy_install PyPi: http://pypi.python.org/pypi/pandas/ License ======= BSD Documentation ============= The official documentation is hosted on SourceForge. http://pandas.sourceforge.net/ The sphinx documentation is still in an incomplete state, but it should provide a good starting point for learning how to use the library. Expect the docs to continue to expand as time goes on. Background ========== Work on pandas started at AQR (a quantitative hedge fund) in 2008 and has been under active development since then. Discussion and Development ========================== Since pandas development is related to a number of other scientific Python projects, questions are welcome on the scipy-user mailing list. Specialized discussions or design issues should take place on the pystatsmodels mailing list / google group, where scikits.statsmodels and other libraries will also be discussed: http://groups.google.com/group/pystatsmodels Best regards, Wes McKinney From charlesr.harris at gmail.com Thu Dec 24 15:10:45 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 24 Dec 2009 13:10:45 -0700 Subject: [SciPy-User] Parallelizing a for loop In-Reply-To: <673196.1921.qm@web36502.mail.mud.yahoo.com> References: <673196.1921.qm@web36502.mail.mud.yahoo.com> Message-ID: On Thu, Dec 24, 2009 at 10:02 AM, Scott Askey wrote: > Also, where might I find guidance for how to get rid of for loops/ > > > > > ----- Original Message ---- > From: Scott Askey > To: scipy-user at scipy.org > Sent: Thu, December 24, 2009 9:50:13 AM > Subject: Re: Parallelizing a for loop > > Anne: > > A pointer to the tools in numpy to apply array operation to my function > would be appreciated. 
> I found scipy.vectorize, but it did not seem applicable to my 42 > inputs. > > As best I could tell, the time required to calculate two > elements (a 2 x dof residual, 500 times) was the same as calculating a 1 x > dof residual 1000 times in a for loop. > > Chuck: > > It is a time marching structural dynamics 1-d beam problem. The 100 or so > parameters are the initial conditions, geometric and material properties. > The 42 dof are the configuration of the element at the end of the time > step. Given the array of (42 * elements) initial conditions, the residuals > for each element may be independently calculated. > > Currently I calculate the 42*elements residuals by running "for i in > range(element)" and getting 42 residuals in each iteration. > > My element residual function is at worst a cubic polynomial. > The element-wise residual and fprime functions are lambdified sympy > functions. > > The reason I asked is that there are a couple of finite element packages available for python. Two of them are SfePy and FiPy. I don't know if either is directly applicable to your problem. As to fsolve, are you directly computing the Jacobian or do you let the routine compute it numerically? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From eadrogue at gmx.net Sat Dec 26 11:41:50 2009 From: eadrogue at gmx.net (Ernest Adrogué) Date: Sat, 26 Dec 2009 17:41:50 +0100 Subject: [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library In-Reply-To: <6c476c8a0912251531k6fd08e75k103a5d93b5a0359f@mail.gmail.com> References: <6c476c8a0912251531k6fd08e75k103a5d93b5a0359f@mail.gmail.com> Message-ID: <20091226164150.GA14481@doriath.local> 25/12/09 @ 18:31 (-0500), thus spake Wes McKinney: [...] > Main Features > ============= > > * Data structures: for 1, 2, and 3 dimensional labeled data > sets. 
> Some of their main features include:
>
> * Automatically aligning data
> * Handling missing observations in calculations
> * Convenient slicing and reshaping ("reindexing") functions
> * Provide 'group by' aggregation or transformation functionality
> * Tools for merging / joining together data sets
> * Simple matplotlib integration for plotting

[...] This looks great. I was really missing a convenient data structure like DataFrame and DataMatrix in Numpy... I think they will come in very handy!! Cheers. Ernest From Scott.Askey at afit.edu Sat Dec 26 14:09:40 2009 From: Scott.Askey at afit.edu (Askey, Scott A Capt USAF AETC AFIT/ENY) Date: Sat, 26 Dec 2009 14:09:40 -0500 Subject: [SciPy-User] Parallelizing a for loop References: Message-ID: <792700546363C941B876B9D41AF4475902D689E8@MS-AFIT-03.afit.edu> Chuck For a "small" problem of 10 elements (420 dof total): when fsolve is given a Jacobian function, the problem solves in 2.2 seconds (jerval=3 and feval=23); without the Jacobian it takes 20+ seconds. Scott - The reason I asked is that there are a couple of finite element packages available for python. Two of them are SfePy and FiPy. I don't know if either is directly applicable to your problem. As to fsolve, are you directly computing the Jacobian or do you let the routine compute it numerically? Chuck From peridot.faceted at gmail.com Sat Dec 26 14:49:41 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 26 Dec 2009 15:49:41 -0400 Subject: [SciPy-User] Parallelizing a for loop In-Reply-To: <669636.46522.qm@web36501.mail.mud.yahoo.com> References: <669636.46522.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/24 Scott Askey : > Anne: > > A pointer to the tools in numpy for applying array operations to my function would be appreciated. > I found scipy.vectorize, but it did not seem applicable to my 42 inputs.

That's a little difficult, since vectorizing operations is the basic function of numpy. 
(You're right, the vectorize function isn't really relevant here; it doesn't give any speed benefit, just a convenience one.) For a start, pretty much any calculation that you can write as a simple formula is automatically vectorized:

In [2]: x = np.linspace(0,1,10)

In [3]: y = x**3 - x**2 + x - 1

In [4]: y
Out[4]:
array([-1.        , -0.89986283, -0.81618656, -0.74074074, -0.66529492,
       -0.58161866, -0.48148148, -0.35665295, -0.19890261,  0.        ])

If your coefficients vary, that works too:

In [5]: a = np.linspace(0,1,10)

In [6]: b = np.linspace(-1,1,10)

In [7]: a*x**3+b
Out[7]:
array([-1.        , -0.77762536, -0.5531169 , -0.32098765, -0.07209267,
        0.20637098,  0.5308642 ,  0.92150587,  1.40207285,  2.        ])

There's lots more, but the point is, since each of these operations runs in only a handful of python-level instructions, and all the element-wise operations are done in tight C loops, the calculations run vastly faster when you write them this way. > As best I could tell, the time required to calculate two elements (a 2 x dof residual, 500 times) was the same as calculating a 1 x dof residual 1000 times in a for loop. > > Chuck: > > It is a time marching structural dynamics 1-d beam problem. The 100 or so parameters are the initial conditions, geometric and material properties. The 42 dof are the configuration of the element at the end of the time step. Given the array of (42 * elements) initial conditions, the residuals for each element may be independently calculated. > > Currently I calculate the 42*elements residuals by running "for i in range(element)" and getting 42 residuals in each iteration. > > My element residual function is at worst a cubic polynomial. > The element-wise residual and fprime functions are lambdified sympy functions.

I strongly recommend rewriting these so that they are expressed as (say) a single cubic with arrays as coefficients. 
You may have to use some cleverness to reexpress the sympy functions in a convenient way, but you will almost certainly find the speed gain worth it. Anne From burak.o.cankurtaran at alumni.uts.edu.au Sat Dec 26 18:29:04 2009 From: burak.o.cankurtaran at alumni.uts.edu.au (Burak1327) Date: Sat, 26 Dec 2009 15:29:04 -0800 (PST) Subject: [SciPy-User] [SciPy-user] 3D interpolation Message-ID: <26919717.post@talk.nabble.com> Hi, I have a 3D array of data with dimensions (40x40x40). The majority of the data consists of zeros, with a relative handful of negative numbers (all between -3115 and -3020). I've been attempting to interpolate this data to a grid of 80x80x80 dimensions. I want to interpolate the existing data for points in between the existing points. The newly interpolated results seem to be incorrect. There are now large positive numbers, and the new minimum value is much lower than that from the coarser-grid dataset. I have probably misunderstood the purpose of the coordinates variable in map_coordinates(). I've listed a snippet of the code below:

# Read the PES data
pes40 = ReadPES("pes-0.5-noopt.xsf", 40)

# Interpolation
newx,newy,newz = mgrid[0:40:0.5, 0:40:0.5, 0:40:0.5]
coords = array([newx, newy, newz])
pes80 = ndimage.map_coordinates(pes40, coords)

Thanks Burak -- View this message in context: http://old.nabble.com/3D-interpolation-tp26919717p26919717.html Sent from the Scipy-User mailing list archive at Nabble.com. From denis-bz-gg at t-online.de Sun Dec 27 05:45:42 2009 From: denis-bz-gg at t-online.de (denis) Date: Sun, 27 Dec 2009 02:45:42 -0800 (PST) Subject: [SciPy-User] [SciPy-user] 3D interpolation In-Reply-To: <26919717.post@talk.nabble.com> References: <26919717.post@talk.nabble.com> Message-ID: <40994017-3929-4841-93f8-a00d2095827e@a21g2000yqc.googlegroups.com> Burak, there's a short 2d example in http://advice.mechanicalkern.com/question/17/getting-started-with-2d-interpolation-in-scipy ; does that help ? 
Let me know please cheers -- denis From wizzard028wise at gmail.com Sun Dec 27 07:42:31 2009 From: wizzard028wise at gmail.com (Dorian) Date: Sun, 27 Dec 2009 13:42:31 +0100 Subject: [SciPy-User] Functional nonlinear equation Message-ID: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> Hi all, How do I solve the following functional nonlinear equation

f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0

where "a" is a positive parameter. If there is a way to plot f(x,y) for different values of "a", it would be very much appreciated. Thanking you in advance Dorian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wizzard028wise at gmail.com Sun Dec 27 09:35:28 2009 From: wizzard028wise at gmail.com (Dorian) Date: Sun, 27 Dec 2009 15:35:28 +0100 Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> Message-ID: <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> Is there any way to solve it in the functional form ? I do not have "mathematica" to do it. Thanks for the plot suggestion. Dorian On Sun, Dec 27, 2009 at 3:00 PM, Richard wrote: > I assume , x,y,a are real variables. I would try using a contour plotting > program. There is probably something like that already available in > matplotlib. You would evaluate f(x,y) on a grid of (x,y) values and the > contouring program will perform an efficient search for the points in the > x,y plane where f = . > > > Richard > > > > On Dec 27, 2009, at 7:42 AM, Dorian wrote: > > Hi all, > > How do I solve the following functional nonlinear equation > > f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0 > > where "a" is positive parameter. > > If there is a way to plot f(x,y) for different values of "a" > it will be very appreciated. > > Thanking you in advance > > Dorian > _______________________________________________ > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Sun Dec 27 09:58:10 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 27 Dec 2009 09:58:10 -0500 Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> Message-ID: <1cd32cbb0912270658j358b7f12m9e83191495e4940d@mail.gmail.com> On Sun, Dec 27, 2009 at 9:35 AM, Dorian wrote: > Is there any way to solve it in the functional form ? > I do not have "mathematica" to do it. > Thanks for the plot suggestion. > > Dorian > > On Sun, Dec 27, 2009 at 3:00 PM, Richard wrote: >> >> I assume x, y, a are real variables. I would try using a contour plotting >> program. There is probably something like that already available in >> matplotlib. You would evaluate f(x,y) on a grid of (x,y) values and the >> contouring program will perform an efficient search for the points in the >> x,y plane where f = 0. >> >> Richard >> >> >> On Dec 27, 2009, at 7:42 AM, Dorian wrote: >> >> Hi all, >> >> How do I solve the following functional nonlinear equation >> >> f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0 >> >> where "a" is a positive parameter. >> >> If there is a way to plot f(x,y) for different values of "a" >> it would be very much appreciated. >> >> Thanking you in advance >> >> Dorian

For y(x) s.t. f(x,y) = 0, I would loop over x in linspace(xl, xh) and use scipy.optimize.fsolve or any other root finder to find y given x (and a). The only problem is if y(x) is not a function, e.g. if f(x,y)=0 describes a circle. Maybe local starting values help tracing the different arms. There might be other parameterizations to get the full contour, e.g. use the same trick as matplotlib, or interpolation on a grid. The only cases I did had a unique y given x. 
Josef >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From lou_boog2000 at yahoo.com Sun Dec 27 10:21:22 2009 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 27 Dec 2009 07:21:22 -0800 (PST) Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> Message-ID: <665830.49858.qm@web34408.mail.mud.yahoo.com> From: Dorian To: SciPy Users List Sent: Sun, December 27, 2009 9:35:28 AM Subject: Re: [SciPy-User] Functional nonlinear equation

Is there any way to solve it in the functional form ? I do not have "mathematica" to do it. Thanks for the plot suggestion. Dorian

On Sun, Dec 27, 2009 at 3:00 PM, Richard wrote:

> I assume x, y, a are real variables. I would try using a contour plotting
> program. There is probably something like that already available in
> matplotlib. You would evaluate f(x,y) on a grid of (x,y) values and the
> contouring program will perform an efficient search for the points in the
> x,y plane where f = 0.
>
> Richard
>
> On Dec 27, 2009, at 7:42 AM, Dorian wrote:
>
>> Hi all,
>>
>> How do I solve the following functional nonlinear equation
>>
>> f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0
>>
>> where "a" is a positive parameter.
>>
>> If there is a way to plot f(x,y) for different values of "a"
>> it would be very much appreciated.
>>
>> Thanking you in advance
>>
>> Dorian
>> _______________________________________________

Am I missing something? (It is early). 
Just do some algebra and get: > > >x = y/(y^(1/a-1)/a - 1)^(1/(1/a-1)) > > >Please check my math. But you can solve for either x or y in terms of y or x, respectively. Is that what you want? > > >Bonus question #1: what happens when a=1? >Bonus question #2: what happens when y^(1/a-1)=a ? > > -- Lou Pecora, my views are my own. From charlesr.harris at gmail.com Sun Dec 27 10:41:10 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 Dec 2009 08:41:10 -0700 Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> Message-ID: On Sun, Dec 27, 2009 at 5:42 AM, Dorian wrote: > Hi all, > > How do I solve the following functional nonlinear equation > > f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0 > > where "a" is positive parameter. > > If there is a way to plot f(x,y) for different values of "a" > it will be very appreciated. > > Substitute u = x^(1/a - 1), v = y^(1/a - 1) and things will simplify considerably. The graph of f(x,y) is a translated hyperbola in those coordinates. Chuck
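Lou's closed form is easy to sanity-check numerically. A quick check of my own (not from the thread), for a = 0.5 where the exponent 1/a - 1 equals 1:

```python
# verify that x = y/(y^(1/a-1)/a - 1)^(1/(1/a-1)) satisfies f(x, y) = 0
a = 0.5
e = 1.0 / a - 1.0   # = 1 for a = 0.5

def f(x, y):
    return (-1.0 / a**2) * (x * y)**e + (1.0 / a) * x**e + (1.0 / a) * y**e

for y in (0.7, 1.0, 1.5, 3.0):
    x = y / (y**e / a - 1.0)**(1.0 / e)
    assert abs(f(x, y)) < 1e-9
```

The formula blows up exactly where bonus question #2 points: when y^(1/a-1) = a the denominator vanishes, so that branch of y has no corresponding finite x.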
From silva at lma.cnrs-mrs.fr Sun Dec 27 11:00:08 2009 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Sun, 27 Dec 2009 17:00:08 +0100 Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: <665830.49858.qm@web34408.mail.mud.yahoo.com> References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> <665830.49858.qm@web34408.mail.mud.yahoo.com> Message-ID: <1261929608.2081.4.camel@Portable-s2m.cnrs-mrs.fr> Le dimanche 27 décembre 2009 à 07:21 -0800, Lou Pecora a écrit : > Hi all, > How do I solve the following functional nonlinear equation > f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0 > where "a" is positive parameter. > > > Am I missing something? (It is early). Just do some algebra and get: > x = y/(y^(1/a-1)/a - 1)^(1/(1/a-1)) > Please check my math. But you can solve for either x or y in terms of > y or x, respectively. Is that what you want? You should be careful with assertions like (x^b)^(1/b)=x, which may be false: for example sqrt(x**2)=abs(x)!=x. The consequence is that the locus given by Charles (hyperbola) may not be the entire set of solutions. Plotting the function may help to 'guess' other solutions.
-- Fabrice Silva LMA UPR CNRS 7051 - équipe S2M From lou_boog2000 at yahoo.com Sun Dec 27 11:21:47 2009 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Sun, 27 Dec 2009 08:21:47 -0800 (PST) Subject: [SciPy-User] Functional nonlinear equation In-Reply-To: <1261929608.2081.4.camel@Portable-s2m.cnrs-mrs.fr> References: <674a602a0912270442g5bc29af0lc36f2573cb464a11@mail.gmail.com> <674a602a0912270635i49adbda1ga57d9e41d666e748@mail.gmail.com> <665830.49858.qm@web34408.mail.mud.yahoo.com> <1261929608.2081.4.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <389144.78893.qm@web34406.mail.mud.yahoo.com> ----- Original Message ---- From: Fabrice Silva To: scipy-user at scipy.org Sent: Sun, December 27, 2009 11:00:08 AM Subject: Re: [SciPy-User] Functional nonlinear equation Le dimanche 27 décembre 2009 à 07:21 -0800, Lou Pecora a écrit : > Hi all, > How do I solve the following functional nonlinear equation > f(x,y)= (-1/a^2)*(x*y)^(1/a-1)+(1/a)*x^(1/a-1)+(1/a)*y^(1/a-1) =0 > where "a" is positive parameter. > > > Am I missing something? (It is early). Just do some algebra and get: > x = y/(y^(1/a-1)/a - 1)^(1/(1/a-1)) > Please check my math. But you can solve for either x or y in terms of > y or x, respectively. Is that what you want? You should be careful with assertions like (x^b)^(1/b)=x, which may be false: for example sqrt(x**2)=abs(x)!=x. The consequence is that the locus given by Charles (hyperbola) may not be the entire set of solutions. Plotting the function may help to 'guess' other solutions. -- Fabrice Silva Right. And for negative x or y values we are working on the complex plane (sort of a generalization of your point). There are plenty of angles in this problem. I just mentioned the singularities. Nothing against plotting the function (for real values), but writing explicit solutions can give you more insight, too. -- Lou Pecora, my views are my own.
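Lou's remark about landing on the complex plane for negative x or y is easy to demonstrate. A small illustration of my own (assuming Python 3, whose float power returns a complex result for a negative base and fractional exponent, instead of raising as Python 2 did):

```python
# with a = 2/3 the exponent 1/a - 1 is 0.5, so a negative
# argument raised to it leaves the real line
a = 2.0 / 3.0
e = 1.0 / a - 1.0        # ~0.5, a fractional exponent

z = (-1.0) ** e          # Python 3 gives a complex result, approximately 1j
```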
From kwgoodman at gmail.com Sun Dec 27 12:13:45 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 27 Dec 2009 09:13:45 -0800 Subject: [SciPy-User] kmeans Message-ID: Just for fun I tried kmeans on a problem that has a LOT of features. I got an error: UnboundLocalError: local variable 'best_book' referenced before assignment The exception is raised because all sets of randomly selected starting centroids gave a distance greater than the initialized value of best_dist (100k) and therefore best_book is never assigned. Increasing the hard coded best_dist solved the problem. Is there a reason not to initialize best_dist to np.inf? From dwf at cs.toronto.edu Sun Dec 27 13:52:02 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 27 Dec 2009 13:52:02 -0500 Subject: [SciPy-User] kmeans In-Reply-To: References: Message-ID: On 27-Dec-09, at 12:13 PM, Keith Goodman wrote: > Just for fun I tried kmeans on a problem that has a LOT of features. I > got an error: > > UnboundLocalError: local variable 'best_book' referenced before > assignment > > The exception is raised because all sets of randomly selected starting > centroids gave a distance greater than the initialized value of > best_dist (100k) and therefore best_book is never assigned. Increasing > the hard coded best_dist solved the problem. Is there a reason not to > initialize best_dist to np.inf? No, probably not. Good catch. Can you open a ticket? David From kwgoodman at gmail.com Sun Dec 27 20:09:15 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 27 Dec 2009 17:09:15 -0800 Subject: [SciPy-User] kmeans In-Reply-To: References: Message-ID: On Sun, Dec 27, 2009 at 10:52 AM, David Warde-Farley wrote: > On 27-Dec-09, at 12:13 PM, Keith Goodman wrote: > >> Just for fun I tried kmeans on a problem that has a LOT of features. 
I >> got an error: >> >> UnboundLocalError: local variable 'best_book' referenced before >> assignment >> >> The exception is raised because all sets of randomly selected starting >> centroids gave a distance greater than the initialized value of >> best_dist (100k) and therefore best_book is never assigned. Increasing >> the hard coded best_dist solved the problem. Is there a reason not to >> initialize best_dist to np.inf? > > No, probably not. Good catch. Can you open a ticket? Here's the ticket: http://projects.scipy.org/scipy/ticket/1077 From cournape at gmail.com Sun Dec 27 20:34:38 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 28 Dec 2009 10:34:38 +0900 Subject: [SciPy-User] [ANN] Numpy 1.4.0 release Message-ID: <5b8d13220912271734m29388676m4db200fa6a02dc92@mail.gmail.com> Hi, I am pleased to announce the release of numpy 1.4.0. The highlights of this release are: - Faster import time - Extended array wrapping mechanism for ufuncs - New Neighborhood iterator (C-level only) - C99-like complex functions in npymath, and a lot of portability fixes for basic floating point math functions The full release notes are at the end of the email. The sources are uploaded to PyPI, and the binary installers will soon come on the sourceforge page: https://sourceforge.net/projects/numpy/ Thank you to everyone involved in this release: developers, users who reported bugs or fixed documentation, etc. Enjoy, the numpy developers. ========================= NumPy 1.4.0 Release Notes ========================= This minor release includes numerous bug fixes, as well as a few new features. It is backward compatible with the 1.3.0 release.
Highlights ========== * Faster import time * Extended array wrapping mechanism for ufuncs * New Neighborhood iterator (C-level only) * C99-like complex functions in npymath New features ============ Extended array wrapping mechanism for ufuncs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ An __array_prepare__ method has been added to ndarray to provide subclasses greater flexibility to interact with ufuncs and ufunc-like functions. ndarray already provided __array_wrap__, which allowed subclasses to set the array type for the result and populate metadata on the way out of the ufunc (as seen in the implementation of MaskedArray). For some applications it is necessary to provide checks and populate metadata *on the way in*. __array_prepare__ is therefore called just after the ufunc has initialized the output array but before computing the results and populating it. This way, checks can be made and errors raised before operations which may modify data in place. Automatic detection of forward incompatibilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previously, if an extension was built against version N of NumPy and used on a system with NumPy M < N, import_array was successful, which could cause crashes because version M does not have a function that exists in N. Starting from NumPy 1.4.0, this will cause a failure in import_array, so the error will be caught early on. New iterators ~~~~~~~~~~~~~ A new neighborhood iterator has been added to the C API. It can be used to iterate over the items in a neighborhood of an array, and can handle boundary conditions automatically. Zero and one padding are available, as well as arbitrary constant value, mirror and circular padding. New polynomial support ~~~~~~~~~~~~~~~~~~~~~~ New modules chebyshev and polynomial have been added. The new polynomial module is not compatible with the current polynomial support in numpy, but is much like the new chebyshev module.
The most noticeable difference to most will be that coefficients are specified from low to high power, that the low level functions do *not* work with the Chebyshev and Polynomial classes as arguments, and that the Chebyshev and Polynomial classes include a domain. Mapping between domains is a linear substitution and the two classes can be converted one to the other, allowing, for instance, a Chebyshev series in one domain to be expanded as a polynomial in another domain. The new classes should generally be used instead of the low level functions; the latter are provided for those who wish to build their own classes. The new modules are not automatically imported into the numpy namespace; they must be explicitly brought in with an "import numpy.polynomial" statement. New C API ~~~~~~~~~ The following C functions have been added to the C API: #. PyArray_GetNDArrayCFeatureVersion: return the *API* version of the loaded numpy. #. PyArray_Correlate2 - like PyArray_Correlate, but implements the usual definition of correlation. Inputs are not swapped, and the conjugate is taken for complex arrays. #. PyArray_NeighborhoodIterNew - a new iterator to iterate over a neighborhood of a point, with automatic boundary handling. It is documented in the iterators section of the C-API reference, and you can find some examples in the multiarray_test.c.src file in numpy.core. New ufuncs ~~~~~~~~~~ The following ufuncs have been added to the C API: #. copysign - return the value of the first argument with the sign copied from the second argument. #. nextafter - return the next representable floating point value of the first argument toward the second argument. New defines ~~~~~~~~~~~ The alpha processor is now defined and available in numpy/npy_cpu.h. The failed detection of the PARISC processor has been fixed. The defines are: #. NPY_CPU_HPPA: PARISC #. NPY_CPU_ALPHA: Alpha Testing ~~~~~~~ #.
deprecated decorator: this decorator may be used to avoid cluttering testing output while testing DeprecationWarning is effectively raised by the decorated test. #. assert_array_almost_equal_nulps: new method to compare two arrays of floating point values. With this function, two values are considered close if there are not many representable floating point values in between, thus being more robust than assert_array_almost_equal when the values fluctuate a lot. #. assert_array_max_ulp: raise an assertion if there are more than N representable numbers between two floating point values. #. assert_warns: raise an AssertionError if a callable does not generate a warning of the appropriate class, without altering the warning state. Reusing npymath ~~~~~~~~~~~~~~~ In 1.3.0, we started putting portable C math routines in npymath library, so that people can use those to write portable extensions. Unfortunately, it was not possible to easily link against this library: in 1.4.0, support has been added to numpy.distutils so that 3rd party can reuse this library. See coremath documentation for more information. Improved set operations ~~~~~~~~~~~~~~~~~~~~~~~ In previous versions of NumPy some set functions (intersect1d, setxor1d, setdiff1d and setmember1d) could return incorrect results if the input arrays contained duplicate items. These now work correctly for input arrays with duplicates. setmember1d has been renamed to in1d, as with the change to accept arrays with duplicates it is no longer a set operation, and is conceptually similar to an elementwise version of the Python operator 'in'. All of these functions now accept the boolean keyword assume_unique. This is False by default, but can be set True if the input arrays are known not to contain duplicates, which can increase the functions' execution speed. Improvements ============ #. numpy import is noticeably faster (from 20 to 30 % depending on the platform and computer) #. The sort functions now sort nans to the end. 
* Real sort order is [R, nan] * Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj] Complex numbers with the same nan placements are sorted according to the non-nan part if it exists. #. The type comparison functions have been made consistent with the new sort order of nans. Searchsorted now works with sorted arrays containing nan values. #. Complex division has been made more resistant to overflow. #. Complex floor division has been made more resistant to overflow. Deprecations ============ The following functions are deprecated: #. correlate: it takes a new keyword argument old_behavior. When True (the default), it returns the same result as before. When False, it computes the conventional correlation and takes the conjugate for complex arrays. The old behavior will be removed in NumPy 1.5, and raises a DeprecationWarning in 1.4. #. unique1d: use unique instead. unique1d raises a deprecation warning in 1.4, and will be removed in 1.5. #. intersect1d_nu: use intersect1d instead. intersect1d_nu raises a deprecation warning in 1.4, and will be removed in 1.5. #. setmember1d: use in1d instead. setmember1d raises a deprecation warning in 1.4, and will be removed in 1.5. The following raise errors: #. When operating on 0-d arrays, ``numpy.max`` and other functions accept only ``axis=0``, ``axis=-1`` and ``axis=None``. Using an out-of-bounds axis is an indication of a bug, so NumPy raises an error for these cases now. #. Specifying ``axis > MAX_DIMS`` is no longer allowed; NumPy now raises an error instead of behaving as for ``axis=None``. Internal changes ================ Use C99 complex functions when available ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The numpy complex types are now guaranteed to be ABI compatible with the C99 complex type, if available on the platform. Moreover, the complex ufuncs now use the platform C99 functions instead of our own.
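For what it's worth, the new copysign and nextafter ufuncs listed earlier are also callable from Python. A quick check (a sketch of mine, assuming a NumPy >= 1.4 install):

```python
import numpy as np

# copysign: magnitude of the first argument, sign of the second
a = np.copysign([1.0, 2.0, 3.0], [-1.0, 1.0, -1.0])

# nextafter: the next representable float from the first argument
# toward the second
b = np.nextafter(1.0, 2.0)   # smallest double strictly greater than 1.0
```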
split multiarray and umath source code ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The source code of multiarray and umath has been split into separate logical compilation units. This should make the source code more amenable to newcomers. Separate compilation ~~~~~~~~~~~~~~~~~~~~ By default, every file of multiarray (and umath) is merged into one for compilation, as was the case before, but if the NPY_SEPARATE_COMPILATION env variable is set to a non-negative value, experimental individual compilation of each file is enabled. This makes the compile/debug cycle much faster when working on core numpy. Separate core math library ~~~~~~~~~~~~~~~~~~~~~~~~~~ New functions which have been added: * npy_copysign * npy_nextafter * npy_cpack * npy_creal * npy_cimag * npy_cabs * npy_cexp * npy_clog * npy_cpow * npy_csqr * npy_ccos * npy_csin From kwgoodman at gmail.com Sun Dec 27 20:37:21 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 27 Dec 2009 17:37:21 -0800 Subject: [SciPy-User] kmeans and initial centroid guesses Message-ID: The kmeans function has two modes. In one of the modes the initial guesses for the centroids are randomly selected from the input data. The selection is currently done with replacement: guess = take(obs, randint(0, No, k), 0) That means some of the centroids in the initial guess might be the same. Wouldn't it be better to select without replacement? Something like guess = take(obs, rand(No).argsort()[:k], 0) Here's an extreme example of what can go wrong if the selection is done with replacement: >> obs array([[ 1, 1], [-1, -1], [-1, 1], [ 1, -1]]) >> vq.kmeans(obs, k_or_guess=4) (array([[-1, -1], [-1, 1], [ 1, -1], [ 1, 1]]), 0.0) # <--- good >> >> k_or_guess = obs[[1,1,1,1],:] >> k_or_guess array([[-1, -1], [-1, -1], [-1, -1], [-1, -1]]) >> vq.kmeans(obs, k_or_guess) (array([[0, 0]]), 1.4142135623730951) # <--- not as good In most cases it won't make any difference. But the cost of the code change is small.
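Keith's two selection schemes can be written side by side. A sketch (mine, not scipy's actual code), where np.random.permutation plays the role of his rand(No).argsort() trick:

```python
import numpy as np

obs = np.array([[1., 1.], [-1., -1.], [-1., 1.], [1., -1.]])
k = 3

# with replacement (current kmeans behaviour): duplicate rows possible
with_repl = obs[np.random.randint(0, len(obs), k)]

# without replacement: permute the row indices and take the first k
idx = np.random.permutation(len(obs))[:k]
without_repl = obs[idx]
```

Sampling k distinct row indices guarantees the initial centroids are distinct rows of obs — though, as David notes in his reply, distinct floating point rows can still be arbitrarily close to each other.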
From cournape at gmail.com Sun Dec 27 20:47:45 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 28 Dec 2009 10:47:45 +0900 Subject: [SciPy-User] kmeans and initial centroid guesses In-Reply-To: References: Message-ID: <5b8d13220912271747y621f3882s1b23e486f492aa7f@mail.gmail.com> On Mon, Dec 28, 2009 at 10:37 AM, Keith Goodman wrote: > The kmeans function has two modes. In one of the modes the initial > guesses for the centroids are randomly selected from the input data. > The selection is currently done with replacement: > > guess = take(obs, randint(0, No, k), 0) > > That means some of the centroids in the initial guess might be the > same. Wouldn't it be better to select without replacement? I think you are right, but random sampling without replacement for floating point values is a bit hard to use here: if two values are different but very close, you would see the same effect, right? Generally, for clustering algorithms, I think you'd want to start with centroids as far from each other as possible, so maybe the code could be improved by taking this into account. cheers, David From kwgoodman at gmail.com Sun Dec 27 21:07:38 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 27 Dec 2009 18:07:38 -0800 Subject: [SciPy-User] kmeans and initial centroid guesses In-Reply-To: <5b8d13220912271747y621f3882s1b23e486f492aa7f@mail.gmail.com> References: <5b8d13220912271747y621f3882s1b23e486f492aa7f@mail.gmail.com> Message-ID: On Sun, Dec 27, 2009 at 5:47 PM, David Cournapeau wrote: > On Mon, Dec 28, 2009 at 10:37 AM, Keith Goodman wrote: >> The kmeans function has two modes. In one of the modes the initial >> guesses for the centroids are randomly selected from the input data. >> The selection is currently done with replacement: >> >> guess = take(obs, randint(0, No, k), 0) >> >> That means some of the centroids in the intial guess might be the >> same. Wouldn't it be better to select without replacement?
> > I think you are right, but random sampling without replacement for > floating point values is a bit hard to use here: if two values are > different but very close, you would see the same effect, right ? > > Generally, for clustering algorithms, I think you'd you want to start > with centroids as far from each other as possible, so maybe the code > could be improved taking this into account. That's a good point. And it sounds like an interesting problem to find the set of k points that are farthest apart. But the time it takes to find far apart points could be used to do more iterations of randomly selected points. I see that kmeans2 has a few different methods for selecting the initial points. A method like the one you suggest could be added there. (Maybe use pdist to select the two farthest apart points and then select the point that is furthest away from the first two points and so on. It would get slow if a lot of points were needed.) For kmeans I'll file a ticket to draw without replacement. From ferrell at diablotech.com Mon Dec 28 00:00:32 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Sun, 27 Dec 2009 22:00:32 -0700 Subject: [SciPy-User] Scikits.timeseries, bug converting list to DateArray Message-ID: <5F04837C-BE4E-4230-8483-AD56B4E69260@diablotech.com> I've encountered a possible bug converting a list of datetime numbers to a DateArray. What's odd is I'm pretty sure this worked until recently, but I don't recall changing anything on my system. So, before I file a bug report, maybe somebody can confirm this is a timeseries problem and not my own install problem. I'm on OS X 10.6, Python 2.6.1, numpy 1.4.0.dev7542, timeseries 0.91.3. In [1]: import scikits.timeseries as ts In [2]: x = [731694.0] In [3]: [ts.Date('d', dt) for dt in x] Out[3]: [] In [4]: ts.date_array([ts.Date('d', y) for y in x]) Out[4]: DateArray([23-Apr-2004], freq='D') That all works fine. 
But, if I hand x to date_array as the dlist argument, I get a date_array, but I can't print it. In [11]: y = ts.date_array(dlist=x, freq='d') In [13]: type(y) Out[13]: In [14]: y Out[14]: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /Users/Shared/Develop/Financial/Lakshmi/ in () /Library/Python/2.6/site-packages/IPython/Prompts.pyc in __call__(self, arg) 550 551 # and now call a possibly user-defined print mechanism --> 552 manipulated_val = self.display(arg) 553 554 # user display hooks can change the variable to be stored in /Library/Python/2.6/site-packages/IPython/Prompts.pyc in _display(self, arg) 576 return IPython.generics.result_display(arg) 577 except TryNext: --> 578 return self.shell.hooks.result_display(arg) 579 580 # Assign the default display method: /Library/Python/2.6/site-packages/IPython/hooks.pyc in __call__(self, *args, **kw) 139 #print "prio",prio,"cmd",cmd #dbg 140 try: --> 141 ret = cmd(*args, **kw) 142 return ret 143 except ipapi.TryNext, exc: /Library/Python/2.6/site-packages/IPython/hooks.pyc in result_display(self, arg) 169 170 if self.rc.pprint: --> 171 out = pformat(arg) 172 if '\n' in out: 173 # So that multi-line strings line up with the left column of /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pprint.pyc in pformat(self, object) 109 def pformat(self, object): 110 sio = _StringIO() --> 111 self._format(object, sio, 0, 0, {}, 0) 112 return sio.getvalue() 113 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pprint.pyc in _format(self, object, stream, indent, allowance, context, level) 127 self._readable = False 128 return --> 129 rep = self._repr(object, context, level - 1) 130 typ = _type(object) 131 sepLines = _len(rep) > (self._width - 1 - indent - allowance) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pprint.pyc in _repr(self, object, context, level) 221 def _repr(self, object, 
context, level): 222 repr, readable, recursive = self.format(object, context.copy(), --> 223 self._depth, level) 224 if not readable: 225 self._readable = False /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pprint.pyc in format(self, object, context, maxlevels, level) 233 and whether the object represents a recursive construct. 234 """ --> 235 return _safe_repr(object, context, maxlevels, level) 236 237 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ pprint.pyc in _safe_repr(object, context, maxlevels, level) 318 return format % _commajoin(components), readable, recursive 319 --> 320 rep = repr(object) 321 return rep, (rep and not rep.startswith('<')), False 322 /Library/Python/2.6/site-packages/scikits.timeseries-0.91.3-py2.6- macosx-10.6-universal.egg/scikits/timeseries/tdates.pyc in __repr__(self) 334 335 def __repr__(self): --> 336 return ndarray.__repr__(self)[:-1] + \ 337 ",\n freq='%s')" % self.freqstr 338 /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/core/numeric.pyc in array_repr(arr, max_line_width, precision, suppress_small) 1334 if arr.size > 0 or arr.shape==(0,): 1335 lst = array2string(arr, max_line_width, precision, suppress_small, -> 1336 ', ', "array(") 1337 else: # show zero-length shape unless it is (0,) 1338 lst = "[], shape=%s" % (repr(arr.shape),) /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/core/arrayprint.pyc in array2string(a, max_line_width, precision, suppress_small, separator, prefix, style) 298 else: 299 lst = _array2string(a, max_line_width, precision, suppress_small, --> 300 separator, prefix) 301 return lst 302 /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/core/arrayprint.pyc in _array2string(a, max_line_width, precision, suppress_small, separator, prefix) 220 lst = _formatArray(a, format_function, len(a.shape), max_line_width, 221 
next_line_prefix, separator, --> 222 _summaryEdgeItems, summary_insert)[:-1] 223 224 return lst /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/core/arrayprint.pyc in _formatArray(a, format_function, rank, max_line_len, next_line_prefix, separator, edge_items, summary_insert) 344 s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) 345 --> 346 word = format_function(a[-1]) 347 s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) 348 s += line + "]\n" /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542-py2.6- macosx-10.6-universal.egg/numpy/core/arrayprint.pyc in __call__(self, x, strip_zeros) 436 437 def __call__(self, x, strip_zeros=True): --> 438 if isnan(x): 439 return self.special_fmt % (_nan_str,) 440 elif isinf(x): TypeError: function not supported for these types, and can't coerce safely to supported types From schut at sarvision.nl Mon Dec 28 05:36:09 2009 From: schut at sarvision.nl (Vincent Schut) Date: Mon, 28 Dec 2009 11:36:09 +0100 Subject: [SciPy-User] [SciPy-user] 3D interpolation In-Reply-To: <26919717.post@talk.nabble.com> References: <26919717.post@talk.nabble.com> Message-ID: On 12/27/2009 12:29 AM, Burak1327 wrote: > > Hi, > > I have a 3D array of data with dimensions (40x40x40). The majority of data > consists of zeros > with a relative handful of negative numbers (all between -3115 to -3020). > I've been attempting to interpolate this data to a grid of 80x80x80 > dimensions. > I want to interpolate the existing data for points in between the existing > points. > The newly interpolated results seem to be incorect. There are now large > positive numbers and > the new minimum value is much lower than the from the coarser grid dataset. > I probably have > misunderstood the purpose of the coordinates variable in map_coordinates(). IIRC map_coordinates by default uses 3rd order splines. This may lead to under/overshoots, which is probably what you're seeing. 
Try adding 'order=1' to your map_coordinates call, it will then use linear interpolation and values should never be lower than pes40.min() or higher than pes40.max(). Your results will be a bit less smooth, though, than with higher order splines. What I usually do to eliminate this is something like the following pseudocode: spline3 = map_coordinates(data, coords, order=3) # 3rd order spline spline1 = map_coordinates(data, coords, order=1) # linear spline0 = map_coordinates(data, coords, order=0) # nearest-neighbour undershoots = spline3 < spline0 overshoots = spline3 > spline0 result = numpy.where(undershoots | overshoots, spline1, spline3) this will give you 3rd order spline where there are no under/overshoots, and linear interpolated replacements where 3rd order gives under/overshoots. Vincent. > > I've listed a snippet of the code below: > > # Read the PES data > pes40 = ReadPES("pes-0.5-noopt.xsf", 40) > > # Interpolation > newx,newy,newz = mgrid[0:40:0.5, 0:40:0.5, 0:40:0.5] > coords = array([newx, newy, newz]) > pes80 = ndimage.map_coordinates(pes40, coords) > > Thanks > Burak From denis-bz-gg at t-online.de Mon Dec 28 13:26:31 2009 From: denis-bz-gg at t-online.de (denis) Date: Mon, 28 Dec 2009 10:26:31 -0800 (PST) Subject: [SciPy-User] [SciPy-user] 3D interpolation In-Reply-To: References: <26919717.post@talk.nabble.com> Message-ID: <51ecc5c2-21d6-4686-86cf-5cbd20ea78ff@j4g2000yqe.googlegroups.com> Vincent, you're right, splines can overshoot, though not by a lot: 0 1 1 0 overshoots by 20 % in the code snippet below. Burak, does order=1, just averaging 8 neighbors, work ? There are many kinds of cubic splines, including -- global cubic, which map_coordinates seems to use -- local B-spline: does not interpolate, go through the input points, but can't overshoot, stays in the convex hull of the input points -- local Catmull-Rom: interpolates, but can overshoot (over shoots and leaves :) ("Local" means that e.g. 
in 1d, out[ 3 to 4 ] is calculated from the 4 nearest input points at 2 3 4 5, or equivalently that in[3] affects only out[ 1 to 5 ]: a desirable property.) Different corners of Scipy have different spline routines, some with doc. To see exactly what they do, you can only plot their impulse response like so. cheers -- denis """ plot map_coordinates( 1d spike ) """ from __future__ import division import sys import numpy as np from scipy.ndimage import map_coordinates import pylab as pl N = 10 exec( "\n".join( sys.argv[1:] )) # N= ... np.set_printoptions( 2, threshold=100, suppress=True ) # .2f a = np.r_[ 5*[0], 1., 5*[0], 1,1, 5*[0], 5*[1] ] # 0 1 1 0 -> 1.2 na = len(a) - 1 x = np.linspace( 0, na, na*N + 1 ) print "a:", a.astype(int) for order in (3,): z = map_coordinates( a, [x], order=order ) print "z[%d] halfint:" % order, z[N//2::N] pl.plot( x, z ) pl.show() From rob.clewley at gmail.com Mon Dec 28 16:23:35 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 28 Dec 2009 16:23:35 -0500 Subject: [SciPy-User] [ANN] PyDSTool 0.88 -- dynamical systems modeling tools Message-ID: A new release of the dynamical systems modeling toolbox PyDSTool is available from Sourceforge: http://www.sourceforge.net/projects/pydstool/ Highlights from the release notes: * Cleanup of global imports, especially: entire numpy.random and linalg namespaces no longer imported by default * Added support for 'min' and 'max' keywords in functional specifications (for ODE right-hand sides, for instance) * Optimization tools from third-party genericOpt (included with permission) and improved parameter estimation examples making use of this code * Numerical phase-response calculations now possible in PRC toolbox * Fully-fledged DSSRT toolbox for neural modeling (see wiki page) * New tests/demonstrations in PyDSTool/tests * Major improvements to intelligent expr2func (symbolic -> python function conversion) * Improved compatibility with cross-platform use and with recent python versions 
and associated libraries * Added many minor features (see timeline on Trac http://jay.cam.cornell.edu/pydstool/timeline) * Fixed many bugs and quirks (see timeline on Trac http://jay.cam.cornell.edu/pydstool/timeline) This is mainly a bugfix release in preparation for a substantial upgrade at version 0.90, which will have a proper installer, unit testing, symbolic expression support via SymPy, and greatly improved interfacing to legacy ODE integrators. These features are being actively developed in 2009/2010. For installation and setting up, please carefully read the GettingStarted page at our wiki for platform-specific details: http://pydstool.sourceforge.net Please use the bug tracker and user discussion list at Sourceforge to report bugs or provide feedback. Code and documentation contributions are always welcome. Regards, Rob Clewley From burak.o.cankurtaran at alumni.uts.edu.au Tue Dec 29 08:26:49 2009 From: burak.o.cankurtaran at alumni.uts.edu.au (Burak1327) Date: Tue, 29 Dec 2009 05:26:49 -0800 (PST) Subject: [SciPy-User] [SciPy-user] 3D interpolation In-Reply-To: <51ecc5c2-21d6-4686-86cf-5cbd20ea78ff@j4g2000yqe.googlegroups.com> References: <26919717.post@talk.nabble.com> <51ecc5c2-21d6-4686-86cf-5cbd20ea78ff@j4g2000yqe.googlegroups.com> Message-ID: <26954588.post@talk.nabble.com> Thank you everyone for your replies. I tried Vincent's advice, and it seems to work. Though, the overshoot is much more than 20% in this case. 
Below I listed the minimum and maximum values of the data set, after
interpolation at different orders:

Raw Data
Minimum Value : -3118.532224
Maximum Value : 0.0 (the next value is about -3115)

Order = 0
Minimum Value : -3118.532224
Maximum Value : 0.0

Order = 1
Minimum Value : -3118.532224
Maximum Value : 0.0

Order = 2
Minimum Value : -3905.68081297
Maximum Value : 851.160340484

Order = 3
Minimum Value : -4077.10199215
Maximum Value : 1028.48609365

Thanks
Burak

denis-bz-gg wrote:
> 
> Vincent, you're right, splines can overshoot, though not by a lot:
> 0 1 1 0 overshoots by 20 % in the code snippet below.
> Burak, does order=1, just averaging 8 neighbors, work ?
> 
> There are many kinds of cubic splines, including
> -- global cubic, which map_coordinates seems to use
> -- local B-spline: does not interpolate (go through the input points),
> but can't overshoot, stays in the convex hull of the input points
> -- local Catmull-Rom: interpolates, but can overshoot (over shoots and
> leaves :)
> ("Local" means that e.g. in 1d, out[ 3 to 4 ] is calculated from
> the 4 nearest input points at 2 3 4 5,
> or equivalently that in[3] affects only out[ 1 to 5 ]: a desirable
> property.)
> 
> Different corners of Scipy have different spline routines, some with
> doc. To see exactly what they do, the only way is to plot their
> impulse response, like so.
> 
> cheers
> -- denis
> 
> """ plot map_coordinates( 1d spike )
> """
> 
> from __future__ import division
> import sys
> import numpy as np
> from scipy.ndimage import map_coordinates
> import pylab as pl
> 
> N = 10
> exec( "\n".join( sys.argv[1:] ))  # N= ...
> np.set_printoptions( 2, threshold=100, suppress=True )  # .2f
> 
> a = np.r_[ 5*[0], 1., 5*[0], 1,1, 5*[0], 5*[1] ]  # 0 1 1 0 -> 1.2
> na = len(a) - 1
> x = np.linspace( 0, na, na*N + 1 )
> print "a:", a.astype(int)
> 
> for order in (3,):
>     z = map_coordinates( a, [x], order=order )
>     print "z[%d] halfint:" % order, z[N//2::N]
>     pl.plot( x, z )
> 
> pl.show()
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
> 

From dg.gmane at thesamovar.net Tue Dec 29 12:48:45 2009
From: dg.gmane at thesamovar.net (Dan Goodman)
Date: Tue, 29 Dec 2009 17:48:45 +0000
Subject: [SciPy-User] scipy.weave bugs?
Message-ID: 

Hi,

I used to use scipy.weave.inline a lot, but recently I can't get it to
work. It seems that the bugs I come across are documented, and in some
cases even marked as fixed, but still recur in the latest versions of
scipy. For example, with MSVC 2008 Express installed I try this:

from scipy import weave
weave.inline('int x=1;', [])

and get compile errors about the __attribute__ identifier. From what
I've read online, this is a GCC-specific thing that breaks MSVC. If I
try:

from numpy import zeros
from scipy import weave
x = zeros(10)
weave.inline('int i=1;', ['x'],
             type_converters=weave.converters.blitz, compiler='msvc')

I get a Python exception:

AttributeError: 'array_info' object has no attribute 'msvc_msg'

If I try GCC instead:

from numpy import zeros
from scipy import weave
x = zeros(10)
weave.inline('for(int i=0;i<10;i++) x(i)+=1.0;', ['x'],
             type_converters=weave.converters.blitz, compiler='gcc')

I get errors that 'labs' is not a member of 'std'. This appears to be a
known problem with scipy with gcc-4.3 and is marked as fixed in 0.7.1,
but the problem is still there.
As well as that, I have another problem on another computer (which I don't have access to right now) where I have a user name with a space in it, and that causes several problems (although I think these problems are the fault of distutils and nothing to do with scipy). I found a way to work around that though by changing the environment variable for USERNAME from "Dan Goodman" to "DanGoo~1". In other words, the whole situation with scipy.weave seems to be a mess, and in fact I can't get my code to work at all. I think downgrading to GCC version 4.2 would probably work, but I had some trouble working out how to do that on Windows with Python(x,y). My question is, are the scipy developers aware of these problems and is there a schedule for getting them fixed? Having a functional scipy.weave is quite important to me, so I'm happy to help on testing solutions. Dan Goodman Versions for reference: OS: WinXP SP3 MSVC: 2008 Express (CL ver 15.00.30729.01) Python(x,y): 2.6.2.0 including: Scipy: 0.7.1 (updated from 0.7.0 in Python(x,y) I think) Numpy: 1.3.0 Python: 2.6.2.0 GCC: 4.4.0 (included as part of MinGW 4.4.0) From kwgoodman at gmail.com Tue Dec 29 13:08:03 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Dec 2009 10:08:03 -0800 Subject: [SciPy-User] scipy.weave bugs? In-Reply-To: References: Message-ID: On Tue, Dec 29, 2009 at 9:48 AM, Dan Goodman wrote: > If I try GCC instead: > > from numpy import zeros > from scipy import weave > x = zeros(10) > weave.inline('for(int i=0;i<10;i++) x(i)+=1.0;', ['x'], > type_converters=weave.converters.blitz, compiler='gcc') > > I get errors about 'labs' is not a member of 'std'. This appears to be a > known problem with scipy with gcc-4.3 and is marked as fixed in 0.7.1 > but the problem is still there. I get the same error on Ubuntu with numpy 1.4rc2, scipy 0.7.1, and gcc 4.4.1. 
From kwgoodman at gmail.com Tue Dec 29 13:09:53 2009 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 29 Dec 2009 10:09:53 -0800 Subject: [SciPy-User] kmeans and initial centroid guesses In-Reply-To: References: <5b8d13220912271747y621f3882s1b23e486f492aa7f@mail.gmail.com> Message-ID: On Sun, Dec 27, 2009 at 6:07 PM, Keith Goodman wrote: > On Sun, Dec 27, 2009 at 5:47 PM, David Cournapeau wrote: >> On Mon, Dec 28, 2009 at 10:37 AM, Keith Goodman wrote: >>> The kmeans function has two modes. In one of the modes the initial >>> guesses for the centroids are randomly selected from the input data. >>> The selection is currently done with replacement: >>> >>> guess = take(obs, randint(0, No, k), 0) >>> >>> That means some of the centroids in the intial guess might be the >>> same. Wouldn't it be better to select without replacement? >> >> I think you are right, but random sampling without replacement for >> floating point values is a bit hard to use here: if two values are >> different but very close, you would see the same effect, right ? >> >> Generally, for clustering algorithms, I think you'd you want to start >> with centroids as far from each other as possible, so maybe the code >> could be improved taking this into account. > > That's a good point. And it sounds like an interesting problem to find > the set of k points that are farthest apart. But the time it takes to > find far apart points could be used to do more iterations of randomly > selected points. I see that kmeans2 has a few different methods for > selecting the initial points. A method like the one you suggest could > be added there. (Maybe use pdist to select the two farthest apart > points and then select the point that is furthest away from the first > two points and so on. It would get slow if a lot of points were > needed.) For kmeans I'll file a ticket to draw without replacement. 
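[Drawing the initial centroids without replacement is a one-liner with numpy's permutation; a sketch of what the patched initialization might look like (hypothetical helper, not the actual diff on the ticket):]

```python
import numpy as np

def kmeans_init(obs, k, seed=None):
    """Pick k distinct rows of obs as initial centroids (no replacement)."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(obs))[:k]  # k distinct row indices
    return np.take(obs, idx, 0)

obs = np.arange(20.0).reshape(10, 2)
guess = kmeans_init(obs, 4, seed=0)
print(len(set(map(tuple, guess))))  # 4 distinct centroids, guaranteed
```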
I opened a ticket and created a patch: http://projects.scipy.org/scipy/ticket/1078 From jdh2358 at gmail.com Tue Dec 29 13:47:30 2009 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 29 Dec 2009 12:47:30 -0600 Subject: [SciPy-User] modeling GARCH process Message-ID: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> I would like to model a time series as a GARCH process (estimate the parameters of a model and then generate random samples with these parameters). There are numerous ways to do this in R, but I prefer python if possible. I notice that Enthought teaches this in their Finance for Quants lectures. Does anyone have some example code for modeling and simulating GARCH processes in python? Thanks, JDH From josef.pktd at gmail.com Tue Dec 29 14:34:37 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 29 Dec 2009 14:34:37 -0500 Subject: [SciPy-User] modeling GARCH process In-Reply-To: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> Message-ID: <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote: > I would like to model a time series as a GARCH process (estimate the > parameters of a model and then generate random samples with these > parameters). ?There are numerous ways to do this in R, but I prefer > python if possible. ?I notice that Enthought teaches this in their > Finance for Quants lectures. ?Does anyone have some example code for > modeling and simulating GARCH processes in python? I would also like to know, I haven't seen any python GARCH so far. I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox (BSD license) into python, but it is slow going and I'm running out of Christmas break. The plan is to have a tested literal translation and then to numpify/scipify it. ETA: half a year. 
Josef > > Thanks, > JDH > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jdh2358 at gmail.com Tue Dec 29 14:55:11 2009 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 29 Dec 2009 13:55:11 -0600 Subject: [SciPy-User] modeling GARCH process In-Reply-To: <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> Message-ID: <88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com> On Tue, Dec 29, 2009 at 1:34 PM, wrote: > On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote: >> I would like to model a time series as a GARCH process (estimate the >> parameters of a model and then generate random samples with these >> parameters). ?There are numerous ways to do this in R, but I prefer >> python if possible. ?I notice that Enthought teaches this in their >> Finance for Quants lectures. ?Does anyone have some example code for >> modeling and simulating GARCH processes in python? > > I would also like to know, I haven't seen any python GARCH so far. > > I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox > (BSD license) into python, but it is slow going and I'm running out of > Christmas break. The plan is to have a tested literal translation and > then to numpify/scipify it. > > ETA: half a year. Wow, that would be fantastic, but I don't envy you :-) That looks like quite a task. This would be a great google summer of code project if you can get it far enough along that a dedicated student or two could finish, polish, document and test. 
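[For the simulation half of the question at least, a GARCH(1,1) generator is only a few lines of numpy (estimation is the hard part); a sketch with made-up parameters, not code from the MFE toolbox:]

```python
import numpy as np

def garch11_sim(omega, alpha, beta, nobs, seed=0):
    """Simulate GARCH(1,1): sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1]."""
    rng = np.random.RandomState(seed)
    z = rng.standard_normal(nobs)          # iid innovations
    eps = np.empty(nobs)
    sigma2 = np.empty(nobs)
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, nobs):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, sigma2 = garch11_sim(omega=0.1, alpha=0.1, beta=0.8, nobs=5000)
print(eps.var())  # near the unconditional variance omega/(1-alpha-beta) = 1.0
```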
JDH From jsseabold at gmail.com Tue Dec 29 15:04:10 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 29 Dec 2009 15:04:10 -0500 Subject: [SciPy-User] modeling GARCH process In-Reply-To: <88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com> References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> <88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com> Message-ID: On Tue, Dec 29, 2009 at 2:55 PM, John Hunter wrote: > On Tue, Dec 29, 2009 at 1:34 PM, ? wrote: >> On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote: >>> I would like to model a time series as a GARCH process (estimate the >>> parameters of a model and then generate random samples with these >>> parameters). ?There are numerous ways to do this in R, but I prefer >>> python if possible. ?I notice that Enthought teaches this in their >>> Finance for Quants lectures. ?Does anyone have some example code for >>> modeling and simulating GARCH processes in python? >> >> I would also like to know, I haven't seen any python GARCH so far. >> >> I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox >> (BSD license) into python, but it is slow going and I'm running out of >> Christmas break. The plan is to have a tested literal translation and >> then to numpify/scipify it. >> >> ETA: half a year. > Maybe faster for at least a rough pass, if you get some help with the time series stuff ;) > Wow, that would be fantastic, but I don't envy you :-) ?That looks > like quite a task. I'll also be helping out with this most likely in the coming months, as it looks like I'll be doing time series specific coursework. > > This would be a great google summer of code project if you can get it > far enough along that a dedicated student or two could finish, polish, > document and test. > Very much agree with this. 
There are more than few projects that could be good for interested students as far as stats and statistical modelling go. I know this has come up already, but do we have a wiki page for proposed GSoC 2010 projects? Better sooner than later probably. -Skipper From josef.pktd at gmail.com Tue Dec 29 15:34:39 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 29 Dec 2009 15:34:39 -0500 Subject: [SciPy-User] modeling GARCH process In-Reply-To: References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> <88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com> Message-ID: <1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com> On Tue, Dec 29, 2009 at 3:04 PM, Skipper Seabold wrote: > On Tue, Dec 29, 2009 at 2:55 PM, John Hunter wrote: >> On Tue, Dec 29, 2009 at 1:34 PM, ? wrote: >>> On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote: >>>> I would like to model a time series as a GARCH process (estimate the >>>> parameters of a model and then generate random samples with these >>>> parameters). ?There are numerous ways to do this in R, but I prefer >>>> python if possible. ?I notice that Enthought teaches this in their >>>> Finance for Quants lectures. ?Does anyone have some example code for >>>> modeling and simulating GARCH processes in python? >>> >>> I would also like to know, I haven't seen any python GARCH so far. >>> >>> I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox >>> (BSD license) into python, but it is slow going and I'm running out of >>> Christmas break. The plan is to have a tested literal translation and >>> then to numpify/scipify it. >>> >>> ETA: half a year. >> > > Maybe faster for at least a rough pass, if you get some help with the > time series stuff ;) > >> Wow, that would be fantastic, but I don't envy you :-) ?That looks >> like quite a task. 
> > I'll also be helping out with this most likely in the coming months, > as it looks like I'll be doing time series specific coursework. > >> >> This would be a great google summer of code project if you can get it >> far enough along that a dedicated student or two could finish, polish, >> document and test. >> > > Very much agree with this. ?There are more than few projects that > could be good for interested students as far as stats and statistical > modelling go. > > I know this has come up already, but do we have a wiki page for > proposed GSoC 2010 projects? ?Better sooner than later probably. Checking the license again for the MFE toolbox, it seems I was mistaken that MFE is BSD licensed. The previous version, UCSD Garch has BSD style license http://www.kevinsheppard.com/wiki/License but not MFE. So I might have to drop MFE and switch to UCSD Garch, which has many functions very similar but not as clean as the MFE version. 5 days of fun partially down the drain, oh well. Josef > -Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jsseabold at gmail.com Tue Dec 29 15:48:19 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 29 Dec 2009 15:48:19 -0500 Subject: [SciPy-User] modeling GARCH process In-Reply-To: <1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com> References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com> <1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com> <88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com> <1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com> Message-ID: On Tue, Dec 29, 2009 at 3:34 PM, wrote: > On Tue, Dec 29, 2009 at 3:04 PM, Skipper Seabold wrote: >> On Tue, Dec 29, 2009 at 2:55 PM, John Hunter wrote: >>> On Tue, Dec 29, 2009 at 1:34 PM, ? 
wrote: >>>> On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote: >>>>> I would like to model a time series as a GARCH process (estimate the >>>>> parameters of a model and then generate random samples with these >>>>> parameters). ?There are numerous ways to do this in R, but I prefer >>>>> python if possible. ?I notice that Enthought teaches this in their >>>>> Finance for Quants lectures. ?Does anyone have some example code for >>>>> modeling and simulating GARCH processes in python? >>>> >>>> I would also like to know, I haven't seen any python GARCH so far. >>>> >>>> I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox >>>> (BSD license) into python, but it is slow going and I'm running out of >>>> Christmas break. The plan is to have a tested literal translation and >>>> then to numpify/scipify it. >>>> >>>> ETA: half a year. >>> >> >> Maybe faster for at least a rough pass, if you get some help with the >> time series stuff ;) >> >>> Wow, that would be fantastic, but I don't envy you :-) ?That looks >>> like quite a task. >> >> I'll also be helping out with this most likely in the coming months, >> as it looks like I'll be doing time series specific coursework. >> >>> >>> This would be a great google summer of code project if you can get it >>> far enough along that a dedicated student or two could finish, polish, >>> document and test. >>> >> >> Very much agree with this. ?There are more than few projects that >> could be good for interested students as far as stats and statistical >> modelling go. >> >> I know this has come up already, but do we have a wiki page for >> proposed GSoC 2010 projects? ?Better sooner than later probably. > > Checking the license again for the MFE toolbox, it seems I was > mistaken that MFE is BSD licensed. The previous version, UCSD Garch > has BSD style license http://www.kevinsheppard.com/wiki/License but > not MFE. 
> So I might have to drop MFE and switch to UCSD Garch, which has many > functions very similar but not as clean as the MFE version. > > 5 days of fun partially down the drain, oh well. > Maybe he would make an expressed exception? And come to think of it, is this enough for inclusion under BSD-style? Though I understand the basics and the spirit behind the more popular licenses, a lot of the licensing stuff is beyond me wrt what it actually means, practically speaking. I definitely err on the side of caution though, and I've had pretty good luck getting expressed permission or licenses changed at least for the stuff I've wanted to use for statsmodels. Easier to write an email than start over... Skipper From timmichelsen at gmx-topmail.de Wed Dec 30 08:36:41 2009 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Wed, 30 Dec 2009 14:36:41 +0100 Subject: [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library In-Reply-To: <6c476c8a0912260744p649b6c82s1289fe434f7ace84@mail.gmail.com> References: <6c476c8a0912260744p649b6c82s1289fe434f7ace84@mail.gmail.com> Message-ID: Hello, thanks for the announcement. > * Date tools: objects for expressing date offsets or generating date > ranges; some functionality similar to scikits.timeseries Why do you create data structures similar to scikits.timeseries? Couldn't you reuse the functionality from scikits.timeseries? Could you see chances to design a interface between both packages? I have a lot of timeseries code. I would love to reuse that together with your package. 
Thanks in advance for clarifications,
Timmie

From wesmckinn at gmail.com Wed Dec 30 10:45:56 2009
From: wesmckinn at gmail.com (Wes McKinney)
Date: Wed, 30 Dec 2009 10:45:56 -0500
Subject: [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library
In-Reply-To: 
References: <6c476c8a0912260744p649b6c82s1289fe434f7ace84@mail.gmail.com>
Message-ID: <6c476c8a0912300745g7d7736a9hc4e84445a888c883@mail.gmail.com>

On Wed, Dec 30, 2009 at 8:36 AM, Tim Michelsen wrote:
> Hello,
> thanks for the announcement.
>
>> * Date tools: objects for expressing date offsets or generating date
>> ranges; some functionality similar to scikits.timeseries
> Why do you create data structures similar to scikits.timeseries?
> Couldn't you reuse the functionality from scikits.timeseries?

I think there are two relevant questions here:

- Why don't I use scikits.timeseries's Date and DateArray objects?

In pandas I wanted to stick with working with python datetime objects,
and I needed objects to encapsulate generic date shifts (like "add 5
business days"). The idea was to extend the dateutil.relativedelta
concept to handle business days, last business day of month, etc. Once
you've done that, generating date ranges is a fairly trivial (albeit
not super efficient) next step. The DateRange class is also a valid
Index for a Series or DataFrame object and requires no conversion (I
plan to write more about this in the docs when I get a chance).

- Why don't I use scikits.timeseries for the time series data itself?

I don't think you were asking this, but I have gotten this question
from others. We should probably have a broader discussion about
handling time series data, particularly given the recent datetime
dtype addition to NumPy. In any case, there are many reasons why I
didn't use it -- the main one is that I wanted to have a unified data
model (i.e. use the same basic class) for both time series and
cross-sectional data. The scikits.timeseries TimeSeries object behaves
too differently.
Here's one example: http://pytseries.sourceforge.net/core.timeseries.operations.html#binary-operations for adding two scikits.timeseries.TimeSeries " When the second input is another TimeSeries object, the two series must satisfy the following conditions: * they must have the same frequency; * they must be sorted in chronological order; * they must have matching dates; * they must have the same shape. " pandas does not know or care about the frequency, shape, or sortedness of the two TimeSeries. If the above conditions are met, it will bypass the "matching logic" and go at NumPy vectorized binary op speed. But if you break one of the above conditions, it will still match dates and produce a TimeSeries result. If you break the conditions above with scikits.timeseries, you will get a MaskedArray result and lose all of your date information (correct me if I'm wrong). pandas's TimeSeries-specific functionality could definitely be much improved, but I think the easier option for now would be to provide an interface between the two libraries, as you suggest: > Could you see chances to design a interface between both packages? > I have a lot of timeseries code. I would love to reuse that together > with your package. Designing a bridge interface between the two packages would probably be pretty easy and fairly desirable. If you could give me some examples of what you're doing in your time series code that would be helpful to know. 
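[The "add 5 business days" shift mentioned above can be sketched with plain datetime objects (a hypothetical helper, not pandas's actual DateOffset code):]

```python
from datetime import date, timedelta

def add_business_days(d, n):
    """Shift d forward by n business days, skipping Saturdays and Sundays."""
    one = timedelta(days=1)
    while n > 0:
        d += one
        if d.weekday() < 5:  # Monday=0 .. Friday=4 count as business days
            n -= 1
    return d

print(add_business_days(date(2009, 12, 25), 5))  # Friday + 5 -> 2010-01-01
```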
> Thanks in advance for calrifications, > Timmie > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed Dec 30 12:10:44 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 30 Dec 2009 12:10:44 -0500 Subject: [SciPy-User] Skellam, fft and characteristic functions Message-ID: <1cd32cbb0912300910wafae9d1p9d91f3db36169177@mail.gmail.com> This is just an example for some one-liners that it took me a long time to figure out (because I don't know enough about fft, and I still have one missing), related to the question I asked recently about fourier transforms. cfdiscrete_ex.txt are the results for the example file, cfdiscrete.py has the actual functions (and more examples and tests.) Essentially these are just some tools to work with (discrete) distributions. I used it to verify results for the Skellam distribution in the scipy trac ticket. 
Josef -------------- next part -------------- Using fast fourier transform to move between pmf/pdf and characteristic function part 1: discrete distribution ================================================================================ (part 2: continuous distribution and compound poisson in preparation) >>> import numpy as np >>> from scipy import stats >>> from cfdiscrete import * #the last command imports #__all__ = ['cf2cdfdiscrete', 'cf2cdfdiscretequad', 'cfn2pmf', # 'momentfromcfn', 'pmf2cf', 'Test_CFDiscrete', 'skellam'] # pmf2cf : pmf to characteristic function using fft # cf2pmf : characteristic function to pmf using fft # momentfromcfn : non-central moment from cf using fft differentiation # cf2cdfdiscrete : cf to cdf using numerical integration with sum # cf2cdfdiscretequad : cf to cdf using integrate.quad # no fft version of cf2cdf, I didn't manage to figure out the integration # cfn2pmf: characteristic function to probability mass function # ------------------------------------------------------------- # Example: Skellam from scipy ticket >>> def skellam_cf(w, mu1, mu2): ... poisscf = stats.poisson._cf ... return poisscf(w, mu1) * poisscf(-w, mu2) ... 
>>> mu1, mu2 = 10, 5 >>> k = np.arange(-5,20) >>> print skellam.pmf(k, mu1, mu2) [ 0.00324123 0.00633737 0.01155235 0.01960615 0.03094716 0.04540174 0.06189433 0.07842461 0.09241881 0.10139793 0.10371928 0.09907658 0.08854666 0.07418784 0.05839277 0.04326869 0.03024816 0.01999143 0.01251688 0.00743899 0.00420459 0.00226421 0.00116371 0.00057179 0.000269 ] >>> skpmf, k2 = cfn2pmf(lambda w: skellam_cf(w, mu1, mu2)) >>> print skpmf[k.astype(int)] [ 0.00324123 0.00633737 0.01155235 0.01960615 0.03094716 0.04540174 0.06189433 0.07842461 0.09241881 0.10139793 0.10371928 0.09907658 0.08854666 0.07418784 0.05839277 0.04326869 0.03024816 0.01999143 0.01251688 0.00743899 0.00420459 0.00226421 0.00116371 0.00057179 0.000269 ] >>> print skellam.pmf(k, mu1, mu2) - skpmf[k.astype(int)] [ 1.47451495e-17 2.08166817e-17 2.77555756e-17 4.16333634e-17 6.24500451e-17 7.63278329e-17 1.17961196e-16 1.38777878e-16 1.38777878e-16 1.66533454e-16 1.52655666e-16 1.24900090e-16 1.24900090e-16 6.93889390e-17 5.55111512e-17 1.38777878e-17 -3.46944695e-18 -1.73472348e-17 -1.73472348e-17 -2.08166817e-17 -1.82145965e-17 -2.34187669e-17 -1.97324795e-17 -2.03830008e-17 -1.68051337e-17] #Example 2: x+y-z , where x,y,z are Poisson distributed #^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #Similar to the story for Skellam but two teams against one, #(independent rvs only) >>> def poiss2versus1_cf(w, mu1, mu2, mu3): ... poisscf = stats.poisson._cf ... return poisscf(w, mu1) * poisscf(w, mu2) * poisscf(-w, mu3) ... >>> mu1a, mu2a, mu3a = 5, 5, 10 >>> cfn2v1 = lambda w: poiss2versus1_cf(w, mu1a, mu2a, mu3a) >>> sk2pmf, k2 = cfn2pmf(cfn2v1) >>> np.random.seed(9876345) >>> nobs = 1000 >>> rvs = stats.poisson.rvs(mu1a, size=nobs) + stats.poisson.rvs(mu2a, size=nobs) -\ ... stats.poisson.rvs(mu3a ... 
, size=nobs) >>> rvsmin = rvs.min() >>> freq = np.bincount(rvs-rvsmin) >>> print (rvs == rvsmin).sum() 1 >>> kn = np.arange(rvsmin,rvs.max()+1).astype(int) >>> print 'comparing sample frequency and theoretical frequency out of 1000' comparing sample frequency and theoretical frequency out of 1000 >>> print np.column_stack((kn, freq, sk2pmf[kn]*1000)).astype(int) [[-16 1 0] [-15 1 0] [-14 1 0] [-13 0 1] [-12 3 2] [-11 6 4] [-10 11 7] [ -9 5 11] [ -8 17 17] [ -7 22 25] [ -6 46 35] [ -5 43 47] [ -4 48 59] [ -3 83 71] [ -2 65 81] [ -1 93 87] [ 0 91 89] [ 1 91 87] [ 2 75 81] [ 3 66 71] [ 4 62 59] [ 5 60 47] [ 6 29 35] [ 7 42 25] [ 8 13 17] [ 9 14 11] [ 10 5 7] [ 11 5 4] [ 12 0 2] [ 13 2 1]] #moments of the 2 versus 1 Poisson # first seven uncenterd moments >>> mncn = [momentfromcfn(i, cfn2v1) for i in range(7)] >>> print mncn [1.0, 3.1143029770981705e-014, 20.000000000022936, 2.0039346238461752e-009, 1220.0000014978923, 0.0001034474399337382, 126020.08746095549] >>> print 'mean sample %f, theory %f' % (rvs.mean(), mncn[1]) mean sample 0.069000, theory 0.000000 >>> print 'var sample %f, theory %f' % (rvs.var(), mncn[2]-mncn[1]**2) var sample 20.088239, theory 20.000000 # cf2cdf: check accuracy # ---------------------- #Poisson >>> k = np.arange(50) >>> cdfp = stats.poisson.cdf(k,mu1) >>> cdfpiq = cf2cdfdiscretequad(k, lambda k:stats.poisson._cf(k,mu1)) >>> print np.max(np.abs(cdfp - cdfpiq)) 1.90958360236e-014 >>> cdfpi = cf2cdfdiscrete(k, lambda k:stats.poisson._cf(k,mu1)) >>> print np.max(np.abs(cdfp - cdfpi)) 2.02288774382e-012 #Skellam >>> k = np.arange(-20,60) >>> cdfs = skellam.cdf(k, mu1, mu2) >>> cdfsiq = cf2cdfdiscretequad(k, lambda k_:skellam._cf(k_,mu1, mu2)) >>> print np.max(np.abs(cdfs - cdfsiq)) 1.56812197777e-006 >>> cdfsi = cf2cdfdiscrete(k, lambda k_:skellam._cf(k_,mu1, mu2)) >>> print np.max(np.abs(cdfs - cdfsi)) 1.56812197588e-006 >>> # skellam error is 1.568e-006 but that's maybe a problem with skellam._cdf -------------- next part -------------- 
import numpy as np from scipy import stats from cfdiscrete import * #the last command imports #__all__ = ['cf2cdfdiscrete', 'cf2cdfdiscretequad', 'cfn2pmf', # 'momentfromcfn', 'pmf2cf', 'Test_CFDiscrete', 'skellam'] # pmf2cf : pmf to characteristic function using fft # cf2pmf : characteristic function to pmf using fft # momentfromcfn : non-central moment from cf using fft differentiation # cf2cdfdiscrete : cf to cdf using numerical integration with sum # cf2cdfdiscretequad : cf to cdf using integrate.quad # no fft version of cf2cdf, I didn't manage to figure out the integration # cfn2pmf: characteristic function to probability mass function # ------------------------------------------------------------- # Example: Skellam from scipy ticket def skellam_cf(w, mu1, mu2): # characteristic function poisscf = stats.poisson._cf return poisscf(w, mu1) * poisscf(-w, mu2) mu1, mu2 = 10, 5 k = np.arange(-5,20) #pmf of skellam (uses ncx2) and pmf by inversion of the characteristic function print skellam.pmf(k, mu1, mu2) skpmf, k2 = cfn2pmf(lambda w: skellam_cf(w, mu1, mu2)) #print skpmf[:10] print skpmf[k.astype(int)] #Error print skellam.pmf(k, mu1, mu2) - skpmf[k.astype(int)] #Example 2: x+y-z , where x,y,z are Poisson distributed #^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #Similar to the story for Skellam but two teams against one, (independent rvs only) def poiss2versus1_cf(w, mu1, mu2, mu3): # characteristic function poisscf = stats.poisson._cf return poisscf(w, mu1) * poisscf(w, mu2) * poisscf(-w, mu3) mu1a, mu2a, mu3a = 5, 5, 10 cfn2v1 = lambda w: poiss2versus1_cf(w, mu1a, mu2a, mu3a) sk2pmf, k2 = cfn2pmf(cfn2v1) np.random.seed(9876345) nobs = 1000 rvs = stats.poisson.rvs(mu1a, size=nobs) + stats.poisson.rvs(mu2a, size=nobs) -\ stats.poisson.rvs(mu3a , size=nobs) rvsmin = rvs.min() freq = np.bincount(rvs-rvsmin) #check that bincount is correct print (rvs == rvsmin).sum() kn = np.arange(rvsmin,rvs.max()+1).astype(int) print 'comparing sample frequency and 
theoretical frequency out of 1000' print np.column_stack((kn, freq, sk2pmf[kn]*1000)).astype(int) #moments of the 2 versus 1 Poisson # first seven uncenterd moments mncn = [momentfromcfn(i, cfn2v1) for i in range(7)] print mncn print 'mean sample %f, theory %f' % (rvs.mean(), mncn[1]) print 'var sample %f, theory %f' % (rvs.var(), mncn[2]-mncn[1]**2) # cf2cdf: check accuracy # ---------------------- k = np.arange(50) cdfp = stats.poisson.cdf(k,mu1) cdfpiq = cf2cdfdiscretequad(k, lambda k:stats.poisson._cf(k,mu1)) print np.max(np.abs(cdfp - cdfpiq)) cdfpi = cf2cdfdiscrete(k, lambda k:stats.poisson._cf(k,mu1)) print np.max(np.abs(cdfp - cdfpi)) k = np.arange(-20,60) cdfs = skellam.cdf(k, mu1, mu2) cdfsiq = cf2cdfdiscretequad(k, lambda k_:skellam._cf(k_,mu1, mu2)) print np.max(np.abs(cdfs - cdfsiq)) cdfsi = cf2cdfdiscrete(k, lambda k_:skellam._cf(k_,mu1, mu2)) print np.max(np.abs(cdfs - cdfsi)) # skellam error is 1.568e-006 but that's maybe a problem with skellam._cdf -------------- next part -------------- # -*- coding: utf-8 -*- """ Created on Sun Dec 13 11:24:06 2009 Author: josef-pktd mostly written from scratch (based on definitions and with some trial and error for fft arguments) cf2cdf is based on: Numerical Inversion of a Characteristic Function Author(s): R. B. Davies Source: Biometrika, Vol. 60, No. 2 (Aug., 1973), pp. 415-417 Published by: Biometrika Trust Stable URL: http://www.jstor.org/stable/2334555 Accessed: 13/11/2009 14:02 TODOs ----- * create frozen distribution factory, with characteristic function, npoints and bounds as inputs to constructor * check consistency of npoints, make it consistently a choice variable * convert arguments to use functions/distribution instead of arrays for cf,pmf,... * possible extension choose error bounds instead of npoints for precision * fit method, MLE or estimation based on empirical characteristic function(?) 
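[momentfromcfn above differentiates the characteristic function with the fft; the first moment can also be checked with a plain central difference at w=0, since E[X] = phi'(0)/i (a sketch using the Poisson cf from the example, not josef's fft version):]

```python
import numpy as np

def poisson_cf(w, mu):
    # characteristic function of Poisson(mu)
    return np.exp(mu * (np.exp(1j * w) - 1.0))

def mean_from_cf(cf, h=1e-6):
    """First moment from the cf: E[X] = phi'(0)/i, via a central difference."""
    d = (cf(h) - cf(-h)) / (2.0 * h)
    return (d / 1j).real

print(mean_from_cf(lambda w: poisson_cf(w, 10.0)))  # close to mu = 10
```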
for MLE: fast way of getting loglike from cf for different parameters * find characteristic functions for other discrete distributions * add probability generating function, pdf, to distributions * loc and scale (?): affine transformation of w in cf (check) * helper functions for linear combinations of random variables (weighted sum) (following example skellam) * connection to compound distribution (discrete mixture of distributions) * are there any other useful tricks with fft ? * try to get help with cf2cdf using fft * check cf2cdf integration limits (0,pi) versus (-pi, pi). Is real part always symmetric around zero? * clean up analogous version for continuous distributions * examples for sum of independent random variable try sum of log-normals for discrete: simple case X+Y-Z with each independently Poisson distributed (2 teams/players against 1 instead of 1on1 as in skellam """ import numpy as np import scipy from scipy import stats, integrate import scipy.fftpack as fft def maxabs(x,y): return np.max(np.abs(x-y)) def maxabsr(x,y): return np.max(np.abs(x.real-y.real)) def cfpoissmeth(self, w, mu): return np.exp(mu * (np.exp(1j * w) - 1.)) #patch poisson stats.distributions.poisson_gen._cf = cfpoissmeth # check #print stats.poisson._cf(np.linspace(-0.5,0.5,6)[:,None],[1,10]) def pmf2cf(pmf, k): '''convert probability mass point to characteristic function checked only for positive k so far use pmf function instead of pmf array Parameters ---------- pmf : array probabiltiy mass function k : array support points of random variable in array pmf required equidistant (? for fft) (integer, what if not integer values, e.g. 1.5?) 
Returns ------- cf : array characteristic function w : array points at which characteristic function is evaluated ''' npoints = len(k) w = k * 2.*np.pi/npoints cf = fft.fft(pmf).conj() #no idea why I need conj to reverse sign of imag return cf, w def cf2pmf(cfv, w): '''convert characteristic function to probability mass point return only real part of inversion, maybe not correct for all distributions? checked only for positive k so far, ok for Skellam use cf function instead of cf array Parameters ---------- cfv : array characteristic function w : array points at which characteristic function is evaluated Returns ------- pmf : array probabiltiy mass function k : array support points of random variable in array pmf not tested, what if there are negative integers ''' k = w / 2./np.pi * npoints pmf = fft.ifft(cfv.conj()).real #need conj to reverse sign of imag return pmf, k def cfn2pmf(cfn, npoints=2**9): '''convert characteristic function to probability mass point return only real part of inversion, maybe not correct for all distributions? 
checked only for positive k so far, ok for Skellam, but pmf for negative k are at end, recenter with fftshift use cf function instead of cf array Parameters ---------- cfn : function characteristic function npoints : integer number of points for which characteristic function is evaluated Returns ------- pmf : array probabiltiy mass function k : array support points of random variable in array pmf not tested, what if there are negative integers (those are at end) ''' kw = np.arange(npoints) w = kw * 2.*np.pi/npoints cfv = cfn(w) k = w / 2./np.pi * npoints pmf = fft.ifft(cfv.conj()).real #need conj to reverse sign of imag return pmf, k def cf2cdfdiscrete(k, cfn, npoints=2**9): '''inversion of characteristic function for cdf w needs to cover intervall (0, pi) uses simple fixed grid integration I didn't manage to figure out how to use fft for this Parameters ---------- k array support points of random variable for which cdf is calculated (need not be equidistant, not related to precision) cfn : function characteristic function, only one argument allowed, `cfn(w)` npoints : integer number of points at which characteristic function is evaluated, determines the precision of the numerical integration Returns ------- cdf : array cumulative distribution function for points of input k ''' if np.ndim(k) == 1: k = k[:,None] k = k + 1 #uses symmetrie of real part around 0 to integrate over (0,pi] delta = np.pi/(npoints-1) w = (np.arange(npoints)+0.5)*delta #print k.shape, w.shape cf = cfn(w) #print w.min(), w.max() cdf = 0.5-(cf/(1-np.exp(-1j*w))/2./np.pi * np.exp(-1j*w*k)).real.sum(1)*(w[1]-w[0])*2 return cdf def cf2cdfdiscretequad(k, cfn): '''cf -> cdf inversion using quad from equation in section 3 of Davis (1973) Parameters ---------- k : array support points of random variable for which cdf is calculated (need not be equidistant, not related to precision, calculations in loop) cfn : function characteristic function, only one argument allowed, `cfn(w)` Returns ------- cdf : 
array cumulative distribution function for points of input k ''' # def fnp(w): # return cfn(w)/(1-np.exp(-1j*w))/2./np.pi # # def fnintegA(m, A): # n = 1.*np.arange(len(A)) # return (A*np.exp(-1j*2*np.pi*n/len(n)*k)) def fn(w, k): w = max(w, 1e-10) A = cfn(w)/(1-np.exp(-1j*w))/2./np.pi A *= np.exp(-1j*w*k) return A.real cdfv = np.zeros(len(k)) for ii,kk in enumerate(k): #cdfv[ii] = 0.5-integrate.quad(fn, -np.pi, np.pi, args=(kk+1,))[0] cdfv[ii] = 0.5-2*integrate.quad(fn, 0., np.pi, args=(kk+1,))[0] return cdfv def momentfromcf(k, cf): '''calculates k-th non-central moment by differentiation of cf possible sign error Parameters ---------- k : integer determines which moment is calculated. cf : array characteristic function values used by fft Returns ------- non-central moment : float ''' return np.abs(fft.diff(cf, k))[0] # without abs: # # no idea why odd even distinction # if k%2: # odd # return fft.diff(cf, k).imag[0] # else: # return fft.diff(cf, k).real[0] def momentfromcfn(k, cfn, npoints = 2**9): '''calculates k-th non-central moment by differentiation of cf possible sign error Parameters ---------- k : array support points of random variable for which cdf is calculated (need not be equidistant, not related to precision, calculations in loop) cfn : function characteristic function, only one argument allowed, `cfn(w)` npoints : integer number of points at which characteristic function is evaluated, determines the precision of the moment calculation by fft Returns ------- non-central moment : float ''' kw = np.arange(npoints) w = kw * 2.*np.pi/npoints cfv = cfn(w) #print cfv[:10] return np.abs(fft.diff(cfv, k))[0] ######## from scipy track ticket, plus I added _cf, not the latest version poisson = scipy.stats.distributions.poisson ncx2 = scipy.stats.distributions.ncx2 # Skellam distribution class skellam_gen(scipy.stats.distributions.rv_discrete): def _rvs(self, mu1, mu2): n = self._size return np.random.poisson(mu1, n)-np.random.poisson(mu2, n) def _pmf(self, 
x, mu1, mu2): px = np.where(x < 0, ncx2.pdf(2*mu2, 2*(1-x), 2*mu1)*2, ncx2.pdf(2*mu1, 2*(x+1), 2*mu2)*2) return px def _cdf(self, x, mu1, mu2): x = np.floor(x) px = np.where(x < 0, ncx2.cdf(2*mu2, -2*x, 2*mu1), 1-ncx2.cdf(2*mu1, 2*(x+1), 2*mu2)) return px def _cf(self, w, mu1, mu2): # characteristic function poisscf = scipy.stats.distributions.poisson._cf return poisscf(w, mu1) * poisscf(-w, mu2) def _stats(self, mu1, mu2): mean = mu1 - mu2 var = mu1 + mu2 g1 = mean / np.sqrt((var)**3) g2 = 1 / var return mean, var, g1, g2 skellam = skellam_gen(a=-np.inf, name="skellam", longname='A Skellam', shapes="mu1,mu2", extradoc=""" Skellam distribution Probability distribution of the difference of two correlated or uncorrelated Poisson random variables. If k1 and k2 are two Poisson variables with expected values lam1 and lam2, then k1-k2 follows a Skellam distribution with parameters m1 = lam1 - rho*sqrt(lam1*lam2) and m2 = lam2 - rho*sqrt(lam1*lam2), rho being the correlation coefficient between k1 and k2. Parameters m1 and m2 must be strictly positive. 
For details see: http://en.wikipedia.org/wiki/Skellam_distribution """ ) ###### end ticket # and now for a test class class Test_CFDiscrete(object): # class for testing, currently prints results def __init__(self, dist, args): self.dist = dist self.args = args def test_cf(self): # this fails for Skellam, need fftshift lambd = 10 npoints = 2**8 k = np.arange(npoints) w = k * 2.*np.pi/npoints pmf = self.dist.pmf(k, *self.args) cfv = self.dist._cf(w, *self.args) k2 = np.arange(-npoints,npoints) w2 = k2 * 2.*np.pi/npoints pmf = self.dist.pmf(k2, *self.args) #print 'nan in skellam.pmf', np.isnan(pmf).sum() pmf[np.isnan(pmf)] = 0.0 #print 'check cdf for nans' #print np.isnan(self.dist.cdf(k2, lambd1, lambd2)).sum() #check error 5.95059410299e-015 print 'cf error' cfi = pmf2cf(fft.fftshift(pmf),k2)[0][::2] ncf = min(len(cfv), len(cfi)) # check shape mismatch print maxabs(cfv[:ncf], cfi[:ncf]) def test_pmf(self): npoints = 2**8 k2 = np.arange(-npoints,npoints) w2 = k2 * 2.*np.pi/npoints cfv = self.dist._cf(w, *self.args) print 'pmfi error' #Note: cf2pmf returns pmf of negative integers at end - use fftshift kcheck = np.arange(-20,60) print maxabs(fft.fftshift(pmf)[kcheck], cf2pmf(cfv, w2)[0][kcheck]) def test_cdf(self): d=0 #k = np.arange(d+0,d+10) k = np.arange(-20,60) #kcheck cdfi = cf2cdfdiscrete(k, lambda k_:self.dist._cf(k_,*self.args)) cdf = self.dist.cdf(k,*self.args) #print 'cdfi' #print cdfi print 'cdfi error' # 1.56812197671e-006 print maxabs(cdf, cdfi) #print cdfi-skellam.cdf(k,lambd1, lambd2) def test_cdfquad(self): d=0 k = np.arange(d+0,d+10) k = kcheck cdfi = cf2cdfdiscretequad(k, lambda k_:self.dist._cf(k_,*self.args)) cdf = self.dist.cdf(k,*self.args) #print 'cdfi' #print cdfi print 'cdfi quad error' # 1.56812197671e-006 print maxabs(cdf, cdfi) #print cdfi-skellam.cdf(k,lambd1, lambd2) def test_moment(self): npoints = 2**8 k = np.arange(npoints) w = k * 2.*np.pi/npoints cfv = self.dist._cf(w, *self.args) mean = momentfromcf(1, cfv) mnc2 = momentfromcf(2, 
cfv) m, v = self.dist.stats(*self.args) print 'mean', m, mean print 'var', v, mnc2 - mean**2 for mi in range(7): # error in scipy.stats for mi>6 mm = momentfromcf(mi, cfv) try: print mi, self.dist.moment(mi,*self.args), mm, mm-np.round(mm) except: # catch all possible problems with scipy.stats.distributions print 'exception in moment calculation', mi, mm #__all__ = [cf2cdfdiscrete, cf2cdfdiscretequad, cfn2pmf, # momentfromcfn, pmf2cf, Test_CFDiscrete, skellam] __all__ = ['cf2cdfdiscrete', 'cf2cdfdiscretequad', 'cfn2pmf', 'momentfromcfn', 'pmf2cf', 'Test_CFDiscrete', 'skellam'] if __name__ == '__main__': lambd = 10 npoints = 2**8 k = np.arange(npoints) w = k * 2.*np.pi/npoints pmf = stats.poisson.pmf(k,lambd) cfv = stats.poisson._cf(w, lambd) #check error 5.95059410299e-015 print 'cf error' print maxabs(cfv, pmf2cf(pmf,k)[0]) #cf2pmf #------ #check diff 5.96744875736e-016 print 'pmfi error' print maxabs(pmf, cf2pmf(cfv, w)[0]) #cf2cdfdiscrete #^^^^^^^^^^^^^^ d = 0 k2 = np.arange(d+0,d+10) cdfi = cf2cdfdiscrete(k2, lambda k:stats.poisson._cf(k,lambd)) cdf = stats.poisson.cdf(k2,lambd) print 'cdfi' print cdfi print 'cdfi error' print maxabs(cdf, cdfi) print cdfi-stats.poisson.cdf(k2,lambd) #cf2cdfdiscretequad #^^^^^^^^^^^^^^^^^^ d=0 k2 = np.arange(d+0,d+10) cdfi2 = cf2cdfdiscretequad(k2, lambda k:stats.poisson._cf(k,lambd)) cdf = stats.poisson.cdf(k2,lambd) print 'cdfi' print cdfi2 print 'cdfi2 error' print maxabs(cdf, cdfi2) #print cdfi2-stats.poisson.cdf(k2,lambd) #momentfromcf momentfromcfn #^^^^^^^^^^^^^^^^^^^^^^^^^^ #npoints = 2**8 #k = np.arange(npoints) #w = k * 2.*np.pi/npoints #pmf = stats.poisson.pmf(k,lambd) #cfv = stats.poisson._cf(w, lambd) mnc = [momentfromcf(i, cfv) for i in range(7)] cfnpoiss = lambda w: stats.poisson._cf(w, lambd) mncn = [momentfromcfn(i, cfnpoiss) for i in range(7)] for mi in range(7): # error in scipy.stats for mi>6 mm = momentfromcf(mi, cfv) print mi, stats.poisson.moment(mi,lambd), mm, mm-np.round(mm) examples = ['skellam'] if 
'skellam' in examples: lambd1 = 10 lambd2 = 5 kcheck = np.arange(-20,60) lambd = 10 npoints = 2**8 k = np.arange(npoints) w = k * 2.*np.pi/npoints pmf = skellam.pmf(k, lambd1, lambd2) cfv = skellam._cf(w, lambd1, lambd2) k2 = np.arange(-npoints,npoints) w2 = k2 * 2.*np.pi/npoints pmf = skellam.pmf(k2, lambd1, lambd2) print 'nan in skellam.pmf', np.isnan(pmf).sum() pmf[np.isnan(pmf)] = 0.0 print 'check cdf for nans' print np.isnan(skellam.cdf(k2, lambd1, lambd2)).sum() #check error 5.95059410299e-015 print 'cf error' #cfi = pmf2cf(pmf,k2)[0] cfi = pmf2cf(fft.fftshift(pmf),k2)[0][::2] # I don't know why [::2] ncf = min(len(cfv), len(cfi)) # check shape mismatch print maxabs(cfv[:ncf], cfi[:ncf]) print 'pmfi error' #Note: cf2pmf returns pmf of negative integers at end - use fftshift kcheck = np.arange(-20,60) print maxabs(fft.fftshift(pmf)[kcheck], cf2pmf(cfv, w2)[0][kcheck]) d=0 k = np.arange(d+0,d+10) k = kcheck cdfi = cf2cdfdiscrete(k, lambda k_:skellam._cf(k_,lambd1, lambd2)) cdf = skellam.cdf(k,lambd1, lambd2) print 'cdfi' print cdfi print 'cdfi error' # 1.56812197671e-006 print maxabs(cdf, cdfi) print cdfi-skellam.cdf(k,lambd1, lambd2) #using the testclass #^^^^^^^^^^^^^^^^^^^ print '\nTesting Skellam' td = Test_CFDiscrete(skellam, (lambd1, lambd2)) td.test_cf() td.test_pmf() td.test_cdf() # error is large 1.56812197671e-006, problem with skellam.cdf or cf2cdf ? 
        td.test_cdfquad()  # why is this almost identical to td.test_cdf()?
        td.test_moment()

        print '\nTesting Poisson using version including negative integers'
        td = Test_CFDiscrete(stats.poisson, (lambd1,))
        import time
        t0 = time.time()
        td.test_cf()
        t1 = time.time()
        print 'elapsed', t1 - t0
        td.test_pmf()
        t2 = time.time()
        print 'elapsed', t2 - t1
        td.test_cdf()
        t3 = time.time()
        print 'elapsed', t3 - t2
        td.test_cdfquad()
        t4 = time.time()
        print 'elapsed', t4 - t3
        td.test_moment()
        t5 = time.time()
        print 'elapsed', t5 - t4
        #Note: integrate.quad is very slow, up to 55 times the array sum
        #times with fft don't register (0.0) without repetition

From mattknox.ca at gmail.com  Wed Dec 30 18:03:07 2009
From: mattknox.ca at gmail.com (Matt Knox)
Date: Wed, 30 Dec 2009 23:03:07 +0000 (UTC)
Subject: [SciPy-User] [ANN] pandas 0.1, a new NumPy-based data analysis library
References: <6c476c8a0912260744p649b6c82s1289fe434f7ace84@mail.gmail.com>
	<6c476c8a0912300745g7d7736a9hc4e84445a888c883@mail.gmail.com>
Message-ID:

Wes McKinney gmail.com> writes:

> I don't think you were asking this, but I have gotten this question
> from others. We should probably have a broader discussion about
> handling time series data particularly given the recent datetime dtype
> addition to NumPy.

Agreed. I think once the numpy datetime dtype matures a bit, it would be
worthwhile to have a "meeting of the minds" on the future of time series
data in python in general. In the meantime, I think it is very healthy to
have some different approaches out in the wild (scikits.timeseries,
pandas, nipy timeseries) to allow people to flesh out ideas, see what
works, what doesn't, where there is overlap, etc. Hopefully we can then
unite the efforts and not end up with a confusing landscape of multiple
time series packages like R has. However, I think any specific
interoperability work between the packages is a bit premature at this
point, until the final vision is a bit clearer.
> for adding two scikits.timeseries.TimeSeries
> "
> When the second input is another TimeSeries object, the two series
> must satisfy the following conditions:
>
>    * they must have the same frequency;
>    * they must be sorted in chronological order;
>    * they must have matching dates;
>    * they must have the same shape.
> "
>
> pandas does not know or care about the frequency, shape, or sortedness
> of the two TimeSeries. If the above conditions are met, it will bypass
> the "matching logic" and go at NumPy vectorized binary op speed. But
> if you break one of the above conditions, it will still match dates
> and produce a TimeSeries result.

Believe it or not, what you just described is along the lines of how the
original scikits.timeseries prototype behaved. It drew inspiration from
the "FAME 4GL" time series language. FAME does all of the frequency /
shape matching implicitly. It was decided (by the two-person committee of
Pierre and I) that this behaviour felt a little too alien relative to the
standard numpy array objects, so we went back to the drawing board and
used a more conservative approach. That is to say, frequency conversion
and alignment must be done explicitly in the scikits.timeseries module.
In practice, I don't find this to be a burden and I like the extra
clarity in the code, but it really depends on what kind of problems you
are solving, and certainly personal preference and experience play a big
role.

At any rate, I'm looking forward to seeing how the pandas module evolves,
and hopefully we can collaborate at some point in the future.
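The implicit date-matching behaviour described above can be sketched in a few lines of plain Python. This is a toy illustration of the idea (match on dates when indexes differ, fill unmatched dates with a missing value), not pandas' or scikits.timeseries' actual implementation; the function name `aligned_add` is made up for the example:

```python
# Toy sketch of implicit date alignment for a binary op between two
# date-keyed "series"; unmatched dates yield a missing value (None here,
# NaN in a real implementation). Hypothetical helper, not a library API.

def aligned_add(ts1, ts2):
    """Add two dict-based series, matching on dates."""
    all_dates = sorted(set(ts1) | set(ts2))
    out = {}
    for d in all_dates:
        if d in ts1 and d in ts2:
            out[d] = ts1[d] + ts2[d]   # dates match: ordinary addition
        else:
            out[d] = None              # date missing in one series
    return out

a = {'2009-12-01': 1.0, '2009-12-02': 2.0}
b = {'2009-12-02': 10.0, '2009-12-03': 20.0}
print(aligned_add(a, b))
```

A real implementation would of course vectorize the matching (and, as described above, skip it entirely when both indexes are already identical).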
- Matt

From wesmckinn at gmail.com  Wed Dec 30 18:25:13 2009
From: wesmckinn at gmail.com (Wes McKinney)
Date: Wed, 30 Dec 2009 18:25:13 -0500
Subject: [SciPy-User] modeling GARCH process
In-Reply-To: <1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com>
References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com>
	<1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com>
	<88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com>
	<1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com>
Message-ID: <6c476c8a0912301525u19aa2b19n7c7c34101aee417@mail.gmail.com>

On Tue, Dec 29, 2009 at 3:48 PM, Skipper Seabold wrote:
> On Tue, Dec 29, 2009 at 3:34 PM,  wrote:
>> On Tue, Dec 29, 2009 at 3:04 PM, Skipper Seabold wrote:
>>> On Tue, Dec 29, 2009 at 2:55 PM, John Hunter wrote:
>>>> On Tue, Dec 29, 2009 at 1:34 PM,  wrote:
>>>>> On Tue, Dec 29, 2009 at 1:47 PM, John Hunter wrote:
>>>>>> I would like to model a time series as a GARCH process (estimate the
>>>>>> parameters of a model and then generate random samples with these
>>>>>> parameters). There are numerous ways to do this in R, but I prefer
>>>>>> python if possible. I notice that Enthought teaches this in their
>>>>>> Finance for Quants lectures. Does anyone have some example code for
>>>>>> modeling and simulating GARCH processes in python?
>>>>>
>>>>> I would also like to know, I haven't seen any python GARCH so far.
>>>>>
>>>>> I started to translate http://www.kevinsheppard.com/wiki/MFE_Toolbox
>>>>> (BSD license) into python, but it is slow going and I'm running out of
>>>>> Christmas break. The plan is to have a tested literal translation and
>>>>> then to numpify/scipify it.
>>>>>
>>>>> ETA: half a year.
>>>>
>>>
>>> Maybe faster for at least a rough pass, if you get some help with the
>>> time series stuff ;)
>>>
>>>> Wow, that would be fantastic, but I don't envy you :-)  That looks
>>>> like quite a task.
>>>
>>> I'll also be helping out with this most likely in the coming months,
>>> as it looks like I'll be doing time series specific coursework.
>>>
>>>>
>>>> This would be a great google summer of code project if you can get it
>>>> far enough along that a dedicated student or two could finish, polish,
>>>> document and test.
>>>>
>>>
>>> Very much agree with this. There are more than a few projects that
>>> could be good for interested students as far as stats and statistical
>>> modelling go.
>>>
>>> I know this has come up already, but do we have a wiki page for
>>> proposed GSoC 2010 projects? Better sooner than later, probably.
>>
>> Checking the license again for the MFE toolbox, it seems I was
>> mistaken that MFE is BSD licensed. The previous version, UCSD Garch,
>> has a BSD-style license http://www.kevinsheppard.com/wiki/License but
>> not MFE.
>> So I might have to drop MFE and switch to UCSD Garch, which has many
>> functions very similar but not as clean as the MFE version.
>>
>> 5 days of fun partially down the drain, oh well.
>>
>
> Maybe he would make an express exception? And come to think of it,
> is this enough for inclusion under BSD-style? Though I understand the
> basics and the spirit behind the more popular licenses, a lot of the
> licensing stuff is beyond me wrt what it actually means, practically
> speaking. I definitely err on the side of caution though, and I've
> had pretty good luck getting express permission or licenses changed,
> at least for the stuff I've wanted to use for statsmodels. Easier to
> write an email than start over...
>
> Skipper
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

+1, I would be interested in contributing to the porting effort in the
case that the author agrees to a BSD-style license.
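[For the simulation half of the original question, a minimal GARCH(1,1)
generator is only a few lines of numpy. This is a sketch with made-up
parameter values, not code from the MFE/UCSD toolboxes, and the function
name `garch11_sim` is invented for the example:]

```python
import numpy as np

def garch11_sim(omega, alpha, beta, nobs, seed=0):
    # Simulate a GARCH(1,1) process:
    #   y_t       = sigma_t * z_t,  z_t ~ N(0, 1)
    #   sigma_t^2 = omega + alpha * y_{t-1}^2 + beta * sigma_{t-1}^2
    # Requires alpha + beta < 1 for covariance stationarity.
    rng = np.random.RandomState(seed)
    z = rng.standard_normal(nobs)
    y = np.zeros(nobs)
    sigma2 = np.zeros(nobs)
    sigma2[0] = omega / (1. - alpha - beta)  # start at unconditional variance
    y[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, nobs):
        sigma2[t] = omega + alpha * y[t - 1]**2 + beta * sigma2[t - 1]
        y[t] = np.sqrt(sigma2[t]) * z[t]
    return y, sigma2

y, s2 = garch11_sim(0.1, 0.1, 0.8, 5000)
# the unconditional variance is omega/(1-alpha-beta) = 1.0 here,
# so the sample variance of y should be in that neighbourhood
print(y.var())
```

Estimation is the harder half (maximizing the likelihood implied by the
recursion above), which is what the toolbox translation would cover.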
From njs at pobox.com  Wed Dec 30 20:26:59 2009
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 30 Dec 2009 17:26:59 -0800
Subject: [SciPy-User] modeling GARCH process
In-Reply-To: <1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com>
References: <88e473830912291047o7cf3d5d3uebf62811d8f137de@mail.gmail.com>
	<1cd32cbb0912291134k35ffa84aracdef71b8efb384f@mail.gmail.com>
	<88e473830912291155i513fc8f7g7efa7323cfd603b4@mail.gmail.com>
	<1cd32cbb0912291234k7eb8175j2b72affb8ec3be8f@mail.gmail.com>
Message-ID: <961fa2b40912301726x62db8433l2868700133ab6232@mail.gmail.com>

On Tue, Dec 29, 2009 at 12:34 PM,  wrote:
> Checking the license again for the MFE toolbox, it seems I was
> mistaken that MFE is BSD licensed. The previous version, UCSD Garch
> has BSD style license http://www.kevinsheppard.com/wiki/License but

That license (for UCSD Garch) has an advertising clause; it's not GPL
compatible.

-- Nathaniel

From gael.varoquaux at normalesup.org  Sun Dec 13 18:47:46 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 13 Dec 2009 23:47:46 -0000
Subject: [SciPy-User] [Ann] EuroScipy 2010
Message-ID: <20091213234742.GA27356@phare.normalesup.org>

==========================
Announcing EuroScipy 2010
==========================

---------------------------------------------------
The 3rd European meeting on Python in Science
---------------------------------------------------

**Paris, Ecole Normale Supérieure, July 8-11 2010**

We are happy to announce the 3rd EuroScipy meeting, in Paris, July 2010.

The EuroSciPy meeting is a cross-disciplinary gathering focused on the
use and development of the Python language in scientific research. This
event strives to bring together both users and developers of scientific
tools, as well as academic research and state-of-the-art industry.
Important dates
===============

====================================== ===================================
**Registration opens**                 Sunday March 29
**Paper submission deadline**          Sunday May 9
**Program announced**                  Sunday May 22
**Tutorials tracks**                   Thursday July 8 - Friday July 9
**Conference track**                   Saturday July 10 - Sunday July 11
====================================== ===================================

Tutorial
=========

There will be two tutorial tracks at the conference: an introductory one,
to bring you up to speed with the Python language as a scientific tool,
and an advanced track, during which experts in the field will lecture on
specific advanced topics such as advanced use of numpy, scientific
visualization, software engineering...

Main conference topics
======================

We will be soliciting talks on the following topics:

- Presentations of scientific tools and libraries using the Python
  language, including but not limited to:

  - Vector and array manipulation
  - Parallel computing
  - Scientific visualization
  - Scientific data flow and persistence
  - Algorithms implemented or exposed in Python
  - Web applications and portals for science and engineering

- Reports on the use of Python in scientific achievements or ongoing
  projects.

- General-purpose Python tools that can be of special interest to the
  scientific community.

Keynote Speaker: Hans Petter Langtangen
=======================================

We are excited to welcome Hans Petter Langtangen as our keynote speaker.

- Director of scientific computing and bio-medical research at Simula
  labs, Oslo
- Author of the famous book "Python Scripting for Computational Science"
  http://www.springer.com/math/cse/book/978-3-540-73915-9

--
Gaël Varoquaux, conference co-chair
Nicolas Chauvat, conference co-chair

Program committee
.................
Romain Brette (ENS Paris, DEC)
Mike Müller (Python Academy)
Christophe Pradal (CIRAD/INRIA, DigiPlantes team)
Pierre Raybault (CEA, DAM)
Jarrod Millman (UC Berkeley, Helen Wills Neuroscience Institute)