From prabhu at aero.iitb.ac.in  Sat Dec 1 11:48:05 2007
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Sat, 01 Dec 2007 22:18:05 +0530
Subject: [SciPy-user] surface plotting
In-Reply-To: <1196452829.6047.387.camel@glup.physics.ucf.edu>
References: <1196452829.6047.387.camel@glup.physics.ucf.edu>
Message-ID: <47519045.4040800@aero.iitb.ac.in>

Joe Harrington wrote:
> Well, I was hoping for a lighter solution, but I gave it a one-hour try.
> It didn't help that when I did, the link to the user docs was broken
> (it's fixed now). I got as far as installing mayavi2 and trying to
> figure it out without the doc, but ran out of time and wasn't able to
> figure out surface plotting of my existing numpy array. This was for use
> in a class, and I decided that if I couldn't figure out how to do it in
> an hour I probably couldn't expect the students to do it at all.

Sorry about the mess with installation instructions etc. I'm trying to
get some help regarding this, but I've been too busy otherwise to do much
about it -- end of semester here. I'm hoping to get something done about
it over the next couple of weeks.

One of the easiest ways to get things installed without eggs right now is
to first install the dependencies -- wxPython 2.6, python-vtk and numpy --
and then grab the tarballs here:

http://code.enthought.com/downloads/source/ets2.6/

Untar them, then build and install each one:

  cd enthought.traits
  ./build.sh
  vi/emacs install.sh   (edit it as needed)
  ./install.sh

Do this for all the tarballs and you should have mayavi installed. The
script is broken in that it does not install the mayavi executable,
though -- that needs fixing.

> It would be good to have a cookbook recipe about how to do surface
> plotting of a 2D numpy array in mayavi2.

mayavi.tools.mlab.surf should help here.

> When will mayavi2 be available as a deb file? This looks like software
> that everyone should be able to get in the easiest fashion possible.
Ondrej Certik has been very helpful in making preliminary debs that we
hope will soon get into Debian. You can find older versions here:

http://debian.certik.cz/

I sent Ondrej patches to fix some of the issues but don't see new
versions yet. Ondrej did say he was busy...

cheers,
prabhu

From prabhu at aero.iitb.ac.in  Sat Dec 1 11:54:42 2007
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Sat, 01 Dec 2007 22:24:42 +0530
Subject: [SciPy-user] surface plotting
In-Reply-To: <47506E9A.4090705@gmail.com>
References: <1196452829.6047.387.camel@glup.physics.ucf.edu>
	<47506E9A.4090705@gmail.com>
Message-ID: <475191D2.3050305@aero.iitb.ac.in>

fred wrote:
> Joe Harrington a écrit :
>> It would be good to have a cookbook recipe about how to do surface
>> plotting of a 2D numpy array in mayavi2.
>>
> http://www.scipy.org/Cookbook/MayaVi/Surf
> http://www.scipy.org/Cookbook/MayaVi/mlab

Unfortunately, these still use tvtk's mlab, which is lower level and not
as powerful as mayavi's new mlab. The best docs for that are here:

https://svn.enthought.com/enthought/attachment/wiki/MayaVi/mlab.pdf

cheers,
prabhu

From lbolla at gmail.com  Sat Dec 1 14:46:13 2007
From: lbolla at gmail.com (lorenzo bolla)
Date: Sat, 1 Dec 2007 20:46:13 +0100
Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux
In-Reply-To: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com>
References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com>
Message-ID: <80c99e790712011146l1db7827dje3af20c5f01b635f@mail.gmail.com>

I had the same problem on Linux. But on Linux there are no .pyd files. I
noticed that there were two shared libraries: sparsetools.so and
_sparsetools.so. The first one was much smaller than the second, so I
moved it to a backup directory. Then everything worked fine.

hth,
L.

On Nov 18, 2007 11:50 AM, youngsu park wrote:

> Hi. I'm Youngsu Park, at POSTECH in South Korea.
>
> I had the same problem when importing scipy, like:
>
> from scipy import *
>
> .
>
> So I looked at the scipy/sparse folder,
> "C:\Python25\Lib\site-packages\scipy\sparse".
>
> To find the reason for the error, I opened the file
> "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py".
>
> From lines 21 to 26, there was an import statement for "sparsetools":
>
> from scipy.sparse.sparsetools import cscmux, csrmux, \
>     cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \
>     densetocsr, csrtodense, \
>     csrmucsr, cscmucsc, \
>     csr_plus_csr, csc_plus_csc, csr_minus_csr, csc_minus_csc, \
>     csr_elmul_csr, csc_elmul_csc, csr_eldiv_csr, csc_eldiv_csc
>
> I changed the working directory to
> "C:\Python25\Lib\site-packages\scipy\sparse", imported sparsetools and
> used the dir function to see the members of sparsetools:
>
> In [16]: import sparsetools
>
> In [17]: dir(sparsetools)
> Out[17]:
> ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc',
> 'ccscadd', 'ccscextract', 'ccscgetel', 'ccscmucsc', 'ccscmucsr',
> 'ccscmul', 'ccscmux', 'ccscsetel', 'ccsctocoo', 'ccsctofull',
> 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', 'ctransp',
> 'dcootocsc', 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc',
> 'dcscmucsr', 'dcscmul', 'dcscmux', 'dcscsetel', 'dcsctocoo',
> 'dcsctofull', 'dcsrmucsc', 'dcsrmux', 'ddiatocsc', 'dfulltocsc',
> 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel',
> 'scscmucsc', 'scscmucsr', 'scscmul', 'scscmux', 'scscsetel',
> 'scsctocoo', 'scsctofull', 'scsrmucsc', 'scsrmux', 'sdiatocsc',
> 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract',
> 'zcscgetel', 'zcscmucsc', 'zcscmucsr', 'zcscmul', 'zcscmux',
> 'zcscsetel', 'zcsctocoo', 'zcsctofull', 'zcsrmucsc', 'zcsrmux',
> 'zdiatocsc', 'zfulltocsc', 'ztransp']
>
> But these are not the members of sparsetools.py.
> I think they are the members of sparsetools.pyd.
>
> Move sparsetools.pyd to some backup directory.
> Then it will work well.
>
> In [21]: from scipy import *
>
> In [22]:
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wizzard028wise at gmail.com  Sat Dec 1 15:37:42 2007
From: wizzard028wise at gmail.com (Dorian)
Date: Sat, 1 Dec 2007 21:37:42 +0100
Subject: [SciPy-user] Algorithm of Viterbi
Message-ID: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com>

Hi all,

Are there any tools for the Viterbi algorithm in scipy?
Any link to a clear tutorial would be very helpful.

Thanks in advance

From emanuelez at gmail.com  Sat Dec 1 15:57:29 2007
From: emanuelez at gmail.com (Emanuele Zattin)
Date: Sat, 1 Dec 2007 21:57:29 +0100
Subject: [SciPy-user] Algorithm of Viterbi
In-Reply-To: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com>
References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com>
Message-ID:

Well... it seems that Wikipedia has some Python code about it. It
shouldn't be too difficult to modify it to use NumPy.

Just my 0.02$

On Dec 1, 2007 9:37 PM, Dorian wrote:
> Hi all,
>
> Are there any tools for the Viterbi algorithm in scipy?
> Any link to a clear tutorial would be very helpful.
>
> Thanks in advance

-- 
Emanuele Zattin
---------------------------------------------------
-I don't have to know an answer. I don't feel frightened by not knowing
things; by being lost in a mysterious universe without any purpose –
which is the way it really is, as far as I can tell, possibly.
It doesn't frighten me.- Richard Feynman

From jelleferinga at gmail.com  Sat Dec 1 17:27:22 2007
From: jelleferinga at gmail.com (jelle)
Date: Sat, 1 Dec 2007 22:27:22 +0000 (UTC)
Subject: [SciPy-user] Algorithm of Viterbi
References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com>
Message-ID:

Check out this article, it should be the thing you're looking for:

http://www.biais.org/blog/index.php/?q=viterbi

From wizzard028wise at gmail.com  Sat Dec 1 18:41:21 2007
From: wizzard028wise at gmail.com (Dorian)
Date: Sun, 2 Dec 2007 00:41:21 +0100
Subject: [SciPy-user] Algorithm of Viterbi
In-Reply-To:
References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com>
Message-ID: <674a602a0712011541l4e17d87aodeecae99f001c563@mail.gmail.com>

Thank you very much

On 01/12/2007, jelle wrote:
>
> Check out this article, it should be the thing you're looking for:
>
> http://www.biais.org/blog/index.php/?q=viterbi

From lorenzo.isella at gmail.com  Sun Dec 2 10:24:50 2007
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Sun, 02 Dec 2007 16:24:50 +0100
Subject: [SciPy-user] surface plotting
In-Reply-To:
References:
Message-ID: <4752CE42.60406@gmail.com>

Dear All,
I refer to the text quoted below. I am interested in data representation
and 3D plots in particular (something matplotlib is not very suitable
for).

> One of the easiest ways to get things installed without eggs right now
> is to first install the dependencies -- wxPython 2.6, python-vtk and
> numpy and then grab the tarballs here:
>
> http://code.enthought.com/downloads/source/ets2.6/
>
> Untar them and build them each
>   cd enthought.traits
>   ./build.sh
>   vi/emacs install.sh
>   ./install.sh
> for all the tarballs and you should have mayavi installed.
> The script is broken in that it does not install the mayavi executable
> though -- need to fix that.

In case it matters, I am running Debian testing on my box. I first tried
using eggs (with little success) and then the tarballs + ./build.sh +
./install.sh.

Now I am having 2 problems:

(1) If I try to re-run some old codes of mine (relying on SciPy), I get
tons of these warnings:

/usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/linalg/__init__.py:32:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/optimize/__init__.py:17:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/ndimage/__init__.py:40:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/sparse/__init__.py:9:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/io/__init__.py:20:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/lib/__init__.py:5:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/linsolve/umfpack/__init__.py:7:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/linsolve/__init__.py:13:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/interpolate/__init__.py:15:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/special/__init__.py:22:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/stats/__init__.py:15:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/fftpack/__init__.py:21:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/integrate/__init__.py:16:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/signal/__init__.py:17:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test
/usr/lib/python2.4/site-packages/scipy/maxentropy/__init__.py:12:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test

Basically, ScipyTest is now, in some sense, NumpyTest. Other than that,
the codes seem to run fine, but I hope I have not harmed my system. Does
it have anything to do with the different versions of Python I have
installed? Should I be worried about ending up with a broken Python?
That would be a disaster for me. What is giving rise to this new warning?
Any suggestions here are welcome.
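[As an editorial aside: the warnings quoted above are emitted once per scipy subpackage on import, and Python's standard warnings module can silence exactly this message. A minimal sketch; the filter pattern below is simply copied from the quoted warning text, and the catch_warnings demonstration needs Python 2.6+ (on the thread's 2.4 setup only the plain filterwarnings call applies):]

```python
import warnings

# The "message" argument is a regex matched against the start of the
# warning text, so this silences only the ScipyTest rename notice.
warnings.filterwarnings(
    "ignore",
    message="ScipyTest is now called NumpyTest",
    category=DeprecationWarning,
)

# Demonstrate that a matching warning is now swallowed:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")            # re-enable everything...
    warnings.filterwarnings(                   # ...then re-apply the filter
        "ignore",
        message="ScipyTest is now called NumpyTest",
        category=DeprecationWarning,
    )
    warnings.warn(
        "ScipyTest is now called NumpyTest; please update your code",
        DeprecationWarning,
    )
    assert not caught                          # the warning was dropped
```

[Note that warnings.filterwarnings prepends its filter, so it takes precedence over the broader simplefilter("always") set just before it.]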
(2) This is less critical to me: when I try running the code snippet on
http://www.scipy.org/Cookbook/MayaVi/mlab

import scipy

# prepare some interesting function:
def f(x, y):
    return 3.0*scipy.sin(x*y+1e-4)/(x*y+1e-4)

x = scipy.arange(-7., 7.05, 0.1)
y = scipy.arange(-5., 5.05, 0.1)

# 3D visualization of f:
from enthought.tvtk.tools import mlab
fig = mlab.figure()
s = mlab.SurfRegular(x, y, f)
fig.add(s)

this is what I get:

/usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test

(python:13574): Gtk-CRITICAL **: gtk_widget_set_colormap: assertion
`!GTK_WIDGET_REALIZED (widget)' failed

and no window opens. Any idea of what is going on?

BTW, I actually tried replacing ScipyTest with NumpyTest in
/usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25, but then the
code crashed badly (can give more info if needed). So, one error and the
same warning as before.

The other example I want to discuss is
(http://www.scipy.org/Cookbook/MayaVi/Surf):

import numpy

def f(x, y):
    return numpy.sin(x*y)/(x*y)

x = numpy.arange(-7., 7.05, 0.1)
y = numpy.arange(-5., 5.05, 0.05)

from enthought.tvtk.tools import mlab
s = mlab.SurfRegular(x, y, f)

from enthought.mayavi.sources.vtk_data_source import VTKDataSource
d = VTKDataSource()
d.data = s.data
mayavi.add_source(d)

from enthought.mayavi.filters.warp_scalar import WarpScalar
w = WarpScalar()
mayavi.add_filter(w)

from enthought.mayavi.modules.outline import Outline
from enthought.mayavi.modules.surface import Surface
o = Outline()
s = Surface()
mayavi.add_module(o)
mayavi.add_module(s)

but again, this is what I get:

/usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  test = ScipyTest().test

So far so good

Traceback (most recent call last):
  File "3D-enthought.py", line 13, in ?
    mayavi.add_source(d)
NameError: name 'mayavi' is not defined

So, what is going on here? Mayavi is actually installed on my system...

Sorry for the long post. My main interest is point (1), i.e. to be sure
that my python installation is still OK (and to know what to do in case
it is not). As to point (2), at the end of the day, these modules will
make it into Debian packages, won't they? So at some point installing
them should be straightforward (at least for me; I am not very good at
handling source and installing things myself).

Many thanks

Lorenzo

From gael.varoquaux at normalesup.org  Sun Dec 2 11:40:17 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Dec 2007 17:40:17 +0100
Subject: [SciPy-user] surface plotting
In-Reply-To: <4752CE42.60406@gmail.com>
References: <4752CE42.60406@gmail.com>
Message-ID: <20071202164017.GE15280@clipper.ens.fr>

On Sun, Dec 02, 2007 at 04:24:50PM +0100, Lorenzo Isella wrote:
> I refer to the text quoted below. I am interested in data representation
> and 3D plots in particular (something matplotlib is not very suitable for).

Sure. Tell us a bit more about your use case. Do you want to produce
figures for print/publication, do you want to interact with your data, or
do you simply want to do some 3D plotting in a program of yours? You can
do all this with Mayavi, but the solutions will depend a bit on your use
case.
> The other example I want to discuss is
> (http://www.scipy.org/Cookbook/MayaVi/Surf)
>
> import numpy
> def f(x, y):
>     return numpy.sin(x*y)/(x*y)
> x = numpy.arange(-7., 7.05, 0.1)
> y = numpy.arange(-5., 5.05, 0.05)
> from enthought.tvtk.tools import mlab
> s = mlab.SurfRegular(x, y, f)
> from enthought.mayavi.sources.vtk_data_source import VTKDataSource
> d = VTKDataSource()
> d.data = s.data
> mayavi.add_source(d)
> from enthought.mayavi.filters.warp_scalar import WarpScalar
> w = WarpScalar()
> mayavi.add_filter(w)
> from enthought.mayavi.modules.outline import Outline
> from enthought.mayavi.modules.surface import Surface
> o = Outline()
> s = Surface()
> mayavi.add_module(o)
> mayavi.add_module(s)
>
> but again, this is what I get:
>
> /usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25:
> DeprecationWarning: ScipyTest is now called NumpyTest; please update
> your code
>   test = ScipyTest().test
>
> So far so good
>
> Traceback (most recent call last):
>   File "3D-enthought.py", line 13, in ?
>     mayavi.add_source(d)
> NameError: name 'mayavi' is not defined
>
> So, what is going on here? Mayavi is actually installed on my system...

This code has to be run through mayavi. It is suited only for interactive
visualisation or producing figures, and not for running as part of a
Python program. You have to run it in mayavi, either by running
"mayavi2 -n -x script.py", by loading it through the menu (File -> Open
File) and pressing Ctrl+R, or by entering "execfile('script.py')" in the
python shell.

> Sorry for the long post.

No problem, that's what mailing lists are for.

> My main interest is point (1), i.e. to be sure that my python
> installation is still OK (and know what to do in case it is not).

I think it is OK. What happened is that scipy got updated, and these are
just harmless, albeit annoying, warnings.

> As to point (2), at the end of the day, these modules will make
> it into debian packages, won't they?

Yes, we are working on them.
I had to finish my PhD thesis, so I was lagging behind on my work on
packaging and co. I am mostly done with that (!!), so things will start
to move a bit faster soon.

> So at some point installing them should be straightforward (at least
> for me; I am not very good at handling source and installing myself).

Yes, we are working on that. We know it is very important and will focus
effort on ease of install.

Cheers,

Gaël

From gael.varoquaux at normalesup.org  Sun Dec 2 11:49:44 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Dec 2007 17:49:44 +0100
Subject: [SciPy-user] surface plotting
In-Reply-To: <475191D2.3050305@aero.iitb.ac.in>
References: <1196452829.6047.387.camel@glup.physics.ucf.edu>
	<47506E9A.4090705@gmail.com> <475191D2.3050305@aero.iitb.ac.in>
Message-ID: <20071202164944.GF15280@clipper.ens.fr>

On Sat, Dec 01, 2007 at 10:24:42PM +0530, Prabhu Ramachandran wrote:
> Unfortunately, these still use tvtk's mlab which is lower level and not
> as powerful as mayavi's new mlab.

> The best docs for that are here:
> https://svn.enthought.com/enthought/attachment/wiki/MayaVi/mlab.pdf

Quickly, you can get a feeling for how to plot a surface using mlab by
following these steps:

run "ipython -wthread"

execute the following python code:

+++++++++++++++++++++++++++++++++++++++
In [1]: from enthought.mayavi.tools import mlab as M
Set Envisage to use the workbench UI: True
Set Envisage to use the workbench UI: True

In [2]: from scipy import *

In [3]: X, Y = mgrid[-1:1:50j, -1:1:50j]

In [4]: Z = cos(4*(X**2+Y**2))

In [5]: M.surf(X, Y, Z)
+++++++++++++++++++++++++++++++++++++++

This will bring up a Mayavi2 window with the surface plotted. The mlab
API is similar to matplotlib's pylab, so you should feel quite familiar
with it. It is still young and I haven't had time to put as much work
into it as I would like.
Documentation is lacking, but the docstrings are quite good. Just reading
"help(M)" in the example above should hopefully give you an idea of the
possibilities. The docs pointed to by Prabhu should also be helpful.

HTH,

Gaël

From william.ratcliff at gmail.com  Sun Dec 2 12:56:47 2007
From: william.ratcliff at gmail.com (william ratcliff)
Date: Sun, 2 Dec 2007 12:56:47 -0500
Subject: [SciPy-user] surface plotting
In-Reply-To: <20071202164944.GF15280@clipper.ens.fr>
References: <1196452829.6047.387.camel@glup.physics.ucf.edu>
	<47506E9A.4090705@gmail.com> <475191D2.3050305@aero.iitb.ac.in>
	<20071202164944.GF15280@clipper.ens.fr>
Message-ID: <827183970712020956j4652119m658d2901abcac5ee@mail.gmail.com>

Can Mayavi2 be called from within a wxpython application for 3D plotting?
At one point I did this with tvtk (using pyface, etc.), but not with
mayavi. How different is the mlab within tvtk from that in mayavi2?

Thanks,
William

On Dec 2, 2007 11:49 AM, Gael Varoquaux wrote:
> On Sat, Dec 01, 2007 at 10:24:42PM +0530, Prabhu Ramachandran wrote:
> > Unfortunately, these still use tvtk's mlab which is lower level and not
> > as powerful as mayavi's new mlab.
>
> > The best docs for that are here:
> > https://svn.enthought.com/enthought/attachment/wiki/MayaVi/mlab.pdf
>
> Quickly, you can get a feeling for how to plot a surface using mlab by
> following these steps:
>
> run "ipython -wthread"
>
> execute the following python code:
>
> +++++++++++++++++++++++++++++++++++++++
> In [1]: from enthought.mayavi.tools import mlab as M
> Set Envisage to use the workbench UI: True
> Set Envisage to use the workbench UI: True
>
> In [2]: from scipy import *
>
> In [3]: X, Y = mgrid[-1:1:50j, -1:1:50j]
>
> In [4]: Z = cos(4*(X**2+Y**2))
>
> In [5]: M.surf(X, Y, Z)
> +++++++++++++++++++++++++++++++++++++++
>
> This will bring up a Mayavi2 window with the surface plotted. The mlab
> API is similar to matplotlib's pylab, so you should feel quite familiar
> with it. It is still young and I haven't had time to put as much work
> into it as I would like.
> Documentation is lacking, but the docstrings are quite good. Just
> reading "help(M)" in the example above should hopefully give you an
> idea of the possibilities. The docs pointed to by Prabhu should also
> be helpful.
>
> HTH,
>
> Gaël

From gael.varoquaux at normalesup.org  Sun Dec 2 13:07:48 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Dec 2007 19:07:48 +0100
Subject: [SciPy-user] surface plotting
In-Reply-To: <827183970712020956j4652119m658d2901abcac5ee@mail.gmail.com>
References: <1196452829.6047.387.camel@glup.physics.ucf.edu>
	<47506E9A.4090705@gmail.com> <475191D2.3050305@aero.iitb.ac.in>
	<20071202164944.GF15280@clipper.ens.fr>
	<827183970712020956j4652119m658d2901abcac5ee@mail.gmail.com>
Message-ID: <20071202180748.GL15280@clipper.ens.fr>

On Sun, Dec 02, 2007 at 12:56:47PM -0500, william ratcliff wrote:
> Can Mayavi2 be called from within a wxpython application for 3D plotting?

Yes. This has been one of the great additions of this summer. Have a look
at

https://svn.enthought.com/enthought/browser/branches/enthought.mayavi_2.0.3a1/examples/standalone.py

for a demo of how mayavi is integrated in a traits application. As traits
relies on wxPython (although there is now a Qt front end), it is trivial
to plug this into a wxPython application (ask if you need help).

> At one point I did this with tvtk (using pyface, etc), but not with
> mayavi. How different is the mlab within tvtk from that in mayavi2?

I more and more think of the mayavi engine as an abstraction over tvtk
that allows one to think in terms of data and visualisation rather than
in terms of VTK objects. The mayavi2 application is a GUI around the
mayavi2 engine.
The example above exposes the engine without the GUI, allowing you to use
its nice API and features in your own app. Mlab is a simplified API for
Mayavi that tries to mimic pylab/matlab. The calls are really simple and
easy to learn. However, due to a design error on my side, mlab cannot be
used in a standalone way, i.e. integrated in an existing wxPython app.
This will be changed, just give me a month or two (not that there is a
huge amount of work, but I am very busy).

It is much easier to add features to mayavi's mlab than to tvtk's.
Moreover, with mayavi's mlab you can pop up the UI to edit the pipeline
in your own app, and you also have the engine API that you can access,
which is great for updating/modifying a visualization.

I think the big difference between the two is the presence of the mayavi2
engine, which gives a nice abstraction of the visualization. As a result,
mayavi's mlab already has more features than tvtk's, although some
features of tvtk's mlab are missing (mainly the ability to run
standalone).

HTH,

Gaël

From odonnems at yahoo.com  Mon Dec 3 00:15:13 2007
From: odonnems at yahoo.com (Michael ODonnell)
Date: Sun, 2 Dec 2007 21:15:13 -0800 (PST)
Subject: [SciPy-user] Weave inline errors further insight
Message-ID: <808613.94047.qm@web58004.mail.re3.yahoo.com>

I have posted a couple of questions regarding the use of weave.inline.
Because I have not received any feedback, I thought I might provide some
more clues to the problem that I am having.

As a refresher, I have the following versions installed:

numpy-1.0.4.win32-py2.4.exe
scipy-0.6.0.win32-py2.4.exe
python-2.4.4.msi
wxPython2.8-win32-unicode-2.8.6.1-py24.exe
pywin32-210.win32-py2.4.exe

I have also installed the SDK toolkit and other related modifications
(this appears to work). I also installed MinGW and this appears to work.
MS XP SP2.

First problem: There appears to be a problem with
numpy/distutils/exec_command.py.
I had to comment out lines 67-70 (thereabouts) in order to properly
assign the python executable. Once I do this I get the next error.

Second problem: Will not complete the weave.inline. Example of code here:

a = 1
weave.inline('printf("%d\\n",a);', ['a'], verbose=2,
             type_converters=converters.blitz)

The C++ source file is created; however, the object file is not created.
This would make me think there is a problem with the compiler, but I have
tried both the SDK and MinGW and they both cause the same error.

Does anyone have thoughts on why this might be happening? I get the same
problem if I install Enthought as well. I will cross-post on numpy too.

Thank you for your assistance,
michael (new user)

From fperez.net at gmail.com  Mon Dec 3 00:54:14 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 2 Dec 2007 22:54:14 -0700
Subject: [SciPy-user] Weave inline errors further insight
In-Reply-To: <808613.94047.qm@web58004.mail.re3.yahoo.com>
References: <808613.94047.qm@web58004.mail.re3.yahoo.com>
Message-ID:

On Dec 2, 2007 10:15 PM, Michael ODonnell wrote:
> I have posted a couple of questions regarding the use of weave.inline.
> Because I have not received any feedback, I thought I might provide
> some more clues to the problem that I am having.
>
> As a refresher, I have the following versions installed:
> numpy-1.0.4.win32-py2.4.exe
> scipy-0.6.0.win32-py2.4.exe
> python-2.4.4.msi
> wxPython2.8-win32-unicode-2.8.6.1-py24.exe
> pywin32-210.win32-py2.4.exe
> I have also installed the SDK toolkit and other related modifications
> (this appears to work). I also installed MinGW and this appears to
> work. MS XP SP2.
>
> First problem: There appears to be a problem with
> numpy/distutils/exec_command.py.
> I had to comment out lines 67-70 (thereabouts) in order to properly
> assign the python executable. Once I do this I get the next error.
>
> Second problem: Will not complete the weave.inline. Example of code here:
> a = 1
> weave.inline('printf("%d\\n",a);', ['a'], verbose=2,
>              type_converters=converters.blitz)
>
> The C++ source file is created; however, the object file is not
> created. This would make me think there is a problem with the compiler,
> but I have tried both the SDK and MinGW and they both cause the same
> error.
>
> Does anyone have thoughts on why this might be happening? I get the
> same problem if I install Enthought as well. I will cross-post on
> numpy too.
>
> Thank you for your assistance,
> michael (new user)

From fperez.net at gmail.com  Mon Dec 3 01:06:00 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 2 Dec 2007 23:06:00 -0700
Subject: [SciPy-user] Weave inline errors further insight
In-Reply-To: <808613.94047.qm@web58004.mail.re3.yahoo.com>
References: <808613.94047.qm@web58004.mail.re3.yahoo.com>
Message-ID:

[ sorry for the previous empty reply. Slow javascript in gmail swallowed
two clicks... ]

On Dec 2, 2007 10:15 PM, Michael ODonnell wrote:
> I have posted a couple of questions regarding the use of weave.inline.
> Because I have not received any feedback, I thought I might provide
> some more clues to the problem that I am having.

Could I please pester you to open a ticket on the weave problems you may
be having, with *actual example code* that fails, including the explicit
(copy/paste) error messages from the code exactly as you ran it? This
will allow us to specifically address those problems, if we can reproduce
them.
Thanks,

f

From fperez.net at gmail.com  Mon Dec 3 01:09:32 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 2 Dec 2007 23:09:32 -0700
Subject: [SciPy-user] weave and compiler install help
In-Reply-To: <271728.69580.qm@web58012.mail.re3.yahoo.com>
References: <271728.69580.qm@web58012.mail.re3.yahoo.com>
Message-ID:

On Nov 19, 2007 9:06 AM, Michael ODonnell wrote:
> I am trying to compile some inline C++ code inside python using weave.
> I always get a similar problem where the compiled file cannot be found
> (see below). I am not sure if the problem is with the compiler or
> something else. I am a new user of scipy and a novice with python, so I
> would appreciate any direction someone can give me, because I have not
> been able to figure out a workaround.
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> When I try to test the following script or any other script I get the
> following message:
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> def prod(m, v):
>     # C++ version
>     nrows, ncolumns = m.shape
>
>     res = numpy.zeros((nrows, ncolumns), float)
>     code = r"""
>     for (int i=0; i<nrows; i++)
>     {
>         for (int j=0; j<ncolumns; j++)
>         {
>             res(i) += m(i,j)*v(j);
>         }
>     }
>     """
>
>     #err = weave.inline(code, ['nrows', 'ncolumns', 'res', 'm', 'v'],
>     #                   type_converters=converters.blitz,
>     #                   compiler='mingw32', verbose=2)
>     err = weave.inline(code, ['nrows', 'ncolumns', 'res', 'm', 'v'],
>                        verbose=2)

There may be windows-specific problems (I'm on Linux), but your code as
above simply can't work, because you're assuming blitz behavior (parens
on m(i,j) calls) with the blitz call commented out.
On Linux, this version:

##########################################################
import numpy
from scipy import weave
from scipy.weave import converters

def prod(m, v):
    # C++ version
    nrows, ncolumns = m.shape
    assert v.ndim == 1 and ncolumns == v.shape[0], "Shape mismatch in prod"
    res = numpy.zeros(nrows, float)
    code = r"""
    for (int i=0; i<nrows; i++)

References: <808613.94047.qm@web58004.mail.re3.yahoo.com>
Message-ID: <20071203063527.GD10287@clipper.ens.fr>

On Sun, Dec 02, 2007 at 09:15:13PM -0800, Michael ODonnell wrote:
> Second problem: Will not complete the weave.inline. Example of code here:
> a = 1
> weave.inline('printf("%d\\n",a);', ['a'], verbose=2,
>              type_converters=converters.blitz)

> The C++ source file is created; however, the object file is not
> created. This would make me think there is a problem with the compiler,
> but I have tried both the SDK and MinGW and they both cause the same
> error.

Hum. Works here (which doesn't help you, sorry). That might be a
platform-specific problem, in which case I am at a loss.
Gaël From discerptor at gmail.com Mon Dec 3 05:03:16 2007 From: discerptor at gmail.com (Joshua Lippai) Date: Mon, 3 Dec 2007 02:03:16 -0800 Subject: [SciPy-user] Error in scipy.test(10) even after compiling successfully- Mac OS X 10.4.11 Apple GCC 4.0.1 In-Reply-To: <474F6E3B.1020209@ar.media.kyoto-u.ac.jp> References: <9911419a0711291017r22d48580q2c1652855d253c86@mail.gmail.com> <474F6E3B.1020209@ar.media.kyoto-u.ac.jp> Message-ID: <9911419a0712030203r7d631be8qdd67e2f3acef5038@mail.gmail.com> On Nov 29, 2007 5:58 PM, David Cournapeau wrote: > > Joshua Lippai wrote: > > Hello all, > > > > After building scipy, I ran the scipy tests and got this error in check_integer: > > > > ====================================================================== > > ERROR: check_integer (scipy.io.tests.test_array_import.TestReadArray) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", > > line 55, in check_integer > > from scipy import stats > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", > > line 7, in <module> > > from stats import * > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", > > line 191, in <module> > > import scipy.special as special > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", > > line 8, in <module> > > from basic import * > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", > > line 8, in <module> > > from _cephes import * > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > > 2): Symbol not found: __gfortran_pow_r8_i4 > > Referenced from: > >
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > > Expected in: dynamic lookup > > > > It would appear it's not finding a symbol for gfrotran in one of the > > files, and this in turn makes it impossible to import _cephes. How > > would I go about remedying this? > > > This symbol is in the gfortran runtime. We need the complete build log > to be sure about the exact problem, but I expect some mismatch between > fortran compilers (do you have several fortran compilers on your machine ?). > > cheers, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > That sounds about right... looking at the build dialog, certain commands seem to want to use gnu95 while the others want gfortran. unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options commands have different --fcompiler options: ['gfortran', 'gnu95'], using first in list as default I tried setting the --fcompiler option for all of the commands mentioned there to gfortran (at least it changed the default from gnu95 to gfortran like I wanted), but it still gives that message. Anything else I can do to make this confusion for the compiler go away? Josh From dmitrey.kroshko at scipy.org Mon Dec 3 05:14:28 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 03 Dec 2007 12:14:28 +0200 Subject: [SciPy-user] [optimization] BSD-licensed constrained solver for NLP/NSP Message-ID: <4753D704.1090702@scipy.org> Hi all, if anyone is interested - here is one more constrained NLP/NSP solver (OpenOpt ralg). 1st derivatives df, dc, dh can be handled as well. Currently it has no difference of providing some constraints (c[i]<=0, h[j]=0) or single: max_i,j_(c[i], abs(h[j]))=0. But in future I intend to add a little bit better constraints handling in case of several funcs. 
See the NLP example published some days ago with ralg: http://openopt.blogspot.com/2007/12/constraints-handling-for-nlpnsp-solver.html Regards, D. From prabhu at aero.iitb.ac.in Mon Dec 3 06:48:28 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Mon, 03 Dec 2007 17:18:28 +0530 Subject: [SciPy-user] surface plotting In-Reply-To: <20071202180748.GL15280@clipper.ens.fr> References: <1196452829.6047.387.camel@glup.physics.ucf.edu> <47506E9A.4090705@gmail.com> <475191D2.3050305@aero.iitb.ac.in> <20071202164944.GF15280@clipper.ens.fr> <827183970712020956j4652119m658d2901abcac5ee@mail.gmail.com> <20071202180748.GL15280@clipper.ens.fr> Message-ID: <4753ED0C.6090406@aero.iitb.ac.in> Gael Varoquaux wrote: > On Sun, Dec 02, 2007 at 12:56:47PM -0500, william ratcliff wrote: >> Can Mayavi2 be called from within a wxpython application for 3D plotting? > > Yes. This has been one of the great additions of this summer. Have a look > at > https://svn.enthought.com/enthought/browser/branches/enthought.mayavi_2.0.3a1/examples/standalone.py > for a demo of how mayavi is integrated in a traits application. As traits > relies on WxPython (although there is now a Qt front end), it is trivial to > plug this into a WxPython application (ask if you need help). I wouldn't say it is "trivial" but certainly relatively easy to do. The best place to ask would be on the enthought-dev list. >> At one point I did this with tvtk (using pyface, etc), but not with >> mayavi. How different is the mlab within tvtk from that in mayavi2? [...] > allowing you to use its nice API and features in your own app. Mlab is a > simplified API for Mayavi that tries to mimic pylab/matlab. The calls are > really simple and easy to learn. However, due to a design error on my > side, mlab cannot be used in a standalone way, i.e. to integrate in an > existing WxPython app. This will be changed, just give me a month or two > (not that there is a huge amount of work, but that I am very busy).
Right, I don't think it should be too hard to port mlab to use the raw engine. There are a few design issues Gael and I will need to sort out about how best to do this though. As Gael said, both of us are currently tied up it may take a month or two for this to actually happen. cheers, prabhu From lbolla at gmail.com Mon Dec 3 07:10:50 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 3 Dec 2007 13:10:50 +0100 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: <80c99e790712011146l1db7827dje3af20c5f01b635f@mail.gmail.com> References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> <80c99e790712011146l1db7827dje3af20c5f01b635f@mail.gmail.com> Message-ID: <80c99e790712030410h3375343ejf41bc33375dc8506@mail.gmail.com> ops. I was wrong. it still doesn't work... sorry for the noise. L. On 12/1/07, lorenzo bolla wrote: > > I had the same problem in Linux. > But in Linux there are no .pyd files. > > I noticed that there were two shared libraries: sparsetools.so and > _sparsetools.so. > The first one was much smaller than the second, so I moved it to a backup > directory. > > Then, everything worked fine. > > hth, > L. > > > On Nov 18, 2007 11:50 AM, youngsu park wrote: > > > Hi. Im Youngsu-park ,in POSTECH in south korea. > > > > I had same problem on scipy importing like: > > > > from scipy import * > > > > . > > > > So I looked at the scipy/sparse folder, > > > > "C:\Python25\Lib\site-packages\scipy\sparse ". > > > > To find the reason of error, I open the > > > > "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", file. 
> > > > From line 21 to 26, there was import statement for "sparsetools" > > > > > > > > from scipy.sparse.sparsetools import cscmux, csrmux, \ > > > > cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ > > > > densetocsr, csrtodense, \ > > > > csrmucsr, cscmucsc, \ > > > > csr_plus_csr, csc_plus_csc, csr_minus_csr, csc_minus_csc, \ > > > > csr_elmul_csr, csc_elmul_csc, csr_eldiv_csr, csc_eldiv_csc > > > > > > > > I change the working directory to "C:\Python25\Lib\site-packages\scipy\sparse > > ", > > > > import sparestools and use dir function to see the members of > > sparsetools > > > > > > > > In [16]: import sparsetoo > > > > > > > > In [17]: dir(sparsetools) > > > > Out[17]: > > > > ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', > > 'ccscadd', 'ccscextract', > > > > 'ccscgetel', 'ccscmucsc', 'ccscmucsr', 'ccscmul', 'ccscmux', > > 'ccscsetel', 'ccsctocoo', > > > > 'ccsctofull', 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', > > 'ctransp', 'dcootocsc', > > > > 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', 'dcscmucsr', > > 'dcscmul', 'dcscmux', > > > > 'dcscsetel', 'dcsctocoo', 'dcsctofull', 'dcsrmucsc', 'dcsrmux', > > 'ddiatocsc', 'dfulltocsc', > > > > 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', > > 'scscmucsc', 'scscmucsr', > > > > 'scscmul', 'scscmux', 'scscsetel', 'scsctocoo', 'scsctofull', > > 'scsrmucsc', 'scsrmux', 'sdiatocsc', > > > > 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', > > 'zcscgetel', 'zcscmucsc', 'zcscmucsr', > > > > 'zcscmul', 'zcscmux', 'zcscsetel', 'zcsctocoo', 'zcsctofull', > > 'zcsrmucsc', 'zcsrmux', 'zdiatocsc', > > > > 'zfulltocsc', 'ztransp'] > > > > > > > > But it is not the members of sparsetools.py > > > > I think it is the members of sparsetools.pyd. > > > > > > > > Move sparsetools.pyd to some backup directory. > > > > Then it will works well. 
> > > > > > > > In [21]: from scipy import * > > > > > > > > In [22]: > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Dec 3 07:31:50 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 3 Dec 2007 13:31:50 +0100 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> Message-ID: How did you install scipy ? Which Python version ? ... ? 2007/11/18, youngsu park : > > Hi. Im Youngsu-park ,in POSTECH in south korea. > > I had same problem on scipy importing like: > > from scipy import * > > . > > So I looked at the scipy/sparse folder, > > "C:\Python25\Lib\site-packages\scipy\sparse ". > > To find the reason of error, I open the > > "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", file. 
> > From line 21 to 26, there was import statement for "sparsetools" > > > > from scipy.sparse.sparsetools import cscmux, csrmux, \ > > cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ > > densetocsr, csrtodense, \ > > csrmucsr, cscmucsc, \ > > csr_plus_csr, csc_plus_csc, csr_minus_csr, csc_minus_csc, \ > > csr_elmul_csr, csc_elmul_csc, csr_eldiv_csr, csc_eldiv_csc > > > > I change the working directory to "C:\Python25\Lib\site-packages\scipy\sparse > ", > > import sparestools and use dir function to see the members of sparsetools > > > > In [16]: import sparsetoo > > > > In [17]: dir(sparsetools) > > Out[17]: > > ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', > 'ccscadd', 'ccscextract', > > 'ccscgetel', 'ccscmucsc', 'ccscmucsr', 'ccscmul', 'ccscmux', 'ccscsetel', > 'ccsctocoo', > > 'ccsctofull', 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', > 'ctransp', 'dcootocsc', > > 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', 'dcscmucsr', > 'dcscmul', 'dcscmux', > > 'dcscsetel', 'dcsctocoo', 'dcsctofull', 'dcsrmucsc', 'dcsrmux', > 'ddiatocsc', 'dfulltocsc', > > 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', > 'scscmucsc', 'scscmucsr', > > 'scscmul', 'scscmux', 'scscsetel', 'scsctocoo', 'scsctofull', > 'scsrmucsc', 'scsrmux', 'sdiatocsc', > > 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', > 'zcscgetel', 'zcscmucsc', 'zcscmucsr', > > 'zcscmul', 'zcscmux', 'zcscsetel', 'zcsctocoo', 'zcsctofull', > 'zcsrmucsc', 'zcsrmux', 'zdiatocsc', > > 'zfulltocsc', 'ztransp'] > > > > But it is not the members of sparsetools.py > > I think it is the members of sparsetools.pyd. > > > > Move sparsetools.pyd to some backup directory. > > Then it will works well. 
> > > > In [21]: from scipy import * > > > > In [22]: > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Mon Dec 3 07:41:27 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 3 Dec 2007 13:41:27 +0100 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> Message-ID: <80c99e790712030441n50e96096pb7e764303f592178@mail.gmail.com> python 2.4 last numpy (from svn) last scipy (from svn) the strange thing is: it works on cygwin on a windows box, but not on ubuntu! On 12/3/07, Matthieu Brucher wrote: > > How did you install scipy ? Which Python version ? ... ? > > 2007/11/18, youngsu park : > > > > Hi. Im Youngsu-park ,in POSTECH in south korea. > > > > I had same problem on scipy importing like: > > > > from scipy import * > > > > . > > > > So I looked at the scipy/sparse folder, > > > > "C:\Python25\Lib\site-packages\scipy\sparse ". > > > > To find the reason of error, I open the > > > > "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", file. 
> > > > From line 21 to 26, there was import statement for "sparsetools" > > > > > > > > from scipy.sparse.sparsetools import cscmux, csrmux, \ > > > > cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ > > > > densetocsr, csrtodense, \ > > > > csrmucsr, cscmucsc, \ > > > > csr_plus_csr, csc_plus_csc, csr_minus_csr, csc_minus_csc, \ > > > > csr_elmul_csr, csc_elmul_csc, csr_eldiv_csr, csc_eldiv_csc > > > > > > > > I change the working directory to "C:\Python25\Lib\site-packages\scipy\sparse > > ", > > > > import sparestools and use dir function to see the members of > > sparsetools > > > > > > > > In [16]: import sparsetoo > > > > > > > > In [17]: dir(sparsetools) > > > > Out[17]: > > > > ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', > > 'ccscadd', 'ccscextract', > > > > 'ccscgetel', 'ccscmucsc', 'ccscmucsr', 'ccscmul', 'ccscmux', > > 'ccscsetel', 'ccsctocoo', > > > > 'ccsctofull', 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', > > 'ctransp', 'dcootocsc', > > > > 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', 'dcscmucsr', > > 'dcscmul', 'dcscmux', > > > > 'dcscsetel', 'dcsctocoo', 'dcsctofull', 'dcsrmucsc', 'dcsrmux', > > 'ddiatocsc', 'dfulltocsc', > > > > 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', > > 'scscmucsc', 'scscmucsr', > > > > 'scscmul', 'scscmux', 'scscsetel', 'scsctocoo', 'scsctofull', > > 'scsrmucsc', 'scsrmux', 'sdiatocsc', > > > > 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', > > 'zcscgetel', 'zcscmucsc', 'zcscmucsr', > > > > 'zcscmul', 'zcscmux', 'zcscsetel', 'zcsctocoo', 'zcsctofull', > > 'zcsrmucsc', 'zcsrmux', 'zdiatocsc', > > > > 'zfulltocsc', 'ztransp'] > > > > > > > > But it is not the members of sparsetools.py > > > > I think it is the members of sparsetools.pyd. > > > > > > > > Move sparsetools.pyd to some backup directory. > > > > Then it will works well. 
> > > > > > > > In [21]: from scipy import * > > > > > > > > In [22]: > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Dec 3 07:52:42 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 03 Dec 2007 13:52:42 +0100 Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux In-Reply-To: <80c99e790712030441n50e96096pb7e764303f592178@mail.gmail.com> References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> <80c99e790712030441n50e96096pb7e764303f592178@mail.gmail.com> Message-ID: <4753FC1A.9080900@ntc.zcu.cz> The newly added README in scipy/sparse/sparsetools/README.txt says """ Before regenerating sparsetools_wrap.cxx with SWIG, ensure that you are using SWIG Version 1.3.33 (released on Nov. 23 2007) or newer. You can check your version with: 'swig -version' The wrappers are generated with: swig -c++ -python multigridtools.i """ maybe your problems are related to this? btw. Out[1]: on Gentoo linux, updated just now. r. lorenzo bolla wrote: > python 2.4 > last numpy (from svn) > last scipy (from svn) > > the strange thing is: it works on cygwin on a windows box, but not on > ubuntu! > > > On 12/3/07, Matthieu Brucher wrote: >> How did you install scipy ? Which Python version ? ... ? >> >> 2007/11/18, youngsu park : >>> Hi. Im Youngsu-park ,in POSTECH in south korea. 
>>> >>> I had same problem on scipy importing like: >>> >>> from scipy import * >>> >>> . >>> >>> So I looked at the scipy/sparse folder, >>> >>> "C:\Python25\Lib\site-packages\scipy\sparse ". >>> >>> To find the reason of error, I open the >>> >>> "C:\Python25\Lib\site-packages\scipy\sparse\sparse.py", file. >>> >>> From line 21 to 26, there was import statement for "sparsetools" >>> >>> >>> >>> from scipy.sparse.sparsetools import cscmux, csrmux, \ >>> >>> cootocsr, csrtocoo, cootocsc, csctocoo, csctocsr, csrtocsc, \ >>> >>> densetocsr, csrtodense, \ >>> >>> csrmucsr, cscmucsc, \ >>> >>> csr_plus_csr, csc_plus_csc, csr_minus_csr, csc_minus_csc, \ >>> >>> csr_elmul_csr, csc_elmul_csc, csr_eldiv_csr, csc_eldiv_csc >>> >>> >>> >>> I change the working directory to "C:\Python25\Lib\site-packages\scipy\sparse >>> ", >>> >>> import sparestools and use dir function to see the members of >>> sparsetools >>> >>> >>> >>> In [16]: import sparsetoo >>> >>> >>> >>> In [17]: dir(sparsetools) >>> >>> Out[17]: >>> >>> ['__doc__', '__file__', '__name__', '__version__', 'ccootocsc', >>> 'ccscadd', 'ccscextract', >>> >>> 'ccscgetel', 'ccscmucsc', 'ccscmucsr', 'ccscmul', 'ccscmux', >>> 'ccscsetel', 'ccsctocoo', >>> >>> 'ccsctofull', 'ccsrmucsc', 'ccsrmux', 'cdiatocsc', 'cfulltocsc', >>> 'ctransp', 'dcootocsc', >>> >>> 'dcscadd', 'dcscextract', 'dcscgetel', 'dcscmucsc', 'dcscmucsr', >>> 'dcscmul', 'dcscmux', >>> >>> 'dcscsetel', 'dcsctocoo', 'dcsctofull', 'dcsrmucsc', 'dcsrmux', >>> 'ddiatocsc', 'dfulltocsc', >>> >>> 'dtransp', 'scootocsc', 'scscadd', 'scscextract', 'scscgetel', >>> 'scscmucsc', 'scscmucsr', >>> >>> 'scscmul', 'scscmux', 'scscsetel', 'scsctocoo', 'scsctofull', >>> 'scsrmucsc', 'scsrmux', 'sdiatocsc', >>> >>> 'sfulltocsc', 'stransp', 'zcootocsc', 'zcscadd', 'zcscextract', >>> 'zcscgetel', 'zcscmucsc', 'zcscmucsr', >>> >>> 'zcscmul', 'zcscmux', 'zcscsetel', 'zcsctocoo', 'zcsctofull', >>> 'zcsrmucsc', 'zcsrmux', 'zdiatocsc', >>> >>> 'zfulltocsc', 'ztransp'] >>> >>> >>> 
>>> But it is not the members of sparsetools.py >>> >>> I think it is the members of sparsetools.pyd. >>> >>> >>> >>> Move sparsetools.pyd to some backup directory. >>> >>> Then it will works well. >>> >>> >>> >>> In [21]: from scipy import * >>> >>> >>> >>> In [22]: >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> -- >> French PhD student >> Website : http://miles.developpez.com/ >> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 >> LinkedIn : http://www.linkedin.com/in/matthieubrucher >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From stefan at sun.ac.za Mon Dec 3 10:28:24 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 3 Dec 2007 17:28:24 +0200 Subject: [SciPy-user] Algorithm of Viterbi In-Reply-To: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com> References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com> Message-ID: <20071203152823.GK13844@mentat.za.net> Hi Dorian On Sat, Dec 01, 2007 at 09:37:42PM +0100, Dorian wrote: > Hi all, > > Is there any tools about the algorithm of Viterbi using scypi ?. > Any link on a clear tutorial will be very helpful. You can find fast dynamic programming / shortest path search code here: http://mentat.za.net/source/shortest_path.tar.bz2 Compile using python setup.py build_ext -i and run the tests to make sure it works. 
Regards Stéfan From arthur00 at gmail.com Mon Dec 3 11:02:26 2007 From: arthur00 at gmail.com (Arthur Valadares) Date: Mon, 3 Dec 2007 13:02:26 -0300 Subject: [SciPy-user] Using optimize.fmin_cobyla Message-ID: Hi, I have the following problem I'm trying to solve: I have a calculation where, given some parameters, I calculate a vector 'A' of values and based on this vector, I calculate a final vector 'B' which is the return of my calculation. What I want to do is maximize one specific value on 'A' while keeping a constraint (let C be a constant, the constraint is: C - A[x] = B[x]). I'm trying to use fmin_cobyla, but I'm a little lost on how to use it to solve this problem. Does anyone have any example on using cobyla, or specifically what I should do? I have tried using it for quite a while, but I'm just not understanding it right. On Excel you can just put on Solver to set target A[x], by changing A[x] with constraint C - A[x] = B[x]. Any tips? This is my first post, so I wanted to thank the scipy community for all the great work. -- Arthur Valadares -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon Dec 3 11:43:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 3 Dec 2007 11:43:56 -0500 Subject: [SciPy-user] [optimization] BSD-licensed constrained solver for NLP/NSP In-Reply-To: <4753D704.1090702@scipy.org> References: <4753D704.1090702@scipy.org> Message-ID: On Mon, 03 Dec 2007, dmitrey apparently wrote: > http://openopt.blogspot.com/2007/12/constraints-handling-for-nlpnsp-solver.html It is great that you are succeeding to add constrained problems to ralg! Cheers, Alan Isaac From odonnems at yahoo.com Mon Dec 3 14:06:34 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Mon, 3 Dec 2007 11:06:34 -0800 (PST) Subject: [SciPy-user] weave and compiler install help Message-ID: <916245.2008.qm@web58004.mail.re3.yahoo.com> Thank you for your response Fernando.
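(Editor's note on Arthur's fmin_cobyla question above: COBYLA handles only inequality constraints, each given as a function that must be >= 0 at a feasible point, so an equality such as C - A[x] = B[x] is typically encoded as a pair of opposite inequalities, and maximization as minimizing the negative of the target. A minimal sketch on a stand-in problem, maximize x[0] subject to x[0] + x[1] = 1 and x[0] <= 0.8, could look like this; the objective and constraints are illustrative, not Arthur's actual model.)

```python
from scipy.optimize import fmin_cobyla

def objective(x):
    # COBYLA minimizes, so maximize x[0] by minimizing its negative
    return -x[0]

cons = [
    lambda x: x[0] + x[1] - 1.0,  # x0 + x1 >= 1  (this pair of opposite
    lambda x: 1.0 - x[0] - x[1],  # x0 + x1 <= 1   inequalities encodes x0 + x1 == 1)
    lambda x: 0.8 - x[0],         # x0 <= 0.8
]

xopt = fmin_cobyla(objective, [0.5, 0.5], cons, rhoend=1e-8)
print(xopt)  # close to [0.8, 0.2]
```

Equalities expressed this way are only satisfied to the solver's tolerance, which is usually fine for a Solver-style "set target by changing cell" problem.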
I am a new python user so this is likely out of my league. I am not sure why the code below is commented out, but I am actually using the blitz. Here is a very simple test that I cannot get to work: a = 1 weave.inline('printf("%d\\n",a);',['a'], verbose=2, type_converters=converters.blitz) I just added a ticket, as you suggested, under scipy.weave component which will hopefully shed some light. I have tried numerous compilers and versions of scipy/python/enthought. For some reason they all seem to give a similar error. Ticket #550 I have attached a script denoting all the tests that I have done to get weave.inline to work (this is a .zip file of a py script). The problem for me seems to be that the compiler is creating the source file but does not create the object file. Therefore, the script written by exec_command.py crashes when trying to read the output from this script. I am not sure if the problem lies with the content of the script file or with the compiler. I ran the weave.test() and receive no errors, but I am not sure if it is testing what it really needs to be testing. Anyhow, thank you for the time and effort and maybe I can figure this out with some additional effort. Michael O'Donnell ----- Original Message ---- From: Fernando Perez To: SciPy Users List Sent: Sunday, December 2, 2007 11:09:32 PM Subject: Re: [SciPy-user] weave and compiler install help On Nov 19, 2007 9:06 AM, Michael ODonnell wrote: > > I am trying to compile some inline c++ code inside python using weave. I > always get a similar problem where the compiled file cannot be found (see > below). I am not sure if the problem is with the compiler or something else. > I am a new user of scipy and a novice with python so I would appreciate any > direction someone can give me because I have not been able to figure out a > work around. 
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > When I try to test the following script or any other script I get the > following message: > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > def prod(m, v): > #C++ version > nrows, ncolumns = m.shape > > res = numpy.zeros((nrows, ncolumns), float) > code = r""" > for (int i=0; i { > for (int j=0; j { > res(i) += m(i,j)*v(j); > } > } > """ > > #err = weave.inline(code,['nrows', 'ncolumns', 'res', 'm', 'v'], > type_converters=converters.blitz, compiler='mingw32', verbose=2) > err = weave.inline(code,['nrows', 'ncolumns', 'res', 'm', 'v'], > verbose=2) There may be windows-specific problems (I'm on linux), but your code as above simply can't work because you're assuming blitz behavior (parens on m(i,j) calls) with the blitz call commented out. On Linux, this version: ########################################################## import numpy from scipy import weave from scipy.weave import converters def prod(m, v): #C++ version nrows, ncolumns = m.shape assert v.ndim==1 and ncolumns==v.shape[0],"Shape mismatch in prod" res = numpy.zeros(nrows, float) code = r""" for (int i=0; i -------------- next part -------------- A non-text attachment was scrubbed... Name: test_weave.zip Type: application/zip Size: 3694 bytes Desc: not available URL: From wizzard028wise at gmail.com Mon Dec 3 14:26:52 2007 From: wizzard028wise at gmail.com (Dorian) Date: Mon, 3 Dec 2007 20:26:52 +0100 Subject: [SciPy-user] Algorithm of Viterbi In-Reply-To: <20071203152823.GK13844@mentat.za.net> References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com> <20071203152823.GK13844@mentat.za.net> Message-ID: <674a602a0712031126x14f00f61icbc2fb89d110fe63@mail.gmail.com> Hi Stefan, I downloaded the files but I don't understand how to compile and for which example this algorithm works. Could you please give me a clear explanation step by step how it works ? I'm using python on Windows . 
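(Editor's note: since Dorian's question is really about the Viterbi algorithm itself, and the linked shortest-path package needs a compiler, here is a small pure-Python decoder that runs anywhere, Windows included. The two-state weather HMM at the bottom is a standard textbook toy; its numbers are illustrative only.)

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for `obs` under a discrete HMM."""
    # V[t][s]: probability of the best state path ending in s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace the winning path backwards from the best final state
    prob, last = max((V[-1][s], s) for s in states)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return prob, path

# Toy two-state weather HMM (illustrative numbers only)
states = ('Rainy', 'Sunny')
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
           'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

prob, path = viterbi(('walk', 'shop', 'clean'), states, start_p, trans_p, emit_p)
print(path, prob)  # ['Sunny', 'Rainy', 'Rainy'] with probability 0.01344
```

For long observation sequences you would sum log-probabilities instead of multiplying probabilities to avoid underflow, but the recursion has the same shape.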
Thanks in advance Best regards, Dorian PS: I was at Stellenbosch last year, now I'm in Paris ... On 03/12/2007, Stefan van der Walt wrote: > > Hi Dorian > > On Sat, Dec 01, 2007 at 09:37:42PM +0100, Dorian wrote: > > Hi all, > > > > Are there any tools for the algorithm of Viterbi in scipy? > > Any link on a clear tutorial will be very helpful. > > You can find fast dynamic programming / shortest path search code > here: > > http://mentat.za.net/source/shortest_path.tar.bz2 > > Compile using > > python setup.py build_ext -i > > and run the tests to make sure it works. > > Regards > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzo.isella at gmail.com Mon Dec 3 15:43:21 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 03 Dec 2007 21:43:21 +0100 Subject: [SciPy-user] surface plotting In-Reply-To: References: Message-ID: <47546A69.4010700@gmail.com> Hello, > Sure. Tell us a bit more about your usecase. Do you want to produce > figures for print/publication, do you want to interact with your data, or > do you want simply to do some 3D plotting in a program of yours. You can > do all this with Mayavi, but the solutions will depend a bit on your usecase. > > Well, since you ask, I often run thermo-hydraulic simulations. I usually end up having a set of numerical values to plot on a non-trivial domain rather than an analytical function. This can sometimes be done using a contour plot in matplotlib, but I am also looking for alternatives. For instance, let us say that I want to plot some scalar (say a component of the velocity) on the cross section of a tube (I simply generate random numbers here; in reality I would read it from a file). With pylab, this is what I would do:

#! /usr/bin/env python
from scipy import *
import pylab
import numpy

nt=20
nr=20
r=linspace(0.,10.,nr)
theta=linspace(0.,2.*pi,nt)
#print "theta is ", theta
sin_t=sin(theta)
cos_t=cos(theta)
rsin_t=r[newaxis,:]*sin_t[:,newaxis]
rcos_t=r[newaxis,:]*cos_t[:,newaxis]
rsin_t=ravel(rsin_t)
rcos_t=ravel(rcos_t)
rsin_t.shape=(nt,nr)
rcos_t.shape=(nt,nr)
vel_section=numpy.random.normal(0.,5.,(nr*nt))
vel_section=reshape(vel_section,(nt,nr))
print 'OK up to here'
#pylab.colorbar()
#pylab.clf()
pylab.figure()
X = rsin_t.transpose()
Y = rcos_t.transpose()
Z = vel_section.transpose()
velmin = vel_section.min()
velmax = vel_section.max()
print velmin, velmax
levels = arange(velmin, velmax+0.01, 0.1)
pylab.contourf(X, Y, Z, levels, cmap=pylab.cm.jet)
pylab.colorbar()
#pylab.show()
pylab.savefig("velocity_on_section_DNS")
#pylab.hold(False)

Can I do something similar with mayavi? And how? > /usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25: > > DeprecationWarning: ScipyTest is now called NumpyTest; please update > > your code >> My main interest is point (1), i.e. to be sure that my python >> installation is still OK (and know what to do in case it is not). >> > > I think it is OK. What happened is that scipy got updated, and these are > just harmless, albeit annoying, warnings. > > Here I am a bit puzzled. True, SciPy 0.6.x has recently been released, but on my box (Debian testing) I am still running 0.5.2 and I do not recall it having been updated recently...or have I missed it? Many thanks and good luck with the defense of your PhD thesis. Lorenzo From odonnems at yahoo.com Mon Dec 3 16:30:54 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Mon, 3 Dec 2007 13:30:54 -0800 (PST) Subject: [SciPy-user] weave and compiler install help Message-ID: <833211.36710.qm@web58002.mail.re3.yahoo.com> This may help give someone greater insight into my problem.
If I run the command created by python: cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC /Tpc:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860521.cpp /Foc:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860521.obj /Zm1000 at the cmd prompt I get the following error: c:\Python24\include\pyconfig.h(30) : fatal error C1083: Cannot open include file: 'io.h': No such file or directory This command was generated by weave.inline, but I copied it into the cmd window. Does anyone have any ideas what I should/can do? Thank you, michael ----- Original Message ---- From: Fernando Perez To: SciPy Users List Sent: Sunday, December 2, 2007 11:09:32 PM Subject: Re: [SciPy-user] weave and compiler install help On Nov 19, 2007 9:06 AM, Michael ODonnell wrote: > > I am trying to compile some inline c++ code inside python using weave. I > always get a similar problem where the compiled file cannot be found (see > below). I am not sure if the problem is with the compiler or something else. > I am a new user of scipy and a novice with python so I would appreciate any > direction someone can give me because I have not been able to figure out a > work around. 
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> When I try to test the following script or any other script, I get the
> following message:
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> def prod(m, v):
>     #C++ version
>     nrows, ncolumns = m.shape
>
>     res = numpy.zeros((nrows, ncolumns), float)
>     code = r"""
>     for (int i=0; i<nrows; i++)
>     {
>         for (int j=0; j<ncolumns; j++)
>         {
>             res(i) += m(i,j)*v(j);
>         }
>     }
>     """
>
>     #err = weave.inline(code, ['nrows', 'ncolumns', 'res', 'm', 'v'],
>     #                   type_converters=converters.blitz, compiler='mingw32', verbose=2)
>     err = weave.inline(code, ['nrows', 'ncolumns', 'res', 'm', 'v'],
>                        verbose=2)

There may be windows-specific problems (I'm on linux), but your code as above
simply can't work, because you're assuming blitz behavior (parens on m(i,j)
calls) with the blitz call commented out. On Linux, this version:

##########################################################
import numpy
from scipy import weave
from scipy.weave import converters

def prod(m, v):
    #C++ version
    nrows, ncolumns = m.shape
    assert v.ndim==1 and ncolumns==v.shape[0], "Shape mismatch in prod"
    res = numpy.zeros(nrows, float)
    code = r"""
    for (int i=0; i

From discerptor at gmail.com  Mon Dec  3 17:22:02 2007
From: discerptor at gmail.com (Joshua Lippai)
Date: Mon, 3 Dec 2007 14:22:02 -0800
Subject: [SciPy-user] Error in scipy.test(10) even after compiling successfully - Mac OS X 10.4.11 Apple GCC 4.0.1
In-Reply-To: <9911419a0712030203r7d631be8qdd67e2f3acef5038@mail.gmail.com>
References: <9911419a0711291017r22d48580q2c1652855d253c86@mail.gmail.com>
	<474F6E3B.1020209@ar.media.kyoto-u.ac.jp>
	<9911419a0712030203r7d631be8qdd67e2f3acef5038@mail.gmail.com>
Message-ID: <9911419a0712031422y4b96c0d7ke57d5d8177837d5c@mail.gmail.com>

On Dec 3, 2007 2:03 AM, Joshua Lippai wrote:
>
> On Nov 29, 2007 5:58 PM, David Cournapeau wrote:
> >
> > Joshua Lippai wrote:
> > > Hello all,
> > >
> > > After building scipy, I ran the scipy tests and got this error in check_integer:
> > >
> > > ======================================================================
> > > ERROR: check_integer (scipy.io.tests.test_array_import.TestReadArray)
> > > ----------------------------------------------------------------------
> > > Traceback (most recent call last):
> > >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py",
> > > line 55, in check_integer
> > >     from scipy import stats
> > >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py",
> > > line 7, in <module>
> > >     from stats import *
> > >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py",
> > > line 191, in <module>
> > >     import scipy.special as special
> > >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py",
> > > line 8, in <module>
> > >     from basic import *
> > >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py",
> > > line 8, in <module>
> > >     from _cephes import *
> > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so,
> > > 2): Symbol not found: __gfortran_pow_r8_i4
> > > Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so
> > > Expected in: dynamic lookup
> > >
> > > It would appear it's not finding a symbol for gfortran in one of the
> > > files, and this in turn makes it impossible to import _cephes. How
> > > would I go about remedying this?
> > >
> > This symbol is in the gfortran runtime. We need the complete build log
> > to be sure about the exact problem, but I expect some mismatch between
> > fortran compilers (do you have several fortran compilers on your machine?).
> >
> > cheers,
> >
> > David
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >

> That sounds about right... looking at the build dialog, certain
> commands seem to want to use gnu95 while the others want gfortran.
>
> unifing config_fc, config, build_clib, build_ext, build commands
> --fcompiler options
> commands have different --fcompiler options: ['gfortran', 'gnu95'],
> using first in list as default
>
> I tried setting the --fcompiler option for all of the commands
> mentioned there to gfortran (at least it changed the default from
> gnu95 to gfortran like I wanted), but it still gives that message.
> Anything else I can do to make this confusion for the compiler go
> away?
>
> Josh

Here's the full dialog for build and install, btw (and it seems to not
give me that message anymore, but the error still shows up in
scipy.test(1,10)):

Computer:~/python_sources/scipy Josh$ CC=/usr/bin/gcc CXX=/usr/bin/g++ python setup.py build_src config_fc --fcompiler=gfortran config --fcompiler=gfortran build_clib --fcompiler=gfortran build_ext --fcompiler=gfortran build --fcompiler=gfortran

mkl_info:
  libraries mkl,vml,guide not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

fftw3_info:
  libraries fftw3 not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  FOUND:
    libraries = ['fftw3']
    library_dirs = ['/usr/local/lib']
    define_macros = [('SCIPY_FFTW3_H', None)]
    include_dirs = ['/usr/local/include']

djbfft_info:
  NOT AVAILABLE

blas_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec']

non-existing path in 'scipy/linsolve': 'tests'

umfpack_info:
  libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  libraries umfpack not found in /usr/local/lib
  libraries umfpack not found in /usr/lib
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/system_info.py:414: UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [umfpack]) or by setting the
    UMFPACK environment variable.
  warnings.warn(self.notfounderror.__doc__)
  NOT AVAILABLE

/Users/Josh/python_sources/scipy/scipy/__init__.py:18: UserWarning: Module scipy was already imported from /Users/Josh/python_sources/scipy/scipy/__init__.pyc, but /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages is being added to sys.path
  import pkg_resources as _pr # activate namespace packages (manipulates __path__)

running build_src
building py_modules sources
building library "dfftpack" sources
building library "linpack_lite" sources
building library "mach" sources
building library "quadpack" sources
building library "odepack" sources
building library "fitpack" sources
building library "superlu_src" sources
building library "odrpack" sources
building library "minpack" sources
building library "rootfind" sources
building library "c_misc" sources
building library "cephes" sources
building library "mach" sources
building library "toms" sources
building library "amos" sources
building library "cdf" sources
building library "specfun" sources
building library "statlib" sources
building extension "scipy.cluster._vq" sources
building extension "scipy.fftpack._fftpack" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.fftpack.convolve" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.integrate._quadpack" sources
building extension "scipy.integrate._odepack" sources
building extension "scipy.integrate.vode" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.interpolate._fitpack" sources
building extension "scipy.interpolate.dfitpack" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/scipy/interpolate/dfitpack-f2pywrappers.f' to sources.
building extension "scipy.io.numpyio" sources
building extension "scipy.lib.blas.fblas" sources
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources.
building extension "scipy.lib.blas.cblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/lib/blas/cblas.pyf' to sources.
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.flapack" sources
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.clapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/lib/lapack/clapack.pyf' to sources.
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.calc_lwork" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.atlas_version" sources
building extension "scipy.linalg.fblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/fblas.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources.
building extension "scipy.linalg.cblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/cblas.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.flapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/flapack.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources.
building extension "scipy.linalg.clapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/clapack.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg._flinalg" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.calc_lwork" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.atlas_version" sources
building extension "scipy.linalg._iterative" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linsolve._zsuperlu" sources
building extension "scipy.linsolve._dsuperlu" sources
building extension "scipy.linsolve._csuperlu" sources
building extension "scipy.linsolve._ssuperlu" sources
building extension "scipy.linsolve.umfpack.__umfpack" sources
building extension "scipy.odr.__odrpack" sources
building extension "scipy.optimize._minpack" sources
building extension "scipy.optimize._zeros" sources
building extension "scipy.optimize._lbfgsb" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.optimize.moduleTNC" sources
building extension "scipy.optimize._cobyla" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.optimize.minpack2" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.signal.sigtools" sources
building extension "scipy.signal.spline" sources
building extension "scipy.sparse._sparsetools" sources
building extension "scipy.special._cephes" sources
building extension "scipy.special.specfun" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.statlib" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.futil" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.mvn" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/scipy/stats/mvn-f2pywrappers.f' to sources.
building extension "scipy.ndimage._nd_image" sources
building extension "scipy.ndimage.segment._segmenter" sources
building extension "scipy.stsci.convolve._correlate" sources
building extension "scipy.stsci.convolve._lineshape" sources
building extension "scipy.stsci.image._combine" sources
building data_files sources
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running config
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
customize Gnu95FCompiler
Found executable /usr/local/bin/gfortran
customize Gnu95FCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
library 'mach' defined more than once, overwriting build_info
{'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}...
with
{'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}...
extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize Gnu95FCompiler
customize Gnu95FCompiler using build_ext
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running build_py
copying scipy/__svn_version__.py -> build/lib.macosx-10.4-ppc-2.5/scipy
copying build/src.macosx-10.4-ppc-2.5/scipy/__config__.py -> build/lib.macosx-10.4-ppc-2.5/scipy
copying scipy/io/mmio.py -> build/lib.macosx-10.4-ppc-2.5/scipy/io

Computer:~/python_sources/scipy Josh$ sudo python setup.py install
Password:

mkl_info:
  libraries mkl,vml,guide not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

fftw3_info:
  libraries fftw3 not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  FOUND:
    libraries = ['fftw3']
    library_dirs = ['/usr/local/lib']
    define_macros = [('SCIPY_FFTW3_H', None)]
    include_dirs = ['/usr/local/include']

djbfft_info:
  NOT AVAILABLE

blas_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec']

non-existing path in 'scipy/linsolve': 'tests'

umfpack_info:
  libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/2.5/lib
  libraries umfpack not found in /usr/local/lib
  libraries umfpack not found in /usr/lib
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/system_info.py:414: UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [umfpack]) or by setting the
    UMFPACK environment variable.
  warnings.warn(self.notfounderror.__doc__)
  NOT AVAILABLE

/Users/Josh/python_sources/scipy/scipy/__init__.py:18: UserWarning: Module scipy was already imported from /Users/Josh/python_sources/scipy/scipy/__init__.pyc, but /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages is being added to sys.path
  import pkg_resources as _pr # activate namespace packages (manipulates __path__)

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
building py_modules sources
building library "dfftpack" sources
building library "linpack_lite" sources
building library "mach" sources
building library "quadpack" sources
building library "odepack" sources
building library "fitpack" sources
building library "superlu_src" sources
building library "odrpack" sources
building library "minpack" sources
building library "rootfind" sources
building library "c_misc" sources
building library "cephes" sources
building library "mach" sources
building library "toms" sources
building library "amos" sources
building library "cdf" sources
building library "specfun" sources
building library "statlib" sources
building extension "scipy.cluster._vq" sources
building extension "scipy.fftpack._fftpack" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.fftpack.convolve" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.integrate._quadpack" sources
building extension "scipy.integrate._odepack" sources
building extension "scipy.integrate.vode" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.interpolate._fitpack" sources
building extension "scipy.interpolate.dfitpack" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/scipy/interpolate/dfitpack-f2pywrappers.f' to sources.
building extension "scipy.io.numpyio" sources
building extension "scipy.lib.blas.fblas" sources
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources.
building extension "scipy.lib.blas.cblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/lib/blas/cblas.pyf' to sources.
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.flapack" sources
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.clapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/lib/lapack/clapack.pyf' to sources.
f2py options: ['skip:', ':']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.calc_lwork" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.lib.lapack.atlas_version" sources
building extension "scipy.linalg.fblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/fblas.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources.
building extension "scipy.linalg.cblas" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/cblas.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.flapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/flapack.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/build/src.macosx-10.4-ppc-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources.
building extension "scipy.linalg.clapack" sources
adding 'build/src.macosx-10.4-ppc-2.5/scipy/linalg/clapack.pyf' to sources.
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg._flinalg" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.calc_lwork" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linalg.atlas_version" sources
building extension "scipy.linalg._iterative" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.linsolve._zsuperlu" sources
building extension "scipy.linsolve._dsuperlu" sources
building extension "scipy.linsolve._csuperlu" sources
building extension "scipy.linsolve._ssuperlu" sources
building extension "scipy.linsolve.umfpack.__umfpack" sources
building extension "scipy.odr.__odrpack" sources
building extension "scipy.optimize._minpack" sources
building extension "scipy.optimize._zeros" sources
building extension "scipy.optimize._lbfgsb" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.optimize.moduleTNC" sources
building extension "scipy.optimize._cobyla" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.optimize.minpack2" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.signal.sigtools" sources
building extension "scipy.signal.spline" sources
building extension "scipy.sparse._sparsetools" sources
building extension "scipy.special._cephes" sources
building extension "scipy.special.specfun" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.statlib" sources
f2py options: ['--no-wrap-functions']
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.futil" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
building extension "scipy.stats.mvn" sources
f2py options: []
adding 'build/src.macosx-10.4-ppc-2.5/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-ppc-2.5' to include_dirs.
adding 'build/src.macosx-10.4-ppc-2.5/scipy/stats/mvn-f2pywrappers.f' to sources.
building extension "scipy.ndimage._nd_image" sources
building extension "scipy.ndimage.segment._segmenter" sources
building extension "scipy.stsci.convolve._correlate" sources
building extension "scipy.stsci.convolve._lineshape" sources
building extension "scipy.stsci.image._combine" sources
building data_files sources
running build_py
copying scipy/__svn_version__.py -> build/lib.macosx-10.4-ppc-2.5/scipy
copying build/src.macosx-10.4-ppc-2.5/scipy/__config__.py -> build/lib.macosx-10.4-ppc-2.5/scipy
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
customize NAGFCompiler
Could not locate executable f95
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize IBMFCompiler
Could not locate executable xlf90
Could not locate executable xlf
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize GnuFCompiler
Could not locate executable g77
customize Gnu95FCompiler
Found executable /usr/local/bin/gfortran
customize Gnu95FCompiler
customize Gnu95FCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
library 'mach' defined more than once, overwriting build_info
{'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}...
with
{'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}...
extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)]
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize NAGFCompiler
customize AbsoftFCompiler
customize IBMFCompiler
customize IntelFCompiler
customize GnuFCompiler
customize Gnu95FCompiler
customize Gnu95FCompiler
customize Gnu95FCompiler using build_ext
running install_lib
copying build/lib.macosx-10.4-ppc-2.5/scipy/__config__.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy
copying build/lib.macosx-10.4-ppc-2.5/scipy/__svn_version__.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy
copying build/lib.macosx-10.4-ppc-2.5/scipy/io/mmio.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/__config__.py to __config__.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/__svn_version__.py to __svn_version__.pyc
byte-compiling /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mmio.py to mmio.pyc
running install_data
copying scipy/io/tests/test_mmio.py -> /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/
running install_egg_info
Writing /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy-0.7.0.dev3612-py2.5.egg-info

From giulio.bottazzi at gmail.com  Mon Dec  3 17:32:07 2007
From: giulio.bottazzi at gmail.com (Giulio Bottazzi)
Date: Mon, 3 Dec 2007 23:32:07 +0100
Subject: [SciPy-user] strange interaction between scipy.stats and scipy.optimize.minpack
Message-ID: <9fda5e550712031432w6da16666i27dd82b1c94624d2@mail.gmail.com>

Hi all,
I don't know if this has already been reported, but I've noticed some
strange behavior. Consider the following script:

#START of fit.py ----------------------
from numpy import array
import scipy.stats
from scipy.optimize.minpack import leastsq

empfreq=array([.31,.57,.12])

def thempdiff(p):
    thfreq = array([(1-p[0])*(1-p[0]), 2*(1-p[0])*p[0], p[0]*p[0]])
    return thfreq - empfreq

print leastsq(thempdiff,x0=[.5], full_output=1)
#END of fit.py -------------------------

Now, running it, I obtain a seg fault:

> python fit.py
zsh: segmentation fault  python fit.py

If instead 1) I remove the line "import scipy.stats", or 2) I remove the
option "full_output=1" from leastsq, the script runs smoothly.

I don't understand what's going on. Is somebody else able to reproduce
it? Any help?

Giulio.

-- 
Giulio Bottazzi
http://giulio.bottazzi.googlepages.com
PGP Key ID: BAB0A33F

From fperez.net at gmail.com  Mon Dec  3 17:54:21 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 3 Dec 2007 15:54:21 -0700
Subject: [SciPy-user] weave and compiler install help
In-Reply-To: <833211.36710.qm@web58002.mail.re3.yahoo.com>
References: <833211.36710.qm@web58002.mail.re3.yahoo.com>
Message-ID:

On Dec 3, 2007 2:30 PM, Michael ODonnell wrote:
>
> This may help give someone greater insight into my problem.
> If I run the command created by python:
>
> cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG
> -IC:\Python24\lib\site-packages\scipy\weave
> -IC:\Python24\lib\site-packages\scipy\weave\scxx
> -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include
> -IC:\Python24\PC
> /Tpc:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860521.cpp
> /Foc:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860521.obj
> /Zm1000
>
> at the cmd prompt I get the following error:
>
> c:\Python24\include\pyconfig.h(30) : fatal error C1083: Cannot open include
> file: 'io.h': No such file or directory
>
> This command was generated by weave.inline, but I copied it into the cmd
> window.
>
> Does anyone have any ideas what I should/can do?

1. Attach the intermediate files (like
c:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860521.cpp)
to the ticket, so either I or someone else can later download them to
debug the problem directly. Though I'm going to guess this might be a
local configuration problem somewhere, since no one else is reporting
similar issues (and we do have win32 weave users). Very strange.

2. At least on this list, please use plain text for email. HTML makes
reading your messages a huge pain.

Thanks,

f

From odonnems at yahoo.com  Mon Dec  3 18:27:47 2007
From: odonnems at yahoo.com (Michael ODonnell)
Date: Mon, 3 Dec 2007 15:27:47 -0800 (PST)
Subject: [SciPy-user] weave and compiler install help
Message-ID: <951975.93965.qm@web58007.mail.re3.yahoo.com>

Thanks Fernando for your suggestion. (Note: please use the command below and
not the command in the previous posting, because the filename changed;
obviously the paths will need to be altered as well.) I am attaching the .cpp
file that was created using weave.inline and the compiler.
I mentioned that when I tried to run this at the command prompt, I got an error about not being able to find the io.h file. I did find this file in /Microsoft Visual C++ Toolkit 2003/include and I have this path set in the environment variables so this seems strange to me. cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC /Tpc:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.cpp /Foc:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.obj /Zm1000 Thanks for the assistance, michael PS I noticed also that /DNDEBUG is not an option for the SDK window compiler. Could this be a problem or am I wrong. I am using this link: http://msdn2.microsoft.com/en-us/library/610ecb4h(VS.80).aspx to look at the options for SDK compiler. I looked at all the versions and did not see this as an option for any of them. ____________________________________________________________________________________ Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sc_552cccf5dbf4f6eadd273cdcbd5860520.zip Type: application/zip Size: 3731 bytes Desc: not available URL: From odonnems at yahoo.com Mon Dec 3 18:50:59 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Mon, 3 Dec 2007 15:50:59 -0800 (PST) Subject: [SciPy-user] weave and compiler install help Message-ID: <389288.11012.qm@web58005.mail.re3.yahoo.com> Is there a chance I need to set an environment variable that points to the SDK includes. 
I noticed that if I copied over C:\Program Files\Microsoft Visual C++ Toolkit 2003\include\io.h and ran the compiler, an error occurred for no float.h files (also found in C:\Program Files\Microsoft Visual C++ Toolkit 2003\include). In the command line / syntax, which I posted, there is no include for this directory, but I would think the compiler would see this. And it still does not explain why mingw or cygwin is not working for me either. thanks and sorry for all the stupid postings, suggestions from me! mike ____________________________________________________________________________________ Be a better pen pal. Text or chat with friends inside Yahoo! Mail. See how. http://overview.mail.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Mon Dec 3 19:13:17 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 3 Dec 2007 17:13:17 -0700 Subject: [SciPy-user] weave and compiler install help In-Reply-To: <389288.11012.qm@web58005.mail.re3.yahoo.com> References: <389288.11012.qm@web58005.mail.re3.yahoo.com> Message-ID: On Dec 3, 2007 4:50 PM, Michael ODonnell wrote: > > Is there a chance I need to set an environment variable that points to the > SDK includes. I noticed that if I copied over C:\Program Files\Microsoft > Visual C++ Toolkit 2003\include\io.h and ran the compiler, an error occurred > for no float.h files (also found in C:\Program Files\Microsoft Visual C++ > Toolkit 2003\include). In the command line / syntax, which I posted, there > is no include for this directory, but I would think the compiler would see > this. And it still does not explain why mingw or cygwin is not working for > me either. > > > thanks and sorry for all the stupid postings, suggestions from me! If your compiler isn't finding float.h, then it really sounds like a configuration problem. 
I'm afraid on that front I don't know how to help, since I've never used a compiler under Windows, but perhaps someone else may be able to provide you with some guidance. Cheers, f From matthieu.brucher at gmail.com Tue Dec 4 01:42:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 4 Dec 2007 07:42:05 +0100 Subject: [SciPy-user] weave and compiler install help In-Reply-To: References: <389288.11012.qm@web58005.mail.re3.yahoo.com> Message-ID: Last time I tried to use a compiler other than GCC with weave, it didn't work because the Blitz headers are not included for another compiler in the repository. So even if the compiler is correctly installed, you won't be able to use weave because of this. Matthieu 2007/12/4, Fernando Perez : > > On Dec 3, 2007 4:50 PM, Michael ODonnell wrote: > > > > Is there a chance I need to set an environment variable that points to > the > > SDK includes. I noticed that if I copied over C:\Program Files\Microsoft > > Visual C++ Toolkit 2003\include\io.h and ran the compiler, an error > occurred > > for no float.h files (also found in C:\Program Files\Microsoft Visual > C++ > > Toolkit 2003\include). In the command line / syntax, which I posted, > there > > is no include for this directory, but I would think the compiler would > see > > this. And it still does not explain why mingw or cygwin is not working > for > > me either. > > > > > > thanks and sorry for all the stupid postings, suggestions from me! > > If your compiler isn't finding float.h, then it really sounds like a > configuration problem. I'm afraid on that front I don't know how to > help, since I've never used a compiler under Windows, but perhaps > someone else may be able to provide you with some guidance.
> > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Dec 4 02:15:12 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 Dec 2007 08:15:12 +0100 Subject: [SciPy-user] strange interaction between scipy.stats and scipy.optimize.minpack In-Reply-To: <9fda5e550712031432w6da16666i27dd82b1c94624d2@mail.gmail.com> References: <9fda5e550712031432w6da16666i27dd82b1c94624d2@mail.gmail.com> Message-ID: On Mon, 3 Dec 2007 23:32:07 +0100 "Giulio Bottazzi" wrote: > Hi all, > I don't know if it has been already reported but I've >noticed a > strange behavior. Consider the following script > > #START of fit.py ---------------------- > > from numpy import array > import scipy.stats > from scipy.optimize.minpack import leastsq > > empfreq=array([.31,.57,.12]) > > def thempdiff(p): > thfreq = array([(1-p[0])*(1-p[0]), >2*(1-p[0])*p[0],p[0]*p[0] ]) > return thfreq - empfreq > > print leastsq(thempdiff,x0=[.5], full_output=1) > > #END of fit.py ------------------------- > > now running it I obtain a seg fault > >> python fit.py > zsh: segmentation fault python fit.py > > If instead > > 1) I remove the line "import scipy.stats" or > > 2) I remove the option "full_output=1" from leastsq > > the script runs smoothly. I don't understand what's >going on. Is > somebody else able to reproduce it? Any help? > > Giulio. 
> > -- > Giulio Bottazzi > http://giulio.bottazzi.googlepages.com > PGP Key ID:BAB0A33F > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user I cannot reproduce your problem. >>> numpy.__version__ '1.0.5.dev4528' >>> scipy.__version__ '0.7.0.dev3610' python fit.py (0.42369157713188349, array([[ 0.46734154]]), {'qtf': array([ -4.51744596e-06]), 'nfev': 17, 'fjac': array([[ 1.46279277, -0.20867347, -0.5792871 ]]), 'fvec': array([ 0.0221314 , -0.08164595, 0.05951455]), 'ipvt': array([1], dtype=int32)}, 'Both actual andpredicted relative reductions in the sum of squares\n are at most 0.000000', 1) Nils From giulio.bottazzi at gmail.com Tue Dec 4 03:35:01 2007 From: giulio.bottazzi at gmail.com (Giulio Bottazzi) Date: Tue, 4 Dec 2007 09:35:01 +0100 Subject: [SciPy-user] strange interaction between scipy.stats and scipy.optimize.minpack In-Reply-To: References: <9fda5e550712031432w6da16666i27dd82b1c94624d2@mail.gmail.com> Message-ID: <9fda5e550712040035t5b1711caj406a5c94a33cf26d@mail.gmail.com> I recompiled everything from scratch but I still get the seg fault. I'm using python 2.4.4, numpy 1.0.4 and scipy 0.6 compiled with GCC ver. 4.1.2 on an amd64 box. I'll wait until the packages get updated in my distro (gentoo) and I'll check it again. Thank you for the feedback;. Giulio. > > I cannot reproduce your problem. 
> > >>> numpy.__version__ > '1.0.5.dev4528' > >>> scipy.__version__ > '0.7.0.dev3610' > > > python fit.py > (0.42369157713188349, array([[ 0.46734154]]), {'qtf': > array([ -4.51744596e-06]), 'nfev': 17, 'fjac': array([[ > 1.46279277, -0.20867347, -0.5792871 ]]), 'fvec': array([ > 0.0221314 , -0.08164595, 0.05951455]), 'ipvt': array([1], > dtype=int32)}, 'Both actual andpredicted relative > reductions in the sum of squares\n are at most 0.000000', > 1) > > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Giulio Bottazzi http://giulio.bottazzi.googlepages.com PGP Key ID:BAB0A33F From giovanni.samaey at cs.kuleuven.be Tue Dec 4 06:05:14 2007 From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey) Date: Tue, 4 Dec 2007 12:05:14 +0100 Subject: [SciPy-user] lil_matrix In-Reply-To: References: Message-ID: <6750B3E0-DCA6-44E8-998E-D6E6DEA64099@cs.kuleuven.be> Hi all, I have "solved" the issue by writing B[:5,:5]=A.tocsr() This fills up B correctly. However, this approach appears to be very inefficient. (Obviously, I am using *much* larger values than 5.) It seems that this inefficiency is not due to the conversion but due to the filling up of B. This deduction is made based on the fact that adding two lil matrices returns a csr matrix in only a fraction of the time the above operation takes. I am using this approach because I have a very large matrix A in sparse format and I want to add a border (and extra row and column). Any help would be appreciated. Best, Giovanni On 30 Nov 2007, at 18:26, Giovanni Samaey wrote: > Hello all, > > I have a basic question concerning lil_matrix slice assignments. > The following code does not behave as intended : > > import scipy.sparse as S > > A = S.lil_matrix((5,5)) > A.setdiag(ones(5)) > B = S.lil_matrix((6,6)) > B[:5,:5]=A > B.getnnz() > > This returns 0 instead of the expected 5. 
Am I using the code > incorrectly ? > > Giovanni > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From david at ar.media.kyoto-u.ac.jp Tue Dec 4 06:21:39 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Dec 2007 20:21:39 +0900 Subject: [SciPy-user] scipy.scons branch: building numpy and scipy with scons Message-ID: <47553843.7020709@ar.media.kyoto-u.ac.jp> Hi, I've just reached a first usable scipy.scons branch, so that scipy can be built entirely with scons (assuming you build numpy with scons too). You can get it from http://svn.scipy.org/svn/scipy/branches/scipy.scons. To build it, you just need to use numpy.scons branch instead of the trunk, and use setupscons.py instead of setup.py. Again, I would be happy to hear about failures, success (please report a ticket in this case), etc... Some of the most interesting things I can think of which work with scons: - you can control fortran and C flags from the command line: CFLAGS and FFLAGS won't override necessary flags, only optimization flags, so you can easily play with warning, optimization flags. For example: CFLAGS='-W -Wall -Wextra -DDEBUG' FFLAGS='-DDEBUG -W -Wall -Wextra' python setupscons build for debugging will work. No need to care about -fPIC and co, all this is handled automatically. - dependencies are handled correctly thanks to scons: for example, if you change a library (e.g. by using MKL=None to disable mkl), only link step will be redone. platforms known to work ----------------------- - linux with gcc/g77 or gcc/gfortran (both atlas and mkl 9 were tested). - linux with intel compilers (intel and gnu compilers can also be mixed, AFAIK). - solaris with sun compilers with sunperf, only tested on indiana. 
Notable non-working things: --------------------------- - using netlib BLAS and LAPACK is not supported (only optimized ones are available: sunperf, atlas, mkl, and vecLib/Accelerate). - parallel build does NOT work (AFAICS, this is because f2py does some things which are not thread-safe, but I have not yet found the exact problem). - I have not yet implemented the umfpack checker, and as such umfpack cannot be built yet - I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers - c++ compiler configurations are not handled either. cheers, David From stefan at sun.ac.za Tue Dec 4 16:29:25 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 4 Dec 2007 23:29:25 +0200 Subject: [SciPy-user] Algorithm of Viterbi In-Reply-To: <674a602a0712031126x14f00f61icbc2fb89d110fe63@mail.gmail.com> References: <674a602a0712011237q2dfe3ae2oe3334c7eb53a78ae@mail.gmail.com> <20071203152823.GK13844@mentat.za.net> <674a602a0712031126x14f00f61icbc2fb89d110fe63@mail.gmail.com> Message-ID: <20071204212925.GF8346@mentat.za.net> Hi Dorian On Mon, Dec 03, 2007 at 08:26:52PM +0100, Dorian wrote: > I downloaded the files but I don't understand how to compile and for which > example this algorithm works. Sorry, I don't know how to compile things under Windows. Maybe some other people on the list have experience with ctypes on that platform. > Could you please give me a clear explanation step by step how it works ? > I'm using python on Windows . There are two functions, "find" and "remove". I wrote these to see how well the seam carving algorithm in http://www.shaiavidan.org/papers/imretFinal.pdf works. The first finds the shortest path through the image, and the last removes that path. It is a trivial matter to modify the algorithm to look for minimal path cost, instead of minimal difference cost.
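The shortest-path search that "find" performs can be sketched with plain numpy dynamic programming. This is only a minimal illustration of the idea, not the actual ctypes code behind the link, and `find_seam` is a made-up name:

```python
import numpy as np

def find_seam(cost):
    """Dynamic-programming sketch of a vertical seam finder: returns,
    for each row, the column index of the cheapest 8-connected
    top-to-bottom path through the cost image."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    # forward pass: accumulate the cheapest way to reach each cell
    for r in range(1, rows):
        left = np.r_[np.inf, acc[r - 1, :-1]]   # neighbour up-left
        up = acc[r - 1]                          # neighbour directly up
        right = np.r_[acc[r - 1, 1:], np.inf]    # neighbour up-right
        acc[r] += np.minimum(np.minimum(left, up), right)
    # backward pass: trace the path from the cheapest bottom cell
    seam = np.empty(rows, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for r in range(rows - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam[r] = lo + int(np.argmin(acc[r, lo:hi]))
    return seam
```

The greedy column-maximum shortcut mentioned below skips the backward pass entirely, which is why it is cheaper but not guaranteed to return a connected path.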
Now that I think about it, there is an approximation to Viterbi that is even easier to implement -- simply take the maximum position in each column. It doesn't give the best path (it is not even guaranteed to be connected), but often it is a good approximation, and it is easy to implement. See for example http://www.cs.colostate.edu/~hamiltom/hmm/hmm.txt I can also suggest the following article, which gives a very good overview of HMMs and Viterbi: http://dx.doi.org/10.1109/5.18626 Cheers Stéfan From pablo_mitchell at yahoo.com Tue Dec 4 23:25:17 2007 From: pablo_mitchell at yahoo.com (pablo mitchell) Date: Tue, 4 Dec 2007 20:25:17 -0800 (PST) Subject: [SciPy-user] TimeSeries - Questions Re lib/moving_funcs.py Message-ID: <859102.72857.qm@web30805.mail.mud.yahoo.com> I have comments/questions regarding this module: * Why do the comments in the source code indicate that many of the functions are intended only for 1-D arrays (mov_var for example)? A very brief look at the source doesn't indicate why this should be. Also, throwing some quick example code together using 2-D arrays generates results. The code does not seem to bomb. * The covariance related code is constructed in a way I would not expect. Specifically, most of the functionality appears as facade functions fed into _mov_var_stddev. This seems to be a multi-purpose function in the module. Wouldn't it be more natural to define a general covariance function (ignoring bias or span) with pseudo-code: cov(x,y) = [x - ave(x)]*[y - ave(y)]/n The rest of the functions fall out of this like: var(x) = cov(x,x) stddev(x) = sqrt[var(x)] corr(x,y) = cov(x,y)/stddev(x)/stddev(y) Then add-ons like regression coefficients and zscores become trivial. * This is a really nice package! Look forward to seeing it grow.
From pgmdevlist at gmail.com Wed Dec 5 00:07:56 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 5 Dec 2007 00:07:56 -0500 Subject: [SciPy-user] TimeSeries - Questions Re lib/moving_funcs.py In-Reply-To: <859102.72857.qm@web30805.mail.mud.yahoo.com> References: <859102.72857.qm@web30805.mail.mud.yahoo.com> Message-ID: <200712050007.57108.pgmdevlist@gmail.com> Hello Pablo, Yes ! Another user ! > * Why do the comments in the source code indicate that many of the > functions are intended only for 1-D arrays (mov_var for example)? Because we're confident it works fine with 1D, but haven't tested things thoroughly enough to be as confident with nD arrays. And we don't really want to advertise something we can't deliver. > A very brief look at the source doesn't indicate why this should be. Also, > throwing some quick example code together using 2-D arrays generates > results. The code does not seem to bomb. Actually, I prefer code that bombs to code that deceives me into believing it works when it does not. > * The covariance related code is constructed in a way I would not expect. All the moving functions of this module are wrappers to C functions. We use C for performance more than for portability (cf the regular problems with installation...). What you suggest sounds like a good idea, I'll kick the ball to my co-author, he's the C speaker of the duo. > * This is a really nice package! Look forward to seeing it grow. Thanks a lot for your support ! I wish I could spend more time on the code, and less time on writing papers, however... Your suggestions and comments are always welcome.
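In pure numpy, the facade layout Pablo proposes might look like the following sketch. The names and signatures are hypothetical stand-ins (the real module wraps C for speed), the loop is deliberately naive, and the 1/n normalisation matches the biased pseudo-code above:

```python
import numpy as np

def mov_cov(x, y, span):
    """Trailing moving covariance over a window of `span` samples,
    with NaN padding where the window is incomplete (hypothetical
    pure-Python sketch, not the module's actual implementation)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = np.full(x.shape, np.nan)
    for i in range(span - 1, len(x)):
        xs = x[i - span + 1:i + 1]
        ys = y[i - span + 1:i + 1]
        out[i] = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    return out

# the other moving statistics then fall out as facades:
def mov_var(x, span):
    return mov_cov(x, x, span)

def mov_stddev(x, span):
    return np.sqrt(mov_var(x, span))

def mov_corr(x, y, span):
    return mov_cov(x, y, span) / (mov_stddev(x, span) * mov_stddev(y, span))
```

For example, `mov_var([1, 2, 3, 4], 2)` gives NaN for the first element and the biased two-sample variance 0.25 thereafter.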
From pablo_mitchell at yahoo.com Wed Dec 5 00:47:03 2007 From: pablo_mitchell at yahoo.com (pablo mitchell) Date: Tue, 4 Dec 2007 21:47:03 -0800 (PST) Subject: [SciPy-user] TimeSeries - Questions Re lib/moving_funcs.py Message-ID: <883344.95910.qm@web30811.mail.mud.yahoo.com> The C source code seems able to accommodate my suggestion of making the covariance the central function and the remaining ones the facades. Just a suggestion. Another suggestion: a new class similar to TimeSeriesRecords. For my purposes, I would prefer to be able to instantiate a 2-D time-series that can be indexed not just on dates (temporal) but also on variable names (spatial if you will). For example, a time-series structure like: var-1 var-2 ... var-n date-1 x-11 x-12 ... x-1n date-2 x-21 x-22 ... x-2n ... date-m x-m1 x-m2 ... x-mn could be sliced time-series[date-i:date-j, :] as well as time-series[:, var-i:var-j] Thanks again -- P ----- Original Message ---- From: Pierre GM To: pablo mitchell ; SciPy Users List Sent: Tuesday, December 4, 2007 9:07:56 PM Subject: Re: [SciPy-user] TimeSeries - Questions Re lib/moving_funcs.py Hello Pablo, Yes ! Another user ! > * Why do the comments in the source code indicate that many of the > functions are intended only for 1-D arrays (mov_var for example)? Because we're confident it works fine with 1D, but haven't tested things thoroughly enough to be as confident with nD arrays. And we don't really want to advertise something we can't deliver. > A very brief look at the source doesn't indicate why this should be. Also, > throwing some quick example code together suing 2-D arrays generates > results. The code does not seem to bomb. Actually, I prefer a code that bombs than one that deceives me into believing it works when it does not. > * The covariance related code is constructed in a way I would not expect. All the moving functions of this module are wrappers to C functions. 
We use C for performance more than for portability (cf the regular problems with installation...). What you suggest sounds like a good idea, I'll kick the ball to my co-author, he's the C speaker of the duo. > * This is a really nice package! Look forward to seeing it grow. Thanks a lot for your support ! I wish I could spend more time on the code, and less time on writing papers, however... Your suggestions and comments are always welcome. ____________________________________________________________________________________ Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Dec 5 01:23:21 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 5 Dec 2007 01:23:21 -0500 Subject: [SciPy-user] TimeSeries - Questions Re lib/moving_funcs.py In-Reply-To: <883344.95910.qm@web30811.mail.mud.yahoo.com> References: <883344.95910.qm@web30811.mail.mud.yahoo.com> Message-ID: <200712050123.34031.pgmdevlist@gmail.com> On Wednesday 05 December 2007 00:47:03 pablo mitchell wrote: > The C source code seems able to accommodate my suggestion of making the > covariance the central function and the remaining ones the facades. Just a > suggestion. Mmh, you know, I can think about something that would be even better than a suggestion ;). Feel free to start implementing something, we could definitely use some more hands ! > > Another suggestion: a new class similar to TimeSeriesRecords. > > For my purposes, I would prefer to be able to instantiate a 2-D time-series > that can be indexed not just on dates (temporal) but also on variable names > (spatial if you will). Yeah, I'd like to have named columns on an homogeneous 2D array as well. I guess that could be doable with an extra dictionary (column position:field name), and a tailored __getslice__. 
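A rough sketch of that extra-dictionary idea (the class name is made up, and a modern implementation would override `__getitem__` rather than `__getslice__`):

```python
import numpy as np

class NamedColumns(object):
    """Thin wrapper holding a homogeneous 2D array plus a
    field-name -> column-index map, so rows slice positionally
    and columns can be addressed by name."""
    def __init__(self, data, fields):
        self.data = np.asarray(data)
        self.fields = {name: j for j, name in enumerate(fields)}

    def __getitem__(self, key):
        rows, col = key
        if isinstance(col, str):           # a single named column
            return self.data[rows, self.fields[col]]
        return self.data[rows, col]        # otherwise plain indexing

ts = NamedColumns([[1, 10], [2, 20], [3, 30]], ["var1", "var2"])
```

With this, `ts[:, "var1"]` returns the first column and `ts[1, "var2"]` a single cell, while integer and slice indices still behave as on a plain ndarray.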
But I'd see that more as a numpy extension, as a hybrid between a 2D ndarray and a record array. However, would it really be efficient? I have the nagging feeling that the current method for slicing on dates can be improved. From josegomez at gmx.net Wed Dec 5 11:18:09 2007 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 05 Dec 2007 17:18:09 +0100 Subject: [SciPy-user] Efficient "windowing" Message-ID: <20071205161809.18420@gmx.net> Hi, I am trying to apply a simple algorithm to a 2D matrix (an image). What I want to do is, for each pixel, choose the highest (... lowest) value in its 8- or 4-connected neighbours. I have done this using weave.inline, and using a couple of loops, but I was curious if there's some way of doing this using numpy slice syntax? My (allegedly, unelegant) attempts have been versions of the following: b[1:-1,1:-1] = scipy.array([a[0:-2,1:-1] , a[2:,1:-1] , a[1:-1,0:-2] ,\ a[1:-1,2:],a[0:-2,0:-2], a[0:-2,2:], a[2:,0:-2], a[2:,2:]],'f').max() They don't work, because the max() call at the end refers to the whole array, so you are given a constant value array, equal to the max. value of a. Using for loops is very slow when dealing with large arrays. Thanks! Jose From n.martinsen-burrell at wartburg.edu Wed Dec 5 14:58:57 2007 From: n.martinsen-burrell at wartburg.edu (Neil Martinsen-Burrell) Date: Wed, 05 Dec 2007 13:58:57 -0600 Subject: [SciPy-user] Efficient "windowing" In-Reply-To: Message-ID: > Date: Wed, 05 Dec 2007 17:18:09 +0100 > From: "Jose Luis Gomez Dans" > Subject: [SciPy-user] Efficient "windowing" > I am trying to apply a simple algorithm to a 2D matrix (an image). What I want > to do is, for each pixel, choose the highest (... lowest) value in its 8- or > 4-connected neighbours.
I have done this using weave.inline, and using a > couple of loops, but I was curious if there's some way of doing this using > numpy slice syntax? My (allegedly, unelegant) attempts have been versions of > the following: > > b[1:-1,1:-1] = scipy.array([a[0:-2,1:-1] , a[2:,1:-1] , a[1:-1,0:-2] ,\ > a[1:-1,2:],a[0:-2,0:-2], a[0:-2,2:], a[2:,0:-2], a[2:,2:]],'f').max() > > They don't work, because the max() call at the end refers to the whole array, > so you are given a constant value array, equal to the max. value of a. Using > for loops is very slow when dealing with large arrays. I use something like this: import numpy def _neighbors(i,j): return [(i-1,j),(i+1,j),(i,j-1),(i,j+1)] x=numpy.random.random((5,5)) print x[zip(*_neighbors(2,3))].max() Where the zip (* ) is the inverse of zip and converts the list of tuples from _neighbors into a tuple of lists. This uses numpy's fancy indexing to special case when the indices are themselves lists. -Neil -- "Your work is to discover your work and then with all your heart to give yourself to it." -- Buddha From robert.kern at gmail.com Wed Dec 5 15:11:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Dec 2007 14:11:44 -0600 Subject: [SciPy-user] Efficient "windowing" In-Reply-To: <20071205161809.18420@gmx.net> References: <20071205161809.18420@gmx.net> Message-ID: <47570600.7080006@gmail.com> Jose Luis Gomez Dans wrote: > Hi, > I am trying to apply a simple algorithm to a 2D matrix (an image). What I want to do is, for each pixel, choose the highest (... lowest) value in its 8- or 4-connected neighbours. I have done this using weave.inline, and using a couple of loops, but I was curious if there's some way of doing this using numpy slice syntax? 
My (allegedly, unelegant) attempts have been versions of the following: > > b[1:-1,1:-1] = scipy.array([a[0:-2,1:-1] , a[2:,1:-1] , a[1:-1,0:-2] ,\ > a[1:-1,2:],a[0:-2,0:-2], a[0:-2,2:], a[2:,0:-2], a[2:,2:]],'f').max() > > They don't work, because the max() call at the end refers to the whole array, so you are given a constant value array, equal to the max. value of a. Using for loops is very slow when dealing with large arrays. .max() takes an `axis` argument. It also takes an `out` argument that will help you save you from making some large temporaries. In [1]: from numpy import * In [2]: n = 256 In [5]: a = random.random((n,n)).astype(float32) In [7]: b = zeros([n,n], dtype=float32) In [8]: neighbors = array([a[0:-2,1:-1], a[2:,1:-1], a[1:-1,0:-2], a[1:-1,2:], a[0:-2,0:-2], a[0:-2,2:], a[2:,0:-2], a[2:,2:]]) In [9]: neighbors.shape Out[9]: (8, 254, 254) In [10]: neighbors.max(axis=0, out=b[1:-1,1:-1]) Out[10]: array([[ 0.94350582, 0.94350582, 0.94350582, ..., 0.98800218, 0.97231197, 0.61088812], [ 0.94350582, 0.82977545, 0.94350582, ..., 0.61088812, 0.97231197, 0.97231197], [ 0.94350582, 0.94350582, 0.94350582, ..., 0.95455241, 0.95455241, 0.95455241], ..., [ 0.92812324, 0.92812324, 0.95367759, ..., 0.92305821, 0.92305821, 0.92305821], [ 0.95591497, 0.95591497, 0.95367759, ..., 0.96583498, 0.85454881, 0.92305821], [ 0.92812324, 0.95591497, 0.92812324, ..., 0.96583498, 0.92305821, 0.92305821]], dtype=float32) In [11]: b Out[11]: array([[ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0.94350582, 0.94350582, ..., 0.97231197, 0.61088812, 0. ], [ 0. , 0.94350582, 0.82977545, ..., 0.97231197, 0.97231197, 0. ], ..., [ 0. , 0.95591497, 0.95591497, ..., 0.85454881, 0.92305821, 0. ], [ 0. , 0.92812324, 0.95591497, ..., 0.92305821, 0.92305821, 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. 
]], dtype=float32) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From odonnems at yahoo.com Wed Dec 5 18:02:30 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Wed, 5 Dec 2007 15:02:30 -0800 (PST) Subject: [SciPy-user] weave inline progress/questions for a new error Message-ID: <109803.32866.qm@web58014.mail.re3.yahoo.com> I have recently been posting questions about how to implement scipy.weave.inline on a windows XP machine. I have finally made some progress on how to do this. Sorry for the lack of correct terminology and ignorance, but this is outside of my expertise and I only just learned last week what a .cpp and .obj file was. First, I did not have my library and include paths set up exactly right. These corrections allowed me to compile the object files. Second, there is an error in the numpy/distutils/exec_command.py file with regard to finding the correct python executable file on an NT machine. The code was not right, probably due to differences in python versions or something. I have altered this code, which has allowed me to run weave, but it then crashes at a later point. The program creates the c++ source file, the object file and the weave object file but appears to crash when converting this to a python dll file.
I get the following output using verbose (Please skip below this for additional comments and questions): +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ No module named msvccompiler in numpy.distutils; trying from distutils creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e running build_ext running build_src building extension "sc_552cccf5dbf4f6eadd273cdcbd5860520" sources No module named msvccompiler in numpy.distutils; trying from distutils customize MSVCCompiler customize MSVCCompiler using build_ext No module named msvccompiler in numpy.distutils; trying from distutils customize MSVCCompiler Missing compiler_cxx fix for MSVCCompiler customize MSVCCompiler using build_ext building 'sc_552cccf5dbf4f6eadd273cdcbd5860520' extension compiling C sources creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24 creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy creating c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy\weave creating 
c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy\weave\scxx C:\Program Files\Microsoft Visual C++ Toolkit 2003\bin\cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC /Tpc:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.cpp /Foc:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.obj /Zm1000 C:\Program Files\Microsoft Visual C++ Toolkit 2003\bin\cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC /TpC:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp /Foc:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.obj /Zm1000 C:\Program Files\Microsoft Visual C++ Toolkit 2003\bin\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python24\libs /LIBPATH:C:\Python24\PCBuild /EXPORT:initsc_552cccf5dbf4f6eadd273cdcbd5860520 c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.obj c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.obj /OUT:c:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.pyd /IMPLIB:c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.lib LINK : fatal error LNK1104: cannot open file 'msvcprt.lib' Traceback (most recent 
call last): File "C:\Python24\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", line 310, in RunScript exec codeObject in __main__.__dict__ File "C:\Documents and Settings\Michael\Application Data\ESRI\ArcToolbox\scripts\test_weave.py", line 349, in ? main() File "C:\Documents and Settings\Michael\Application Data\ESRI\ArcToolbox\scripts\test_weave.py", line 227, in main weave.inline('printf("%d\\n",a);',['a'], verbose=2, type_converters=converters.blitz) #, compiler = 'msvc', verbose=2, type_converters=converters.blitz, auto_downcast=0) #'msvc' or 'gcc' or 'mingw32' File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 338, in inline auto_downcast = auto_downcast, File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, in compile verbose = verbose, **kw) File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 176, in setup return old_setup(**new_attr) File "C:\Python24\Lib\distutils\core.py", line 166, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "C:\Program Files\Microsoft Visual C++ Toolkit 2003\bin\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Python24\libs /LIBPATH:C:\Python24\PCBuild /EXPORT:initsc_552cccf5dbf4f6eadd273cdcbd5860520 c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.obj c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.obj /OUT:c:\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.pyd 
/IMPLIB:c:\temp\Michael\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\temp\Michael\python24_compiled\sc_552cccf5dbf4f6eadd273cdcbd5860520.lib" failed with exit status 1104 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The error seems to be this: LINK : fatal error LNK1104: cannot open file 'msvcprt.lib' Does anyone know what I may need to do to get this to work? I am writing a document that outlines all the details of what I have done, in case someone else is having these issues. I will post this once I get all this worked out. Thank you to those who have been corresponding with me as I muddle through this!! Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.volz at gmail.com Wed Dec 5 19:56:07 2007 From: erik.volz at gmail.com (Erik Volz) Date: Wed, 5 Dec 2007 16:56:07 -0800 Subject: [SciPy-user] += has strange behavior with multidimensional arrays Message-ID: Sorry if this issue is already well known. It seems that the += and perhaps *= operators do not have correct behavior with multidimensional arrays. Let's say I want to make a simple matrix symmetric across the diagonal. from pylab import * y = ones((10,10)) * arange(1,11) print y yy = y print 'This is incorrect:' y += transpose(y); print y print 'But this still works:' yy = yy + transpose(yy); print yy Clearly Python is modifying the array element by element, not all at once. You might not consider this a bug, but the behavior is so unintuitive (and unlike Matlab) that it would trip up most people. 
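A minimal sketch of the copy-based fix for the aliasing at work here, assuming only NumPy (the array is shrunk to 4x4 for readability; note that whether the uncopied in-place form misbehaves has depended on the NumPy version, since later releases added memory-overlap detection for ufuncs):

```python
import numpy as np

# Same construction as in the example: every row is [1, 2, ..., n].
y = np.ones((4, 4)) * np.arange(1, 5)

# y.T is a view of y's own memory. Copying it first gives the in-place
# add a stable snapshot of the original values, so the result is the
# symmetric matrix y[i, j] = (i + 1) + (j + 1) regardless of update order.
y += np.array(y.T)

print(np.allclose(y, y.T))  # True
```

The same idea (evaluate the right-hand side fully before the in-place write) is what the out-of-place form `yy = yy + yy.T` does implicitly.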
From robert.kern at gmail.com Wed Dec 5 20:04:35 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Dec 2007 19:04:35 -0600 Subject: [SciPy-user] += has strange behavior with multidimensional arrays In-Reply-To: References: Message-ID: <47574AA3.4000403@gmail.com> Erik Volz wrote: > Sorry if this issue is already well-known. > > It seems that += and perhaps *= operators do not have correct behavior > with multidimensional arrays. Well, let's be more clear. It does not have the desired behavior when the RHS contains a view of the array on the LHS. Otherwise, the behavior is correct. Multidimensionality has nothing to do with it. > Let's say I want to make a simple matrix symmetric across the diagonal. > > from pylab import * > y = ones((10,10)) * arange(1,11) > print y > yy = y > print 'This is incorrect:' > y+=transpose(y); print y > print 'But this still works:' > yy = yy+transpose(yy); print yy > > > Clearly python is modifying the array element by element, not > simultaneously. You might not consider this a bug, but the behavior is > so unintuitive (and unlike matlab) that it would trip up most people. It's a straightforward consequence of inplace modification and views. It's not what you want in this case, but there is no way for Python or numpy to know this. To get what you want, you have to give up one or the other. Above, you gave up inplace modification. The other approach is to make a copy: y += array(y.T) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From prabhu at aero.iitb.ac.in Thu Dec 6 01:53:42 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 06 Dec 2007 12:23:42 +0530 Subject: [SciPy-user] surface plotting In-Reply-To: <47546A69.4010700@gmail.com> References: <47546A69.4010700@gmail.com> Message-ID: <47579C76.7010108@aero.iitb.ac.in> Lorenzo Isella wrote: > Hello, >> Sure. Tell us a bit more about your usecase. Do you want to produce >> figures for print/publication, do you want to interact with your data, or >> do you want simply to do some 3D plotting in a program of yours. You can >> do all this with Mayavi, but the solutions will depend a bit of your usecase. >> >> > Well, seen that you ask, I often run thermo-hydraulic simulations. > I usually end up having a set of numerical values to plot on a > non-trivial domain rather than an analytical function. In such cases you might be better served by mayavi. The only difficulty is that for non-trivial domains you'll have to figure out how best to represent your data (structured/unstructured etc.). In fact if you really have velocity data in 3D it is nice to be able to see the flow in 3D using either velocity vectors or streamlines, both of which mayavi supports. > This can sometimes be done using a contour plot in matplotlib, but I am > also looking for alternatives. > For instance, let us say that I want to plot some scalar (say a > component of the velocity) on the cross section of a tube (I simply > generate random numbers here; in reality I would read it from a file). > With pylab, this is what I would do: [...] > > Can I do something similar with mayavi? And how? With the script you sent, you could do this: from enthought.mayavi.tools import mlab mlab.figure() s = mlab.surf(X, Y, Z, colormap='jet') This will give you a 3D surface. You can't set the contours directly but you certainly can modify the returned object or do this from the UI. 
For example, s.enable_contours = True s.contour.filled_contours = True s.contour.contours = range(-17, 16) If you have vector data you can use either the flow or quiver mlab functions. More on that in the mlab documentation. Anyway, attached is a script that does what you need for your example with mlab. It doesn't do anything special that your pylab version doesn't but it does give you 3D and opens up other possibilities. If you have other questions on mlab and datasets for non-trivial domains please don't hesitate to ask on the enthought-dev list. cheers, prabhu -------------- next part -------------- A non-text attachment was scrubbed... Name: tm.py Type: text/x-python Size: 974 bytes Desc: not available URL: From gael.varoquaux at normalesup.org Thu Dec 6 03:48:40 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 6 Dec 2007 09:48:40 +0100 Subject: [SciPy-user] surface plotting In-Reply-To: <47579C76.7010108@aero.iitb.ac.in> References: <47546A69.4010700@gmail.com> <47579C76.7010108@aero.iitb.ac.in> Message-ID: <20071206084840.GB27595@clipper.ens.fr> On Thu, Dec 06, 2007 at 12:23:42PM +0530, Prabhu Ramachandran wrote: > With the script you sent, you could do this: > from enthought.mayavi.tools import mlab > mlab.figure() > s = mlab.surf(X, Y, Z, colormap='jet') > This will give you a 3D surface. You can't set the contours directly > but you certainly can modify the returned object or do this from the UI. > For example, > s.enable_contours = True > s.contour.filled_contours = True > s.contour.contours = range(-17, 16) See also contour_surf which will give you convenient keyword arguments for that. 
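For readers without the attached script, the grid arrays that `mlab.surf` takes can be built with plain NumPy alone; the following is an illustrative sketch (the field, bounds, and resolution are invented, and the actual Mayavi call is left in a comment since it needs the enthought packages installed):

```python
import numpy as np

# Rectangular grid for the cross-section; the bounds and resolution are
# arbitrary stand-ins for real simulation geometry.
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)
X, Y = np.meshgrid(x, y)

# Placeholder scalar field (e.g. one velocity component) sampled on the grid.
Z = np.exp(-(X ** 2 + Y ** 2))

# With Mayavi installed, these are the arrays you would hand to:
#   from enthought.mayavi.tools import mlab
#   s = mlab.surf(X, Y, Z, colormap='jet')
print(X.shape, Y.shape, Z.shape)
```

In a real run, `Z` would come from the simulation output or a file rather than an analytical expression.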
Gaël From pieter.cogghe at gmail.com Thu Dec 6 04:56:29 2007 From: pieter.cogghe at gmail.com (Yuccaplant) Date: Thu, 6 Dec 2007 01:56:29 -0800 (PST) Subject: [SciPy-user] slice window out of an array with a given size, origin and boundary mode Message-ID: Hi, I've got an array A[y,x] from which I want to slice a window of size = (b,a) with the current pixel (y,x) as origin. Just like it's done in many of the filters of scipy.ndimage.filters. But I can't get to the code. Just slicing a window out of the array is not the problem, but for the boundaries I'm a bit lost in way too many for-loops. thanks a lot, Pieter From lorenzo.isella at gmail.com Thu Dec 6 09:20:21 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 6 Dec 2007 15:20:21 +0100 Subject: [SciPy-user] Question about Eggs and easy_install Message-ID: Dear All, This question is more about the operating system than SciPy itself. I recently tried to install mayavi2 on my system (Debian testing). I think I made a bit of a mess, though scipy now works fine apart from a few annoying warnings. First I tried to use eggs and then the procedure described below: >One of the easiest ways to get things installed without eggs right now >is to first install the dependencies -- wxPython 2.6, python-vtk and >numpy and then grab the tarballs here: http://code.enthought.com/downloads/source/ets2.6/ > >Untar them and build them each > cd enthought.traits > ./build.sh > vi/emacs install.sh > ./install.sh I think that Numpy got updated to a version not yet available in the standard Debian testing repositories. My question is: as soon as newer versions of Numpy and SciPy make it into Debian testing, will I be able to install them automatically with the usual apt-get upgrade? In other words: suppose, hypothetically, that tomorrow SciPy 1.0 and Numpy 1.7 are released simultaneously as Debian testing packages; will my system upgrade to them automatically? 
I am simply wondering if I am now "condemned" to use this non-standard (for me) procedure for updating SciPy & co. Of course, I am not blaming the list for the advice I got; I just realize I may not be experienced enough to delve into this at this stage. Many thanks Lorenzo From gael.varoquaux at normalesup.org Thu Dec 6 09:32:09 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 6 Dec 2007 15:32:09 +0100 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: References: Message-ID: <20071206143209.GG2075@clipper.ens.fr> On Thu, Dec 06, 2007 at 03:20:21PM +0100, Lorenzo Isella wrote: > I think that Numpy got updated to a version not yet available in the > standard Debian testing repositories. Can you tell us the result of import numpy print numpy.__file__ print numpy.__version__ import scipy print scipy.__file__ print scipy.__version__ so that we have a better idea of what happened? You are under Debian testing, right? Cheers, Gaël From matthieu.brucher at gmail.com Thu Dec 6 10:35:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 16:35:09 +0100 Subject: [SciPy-user] Sparse eigensolver Message-ID: Hi, Is there a package for scipy (or a function in scipy directly) that can compute the eigenvalues and eigenvectors of a sparse matrix, like the eigs function in Matlab? Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbolla at gmail.com Thu Dec 6 11:52:47 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 6 Dec 2007 17:52:47 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: References: Message-ID: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> sure: scipy.sandbox.arpack it's in the sandbox, so you have to enable it and recompile scipy. note that there is a patch for arpack.py (google for it). L. On 12/6/07, Matthieu Brucher wrote: > > Hi, > > Is there a package for scipy (or a function in scipy directly) that can > compute the eigen values and the eigen vectors of a sparse matrix, like the > eigs function in Malab ? > > Matthieu > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Thu Dec 6 11:58:43 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 6 Dec 2007 11:58:43 -0500 Subject: [SciPy-user] Efficient "windowing" In-Reply-To: <20071205161809.18420@gmx.net> References: <20071205161809.18420@gmx.net> Message-ID: <4061C3AE-F477-4DA2-9D57-C9089270A0F2@stanford.edu> Hello, You'll want to look into scipy.ndimage. Definition: scipy.ndimage.maximum_filter(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0) Docstring: Calculates a multi-dimensional maximum filter. Either a size or a footprint with the filter must be provided. An output array can optionally be provided. The origin parameter controls the placement of the filter. The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. 
It handles several different boundary conditions, and you can give it a "footprint" mask to just select 4- or 8-connected neighbors for each pixel. There's also a generic_filter that will call whatever function you want for each pixel+neighborhood. In the context of maximum filtering, if you want to just ignore the boundary condition, just use mode="constant", cval=-numpy.Infinity so that the filter ignores pixels outside of the image. Zach On Dec 5, 2007, at 11:18 AM, Jose Luis Gomez Dans wrote: > Hi, > I am trying to apply a simple algorithm to a 2D matrix (an image). > What I want to do is, for each pixel, choose the highest (... > lowest) value in its 8- or 4-connected neighbours. I have done this > using weave.inline, and using a couple of loops, but I was curious > if there's some way of doing this using numpy slice syntax? My > (allegedly, unelegant) attempts have been versions of the following: > > b[1:-1,1:-1] = scipy.array([a[0:-2,1:-1] , a[2:,1:-1] , a > [1:-1,0:-2] ,\ > a[1:-1,2:],a[0:-2,0:-2], a[0:-2,2:], a[2:,0:-2], a > [2:,2:]],'f').max() > > They don't work, because the max() call at the end refers to the > whole array, so you are given a constant value array, equal to the > max. value of a. Using for loops is very slow when dealing with > large arrays. > > Thanks! > Jose > -- > Ist Ihr Browser Vista-kompatibel? Jetzt die neuesten > Browser-Versionen downloaden: http://www.gmx.net/de/go/browser > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Thu Dec 6 12:01:17 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 18:01:17 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> Message-ID: Thank you for this. 
I hope it will be put in the trunk at some point (or in a scikit) as it would be needed for a manifold learning scikit. Matthieu 2007/12/6, lorenzo bolla : > > sure: scipy.sandbox.arpack > it's in the sandbox, so you have to enable it and recompile scipy. > note that there is a patch for arpack.py (google for it). > L. > > > On 12/6/07, Matthieu Brucher wrote: > > > Hi, > > > > Is there a package for scipy (or a function in scipy directly) that can > > compute the eigen values and the eigen vectors of a sparse matrix, like the > > eigs function in Malab ? > > > > Matthieu > > -- > > French PhD student > > Website : http://miles.developpez.com/ > > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lorenzo.isella at gmail.com Thu Dec 6 12:49:09 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 6 Dec 2007 18:49:09 +0100 Subject: [SciPy-user] Question about Eggs and easy_install Message-ID: Hello, Here is the output of those commands: In [1]: import numpy In [2]: print numpy.__file__ /usr/lib/python2.4/site-packages/numpy-1.0.4-py2.4-linux-i686.egg/numpy/__init__.pyc In [3]: print numpy.__version__ 1.0.4 In [4]: import scipy /usr/lib/python2.4/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code test = ScipyTest().test In [5]: print scipy.__file__ /usr/lib/python2.4/site-packages/scipy/__init__.pyc In [6]: print scipy.__version__ 0.5.2 and yes, I am using Debian testing. Kind Regards Lorenzo > > Message: 5 > Date: Thu, 6 Dec 2007 15:32:09 +0100 > From: Gael Varoquaux > Subject: Re: [SciPy-user] Question about Eggs and easy_install > To: SciPy Users List > Message-ID: <20071206143209.GG2075 at clipper.ens.fr> > Content-Type: text/plain; charset=iso-8859-1 > > On Thu, Dec 06, 2007 at 03:20:21PM +0100, Lorenzo Isella wrote: > > I think that Numpy got updated to a version not yet available in the > > standard Debian testing repositories. > > can you tell us the result of > > import numpy > print numpy.__file__ > print numpy.__version__ > import scipy > print scipy.__file__ > print scipy.__version__ > > So that we have a better idea of what happened. > > You are under Debian testing, right? > > Cheers, > > Gaël From lbolla at gmail.com Thu Dec 6 12:55:04 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 6 Dec 2007 18:55:04 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> Message-ID: <80c99e790712060955i21fa05f5w1e30957ca4ec6193@mail.gmail.com> I agree... On Dec 6, 2007 6:01 PM, Matthieu Brucher wrote: > Thank you for this. 
I hope it will be put in the trunk at some point (or > in a scikit) as it would be needed for a manifold learning scikit. > > Matthieu > > 2007/12/6, lorenzo bolla < lbolla at gmail.com>: > > > > sure: scipy.sandbox.arpack > > it's in the sandbox, so you have to enable it and recompile scipy. > > note that there is a patch for arpack.py (google for it). > > L. > > > > > > On 12/6/07, Matthieu Brucher < matthieu.brucher at gmail.com> wrote: > > > > > Hi, > > > > > > Is there a package for scipy (or a function in scipy directly) that > > > can compute the eigen values and the eigen vectors of a sparse matrix, like > > > the eigs function in Malab ? > > > > > > Matthieu > > > -- > > > French PhD student > > > Website : http://miles.developpez.com/ > > > Blogs : http://matt.eifelle.com and > > > http://blog.developpez.com/?blog=92 > > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skraelings001 at gmail.com Thu Dec 6 13:04:55 2007 From: skraelings001 at gmail.com (Reynaldo Baquerizo) Date: Thu, 06 Dec 2007 13:04:55 -0500 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: References: Message-ID: <475839C7.701@gmail.com> > I think that Numpy got updated to a version not yet available in the > standard Debian testing repositories. > My question is: as soon as newer versions of Numpy and SciPy make it > into Debian testing, will I be able to install them automatically with > the usual apt-get upgrade? > In other words: suppose, hypothetically, that tomorrow SciPy 1.0 and Numpy > 1.7 are released simultaneously as Debian testing packages; will my > system upgrade to them automatically? > In general, any package that you build yourself is not updated, because 'apt' isn't aware of its presence. So, if you want to update to a recent version you have to do it manually. Be aware that if you are keeping two different versions of scipy, numpy, etc., one maintained by 'apt' and the other by you, then only the one maintained by 'apt' will be updated. > I am simply wondering if I am now "condemned" to use this > non-standard (for me) procedure for updating SciPy & co. > Of course, I am not blaming the list for the advice I got, just I > realize I may not be experienced enough to delve into this at this > stage. > Many thanks > > Lorenzo > > > From prabhu at aero.iitb.ac.in Thu Dec 6 13:14:48 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 06 Dec 2007 23:44:48 +0530 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: References: Message-ID: <47583C18.7050100@aero.iitb.ac.in> Lorenzo Isella wrote: > In [1]: import numpy > > In [2]: print numpy.__file__ > /usr/lib/python2.4/site-packages/numpy-1.0.4-py2.4-linux-i686.egg/numpy/__init__.pyc It looks like your numpy has been upgraded. 
One way to undo this is to do: easy_install -m numpy and then: rm -rf /usr/lib/python2.4/site-packages/numpy-1.0.4-py2.4-linux-i686.egg This should hopefully revert back to your debian version of numpy. If it doesn't do an apt-get install --reinstall python-numpy to reinstall the debian version. HTH. prabhu From prabhu at aero.iitb.ac.in Thu Dec 6 13:17:36 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 06 Dec 2007 23:47:36 +0530 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: References: Message-ID: <47583CC0.9040405@aero.iitb.ac.in> Lorenzo Isella wrote: > I think that Numpy got updated to a version not yet available in the > standard Debian testing repositories. > My question is: as soon as newer versions of Numpy and SciPy make it > into Debian testing, will I be able to install them automatically with > the usual apt-get upgrade? If you remove the numpy-1.0.4 egg, then yes you shouldn't have any trouble. > In other words: suppose, by absurd, that tomorrow SciPy 1.0 and Numpy > 1.7 are released simultaneously as Debian testing packages; will my > system upgrade to them automatically? They should. > I am simply wondering if I am now "condemned" to use this > non-standard (for me) procedure for updating SciPy & co. No. You will have trouble when mayavi2 gets into debian though. When that happens you may just want to blow away all the enthought.* eggs that you have already installed. Getting the mayavi2 package into testing may take a little while but a few kind folks are working on the packaging currently. 
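A small diagnostic sketch, independent of the cleanup commands above, for confirming which numpy is active afterwards (the `.egg` path component is taken from Lorenzo's earlier output; a distro-managed install lives directly under site-packages):

```python
import numpy

# The module path reveals the install source: an easy_install egg shows a
# ".egg" directory component in the path, a distro package does not.
print(numpy.__version__)
print(numpy.__file__)
if ".egg" in numpy.__file__:
    print("active numpy comes from an egg")
else:
    print("active numpy is a non-egg install")
```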
cheers, prabhu From matthieu.brucher at gmail.com Thu Dec 6 14:23:29 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 20:23:29 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: <80c99e790712060955i21fa05f5w1e30957ca4ec6193@mail.gmail.com> References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> <80c99e790712060955i21fa05f5w1e30957ca4ec6193@mail.gmail.com> Message-ID: Well, if I need it (I can use a dense one for the moment) and if people agree, I'll make it a scikit ;) Matthieu 2007/12/6, lorenzo bolla : > > I agree... > > On Dec 6, 2007 6:01 PM, Matthieu Brucher > wrote: > > > Thank you for this. I hope it will be put in the trunk at some point (or > > in a scikit) as it would be needed for a manifold learning scikit. > > > > Matthieu > > > > 2007/12/6, lorenzo bolla < lbolla at gmail.com>: > > > > > > sure: scipy.sandbox.arpack > > > it's in the sandbox, so you have to enable it and recompile scipy. > > > note that there is a patch for arpack.py (google for it). > > > L. > > > > > > > > > On 12/6/07, Matthieu Brucher < matthieu.brucher at gmail.com> wrote: > > > > > > > Hi, > > > > > > > > Is there a package for scipy (or a function in scipy directly) that > > > > can compute the eigen values and the eigen vectors of a sparse matrix, like > > > > the eigs function in Malab ? 
> > > > > > > > Matthieu > > > > -- > > > > French PhD student > > > > Website : http://miles.developpez.com/ > > > > Blogs : http://matt.eifelle.com and > > > > http://blog.developpez.com/?blog=92 > > > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > -- > > French PhD student > > Website : http://miles.developpez.com/ > > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Alexander.Dietz at astro.cf.ac.uk Thu Dec 6 15:56:50 2007 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Thu, 6 Dec 2007 20:56:50 +0000 Subject: [SciPy-user] Problems installing scipy 0.6 on FC5 In-Reply-To: <474BE783.2020707@ar.media.kyoto-u.ac.jp> References: <9cf809a00711260905o3c0d14behed14925157c0be9b@mail.gmail.com> <474B8CC7.7010108@ar.media.kyoto-u.ac.jp> <9cf809a00711270059w117804faj7a03710d8e5fc208@mail.gmail.com> <474BDE2A.5000404@ar.media.kyoto-u.ac.jp> <9cf809a00711270139h3e5ee58bk62f8af9f6322f08c@mail.gmail.com> <474BE6B3.9030709@ar.media.kyoto-u.ac.jp> <474BE783.2020707@ar.media.kyoto-u.ac.jp> Message-ID: <9cf809a00712061256x5a99159fwe5f933b16af78e7f@mail.gmail.com> Hi, I finally found a work-around for my problems installing scipy for my Linux distribution: I needed just to comment the line in /numpy/distutils/fcompiler/gnu.py in which the term "msse2" is set as an compiler option. Then scipy installs without problems... Cheers Alex On Nov 27, 2007 9:46 AM, David Cournapeau wrote: > David Cournapeau wrote: > > Alexander Dietz wrote: > > > >> > >> > >> Also, you could try to see which > >> flag make it fails, for example, does the following: > >> > >> /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 > -funroll-loops > >> -march=i686 -mmmx -msse -fomit-frame-pointer -malign-double -c > >> scipy/fftpack/dfftpack/zfftb1.f > >> > >> > >> That worked actually... > >> > > Yes, so the problem is your gcc version. Maybe we could disable adding > > -msse2 altogether for gcc < 3.3, instead of enabling it for gcc > 3.2.2 > > (look at numpy/distutils/fcompiler/gnu.py, and look for -msse2; if Pearu > > or David are reading this, maybe they know a better solution ?). > > > > Note that FC5 is old. 
You should consider upgrading (FC 5 is not > > maintained anymore) if you can, > I forgot: another possibility is to use gfortran instead of g77, but you > will have to recompile everything (you cannot really mix g77 and > gfortran code together), including BLAS/LAPACK. For FC6, I remember that > gfortran is the default ABI, so this is anyway the best choice, but I am > not sure for FC5. > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dueck at math.ucdavis.edu Thu Dec 6 23:04:45 2007 From: dueck at math.ucdavis.edu (Pierre Dueck) Date: Thu, 6 Dec 2007 20:04:45 -0800 Subject: [SciPy-user] I'm lost on installation Message-ID: I'm trying to install Numpy and Scipy. I'm lost on how to finish this - I would appreciate help very much! I found a problem when running the numpy test: >>> import numpy Running from numpy source directory. >>> numpy.test(1,10) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'test' >>> Thanks! -Pierre Here is the requested info: Your OS version OS X 10.5 Leopard gcc --version i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465) gfortran --version GNU Fortran (GCC) 4.2.1 The versions of numpy and scipy that you are trying to install Numpy 1.0.4 Scipy 0.6.0 The full output of $ python setup.py build: $ python setup.py build Running from numpy source directory. 
non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_4422 blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands -- compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands -- fcompiler options running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src.macosx-10.3-fat-2.5/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.macosx-10.3-fat-2.5/numpy/core/ __multiarray_api.h' to sources. adding 'build/src.macosx-10.3-fat-2.5/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files = ['build/src.macosx-10.3- fat-2.5/numpy/core/src/scalartypes.inc', 'build/src.macosx-10.3- fat-2.5/numpy/core/src/arraytypes.inc', 'build/src.macosx-10.3-fat-2.5/ numpy/core/config.h', 'build/src.macosx-10.3-fat-2.5/numpy/core/ __multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src.macosx-10.3-fat-2.5/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.macosx-10.3-fat-2.5/numpy/core/__ufunc_api.h' to sources. adding 'build/src.macosx-10.3-fat-2.5/numpy/core/src' to include_dirs. 
numpy.core - nothing done with h_files = ['build/src.macosx-10.3- fat-2.5/numpy/core/src/scalartypes.inc', 'build/src.macosx-10.3- fat-2.5/numpy/core/src/arraytypes.inc', 'build/src.macosx-10.3-fat-2.5/ numpy/core/config.h', 'build/src.macosx-10.3-fat-2.5/numpy/core/ __ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src.macosx-10.3-fat-2.5/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.macosx-10.3-fat-2.5/numpy/core/ __multiarray_api.h' to sources. numpy.core - nothing done with h_files = ['build/src.macosx-10.3- fat-2.5/numpy/core/config.h', 'build/src.macosx-10.3-fat-2.5/numpy/ core/__multiarray_api.h'] building extension "numpy.core.scalarmath" sources adding 'build/src.macosx-10.3-fat-2.5/numpy/core/config.h' to sources. executing numpy/core/code_generators/generate_array_api.py adding 'build/src.macosx-10.3-fat-2.5/numpy/core/ __multiarray_api.h' to sources. executing numpy/core/code_generators/generate_ufunc_api.py adding 'build/src.macosx-10.3-fat-2.5/numpy/core/__ufunc_api.h' to sources. numpy.core - nothing done with h_files = ['build/src.macosx-10.3- fat-2.5/numpy/core/config.h', 'build/src.macosx-10.3-fat-2.5/numpy/ core/__multiarray_api.h', 'build/src.macosx-10.3-fat-2.5/numpy/core/ __ufunc_api.h'] building extension "numpy.core._dotblas" sources adding 'numpy/core/blasdot/_dotblas.c' to sources. building extension "numpy.lib._compiled_base" sources building extension "numpy.numarray._capi" sources building extension "numpy.fft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources adding 'numpy/linalg/lapack_litemodule.c' to sources. 
building extension "numpy.random.mtrand" sources customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using config C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp - mno-fused-madd -fPIC -fno-common -dynamic -DNDEBUG -g -O3 -Wall - Wstrict-prototypes compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' gcc: _configtest.c gcc _configtest.o -o _configtest _configtest failure. removing: _configtest.c _configtest.o _configtest building data_files sources running build_py copying build/src.macosx-10.3-fat-2.5/numpy/__config__.py -> build/ lib.macosx-10.3-fat-2.5/numpy copying build/src.macosx-10.3-fat-2.5/numpy/distutils/__config__.py -> build/lib.macosx-10.3-fat-2.5/numpy/distutils running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext running build_scripts adding 'build/scripts.macosx-10.3-fat-2.5/f2py' to scripts -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Dec 6 22:58:13 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Dec 2007 12:58:13 +0900 Subject: [SciPy-user] I'm lost on installation In-Reply-To: References: Message-ID: <4758C4D5.9010104@ar.media.kyoto-u.ac.jp> Pierre Dueck wrote: > I'm trying to install Numpy and Scipy. I'm lost on how to finish this > - I would appreciate help very much! 
I found a problem when running > the numpy test: > > >>> import numpy > Running from numpy source directory. > >>> numpy.test(1,10) > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'test' > >>> You should not launch python while you are in the source tree. The message "running from numpy source directory" may not be really clear, but this means numpy has no chance of working for normal usage. Same for scipy: once you've finished installing scipy, you should not try to launch python in the source directory. cheers, David From robert.kern at gmail.com Thu Dec 6 23:09:56 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 06 Dec 2007 22:09:56 -0600 Subject: [SciPy-user] I'm lost on installation In-Reply-To: References: Message-ID: <4758C794.2010405@gmail.com> Pierre Dueck wrote: > I'm trying to install Numpy and Scipy. I'm lost on how to finish this - > I would appreciate help very much! I found a problem when running the > numpy test: > >>>> import numpy > Running from numpy source directory. Change to another directory before trying to import numpy. Otherwise, Python will pick up the unbuilt sources instead of the installed package since Python searches the current directory before the installation location. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dueck at math.ucdavis.edu Thu Dec 6 23:14:49 2007 From: dueck at math.ucdavis.edu (Pierre Dueck) Date: Thu, 6 Dec 2007 20:14:49 -0800 Subject: [SciPy-user] I'm lost on installation In-Reply-To: <4758C4D5.9010104@ar.media.kyoto-u.ac.jp> References: <4758C4D5.9010104@ar.media.kyoto-u.ac.jp> Message-ID: <360373E8-B8C1-47A5-B4F7-DBD17DA5C361@math.ucdavis.edu> Thanks! That was fast! 
-Pierre On Dec 6, 2007, at 7:58 PM, David Cournapeau wrote: > Pierre Dueck wrote: >> I'm trying to install Numpy and Scipy. I'm lost on how to finish >> this >> - I would appreciate help very much! I found a problem when running >> the numpy test: >> >>>>> import numpy >> Running from numpy source directory. >>>>> numpy.test(1,10) >> Traceback (most recent call last): >> File "", line 1, in >> AttributeError: 'module' object has no attribute 'test' >>>>> > You should not launch python while you are in the source tree. The > message "running from numpy source directory" may not be really clear, > but this means numpy has no chance of working for normal usage. Same > for > scipy: once you've finished installing scipy, you should not try to > launch python in the source directory. > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From david at ar.media.kyoto-u.ac.jp Thu Dec 6 23:06:54 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Dec 2007 13:06:54 +0900 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: <47583C18.7050100@aero.iitb.ac.in> References: <47583C18.7050100@aero.iitb.ac.in> Message-ID: <4758C6DE.3060205@ar.media.kyoto-u.ac.jp> Prabhu Ramachandran wrote: > Lorenzo Isella wrote: > >> In [1]: import numpy >> >> In [2]: print numpy.__file__ >> /usr/lib/python2.4/site-packages/numpy-1.0.4-py2.4-linux-i686.egg/numpy/__init__.pyc >> > > It looks like your numpy has been upgraded. One way to undo this is to do: > > easy_install -m numpy > > and then: > > rm -rf /usr/lib/python2.4/site-packages/numpy-1.0.4-py2.4-linux-i686.egg > > > This should hopefully revert back to your debian version of numpy. If > it doesn't do an apt-get install --reinstall python-numpy to reinstall > the debian version. > > The whole problem is coming from the fact that Lorenzo installed things in /usr. 
To keep things simple: never ever do that. For example, if you install numpy from debian, and then overwrite it with your own numpy, it is possible that you won't be able to remove the package (the scripts that remove the package may rely on some behaviour which differs between the packaged numpy and your own). dpkg won't overwrite, though, because by default it fails if a package tries to overwrite any existing file, but that does not make it a good idea. The only reliable way to have both a dpkg-aware numpy and your own is to install your own in a different place. Doing otherwise is really likely to give you headaches at some point. cheers, David
From lev at vpac.org Fri Dec 7 00:10:36 2007 From: lev at vpac.org (Lev Lafayette) Date: Fri, 07 Dec 2007 16:10:36 +1100 Subject: [SciPy-user] I'm lost on installation In-Reply-To: References: Message-ID: <1197004236.4934.72.camel@sys09.in.vpac.org> On Thu, 2007-12-06 at 20:04 -0800, Pierre Dueck wrote: > I'm trying to install Numpy and Scipy. I'm lost on how to finish this > - I would appreciate help very much! I found a problem when running > the numpy test: > > > >>> import numpy > Running from numpy source directory. There's the answer. Go up a directory and run the test again. All the best, Lev >
From prabhu at aero.iitb.ac.in Fri Dec 7 00:35:06 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 07 Dec 2007 11:05:06 +0530 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: <4758C6DE.3060205@ar.media.kyoto-u.ac.jp> References: <47583C18.7050100@aero.iitb.ac.in> <4758C6DE.3060205@ar.media.kyoto-u.ac.jp> Message-ID: <4758DB8A.9000002@aero.iitb.ac.in> David Cournapeau wrote: > The whole problem is coming from the fact that Lorenzo installed things > in /usr. To keep things simple: never ever do that. For example, if you On Debian/debian-derivatives the recommended way is to simply do easy_install --prefix=/usr/local [other args] and this will work.
On other distros this may not work. FC6 is one such beast where this does not work. If you have a ton of stuff in /usr/local and have a hard time managing those, I'd recommend using GNU stow. cheers, prabhu From david at ar.media.kyoto-u.ac.jp Fri Dec 7 01:07:40 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Dec 2007 15:07:40 +0900 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: <4758DB8A.9000002@aero.iitb.ac.in> References: <47583C18.7050100@aero.iitb.ac.in> <4758C6DE.3060205@ar.media.kyoto-u.ac.jp> <4758DB8A.9000002@aero.iitb.ac.in> Message-ID: <4758E32C.3010702@ar.media.kyoto-u.ac.jp> Prabhu Ramachandran wrote: > David Cournapeau wrote: > >> The whole problem is coming from the fact that Lorenzo installed things >> in /usr. To keep things simple: never ever do that. For example, if you >> > > On Debian/debian-derivatives the recommended way is to simply do > > easy_install --prefix=/usr/local [other args] > /usr/local is indeed supposed to be the default path for such things (that's the default path used by GNU build system, for example), and the one where to install things not under the control of your distribution (there is /opt, too; but I don't think it matters much which one you use). David From fperez.net at gmail.com Fri Dec 7 02:21:48 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Dec 2007 00:21:48 -0700 Subject: [SciPy-user] Scipy.org having problems? Message-ID: Hi all, the main page is working, but access to subpages (such as the cookbook) seems to be stalling out, and eventually: OK The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Any ideas? Thanks! 
f From robert.kern at gmail.com Fri Dec 7 02:24:53 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Dec 2007 01:24:53 -0600 Subject: [SciPy-user] Scipy.org having problems? In-Reply-To: References: Message-ID: <4758F545.9020503@gmail.com> Fernando Perez wrote: > Hi all, > > the main page is working, but access to subpages (such as the > cookbook) seems to be stalling out, and eventually: > > OK > > The server encountered an internal error or misconfiguration and was > unable to complete your request. > > Please contact the server administrator, root at localhost and inform > them of the time the error occurred, and anything you might have done > that may have caused the error. > > More information about this error may be available in the server error log. > > > Any ideas? It's working fine for me, but I've forwarded your message on to Ryan. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Fri Dec 7 02:38:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Dec 2007 00:38:34 -0700 Subject: [SciPy-user] Scipy.org having problems? In-Reply-To: <4758F545.9020503@gmail.com> References: <4758F545.9020503@gmail.com> Message-ID: On Dec 7, 2007 12:24 AM, Robert Kern wrote: > It's working fine for me, but I've forwarded your message on to Ryan. Thanks! The load average was hovering around 7 a moment ago, with a bunch of trac.fcgi processes, in case it helps. 
M TIME+ COMMAND 31517 scipy 15 0 50312 19m 4232 S 42.1 1.0 2:08.46 trac.fcgi 23756 scipy 15 0 88872 19m 4232 S 23.2 1.0 18:17.42 trac.fcgi 1800 ipython 15 0 42316 15m 3952 S 18.9 0.8 0:24.71 trac.fcgi 30365 scipy 15 0 42340 14m 4292 S 7.6 0.7 0:29.47 trac.fcgi 27390 astropy 16 0 62168 22m 3960 S 6.0 1.1 32:02.84 trac.fcgi 24035 scipy 15 0 41784 12m 4144 S 4.6 0.6 0:45.02 trac.fcgi Cheers, f From matthieu.brucher at gmail.com Fri Dec 7 03:26:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 7 Dec 2007 09:26:23 +0100 Subject: [SciPy-user] Scipy.org having problems? In-Reply-To: References: <4758F545.9020503@gmail.com> Message-ID: One hour ago, the main site did not respond for me. Matthieu 2007/12/7, Fernando Perez : > > On Dec 7, 2007 12:24 AM, Robert Kern wrote: > > > It's working fine for me, but I've forwarded your message on to Ryan. > > Thanks! > > The load average was hovering around 7 a moment ago, with a bunch of > trac.fcgi processes, in case it helps. > > M TIME+ COMMAND > 31517 scipy 15 0 50312 19m 4232 S 42.1 1.0 2:08.46 trac.fcgi > 23756 scipy 15 0 88872 19m 4232 S 23.2 1.0 18:17.42 trac.fcgi > 1800 ipython 15 0 42316 15m 3952 S 18.9 0.8 0:24.71 trac.fcgi > 30365 scipy 15 0 42340 14m 4292 S 7.6 0.7 0:29.47 trac.fcgi > 27390 astropy 16 0 62168 22m 3960 S 6.0 1.1 32:02.84 trac.fcgi > 24035 scipy 15 0 41784 12m 4144 S 4.6 0.6 0:45.02 trac.fcgi > > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From shuntim.luk at polyu.edu.hk Fri Dec 7 03:41:33 2007 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Fri, 07 Dec 2007 16:41:33 +0800 Subject: [SciPy-user] Question about Eggs and easy_install In-Reply-To: <4758E32C.3010702@ar.media.kyoto-u.ac.jp> References: <47583C18.7050100@aero.iitb.ac.in> <4758C6DE.3060205@ar.media.kyoto-u.ac.jp> <4758DB8A.9000002@aero.iitb.ac.in> <4758E32C.3010702@ar.media.kyoto-u.ac.jp> Message-ID: <4759073D.4080206@polyu.edu.hk> David Cournapeau wrote: > Prabhu Ramachandran wrote: >> David Cournapeau wrote: >> >>> The whole problem is coming from the fact that Lorenzo installed things >>> in /usr. To keep things simple: never ever do that. For example, if you >>> >> On Debian/debian-derivatives the recommended way is to simply do >> >> easy_install --prefix=/usr/local [other args] >> > /usr/local is indeed supposed to be the default path for such things > (that's the default path used by GNU build system, for example), and the > one where to install things not under the control of your distribution > (there is /opt, too; but I don't think it matters much which one you use). > > David Hello, A quick and dirty way to make a (non-policy-conforming) debian package is checkinstall. It's apt-getable. sid http://packages.debian.org/sid/checkinstall Homepage http://asic-linux.com.mx/~izto/checkinstall/ I've no experience using it with easy_install, but it works well with the GNU way. Just issue "checkinstall make install" instead of "make install" as root and the package will be installed/managed by dpkg. Regards, ST --
From jre at enthought.com Fri Dec 7 17:57:06 2007 From: jre at enthought.com (J. Ryan Earl) Date: Fri, 07 Dec 2007 16:57:06 -0600 Subject: [SciPy-user] Scipy.org having problems? In-Reply-To: References: <4758F545.9020503@gmail.com> Message-ID: <4759CFC2.9010603@enthought.com> Hello, We are currently in the process of migrating SciPy.org onto better hardware.
The current hardware is overloaded, but the new hardware will have around six times the capacity. Cheers, -ryan Fernando Perez wrote: > On Dec 7, 2007 12:24 AM, Robert Kern wrote: > > >> It's working fine for me, but I've forwarded your message on to Ryan. >> > > Thanks! > > The load average was hovering around 7 a moment ago, with a bunch of > trac.fcgi processes, in case it helps. > > M TIME+ COMMAND > 31517 scipy 15 0 50312 19m 4232 S 42.1 1.0 2:08.46 trac.fcgi > 23756 scipy 15 0 88872 19m 4232 S 23.2 1.0 18:17.42 trac.fcgi > 1800 ipython 15 0 42316 15m 3952 S 18.9 0.8 0:24.71 trac.fcgi > 30365 scipy 15 0 42340 14m 4292 S 7.6 0.7 0:29.47 trac.fcgi > 27390 astropy 16 0 62168 22m 3960 S 6.0 1.1 32:02.84 trac.fcgi > 24035 scipy 15 0 41784 12m 4144 S 4.6 0.6 0:45.02 trac.fcgi > > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > >
From stef.mientki at gmail.com Sat Dec 8 04:54:20 2007 From: stef.mientki at gmail.com (stef mientki) Date: Sat, 08 Dec 2007 10:54:20 +0100 Subject: [SciPy-user] Scipy + wxPython ... Message-ID: <475A69CC.8000307@gmail.com> hello, A month ago or so we discussed the possibility of using Vision to make Scipy look more like some of the commercial scientific packages. I looked into whether I could use Vision, but as some of you already said, "Vision is not very pretty", and I have to admit I didn't get it working within a limited amount of time. The other packages, Orange and Elefant, were also too difficult for me to get working in a short time. I therefore just started to fork another application I'm working on, which just uses wxPython, to see what I could accomplish. Although it's just a "proof of concept", made from a quick and dirty gluing of some (mostly already existing) components, a large part of the program is already working quite well.
If you're interested, look here: http://stef.mientki.googlepages.com/pylab_works_1.html (10MB, 8 minutes) Any comments, suggestions, or domain expertise are very welcome. Hearing that Enthought is working on something similar, I'm still very curious to see their concept. cheers, Stef
From matthieu.brucher at gmail.com Sat Dec 8 05:18:30 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 8 Dec 2007 11:18:30 +0100 Subject: [SciPy-user] Scipy + wxPython ... In-Reply-To: <475A69CC.8000307@gmail.com> References: <475A69CC.8000307@gmail.com> Message-ID: 2007/12/8, stef mientki : > > hello, > > A month ago or so we discussed the possibilities to use Vision as a > possibility > to enhance Scipy to make it more look like some of the commercial > scientific packages. > > I looked if I could use Vision, > but as some of you already said: "Vision is not very pretty" , > and I have to admit I didn't get it working within a limited amount of > time. > Also the other packages, Orange and Elefant, > were too difficult for me to get them working in a short time. > > So therefor I just started to fork another application I'm working on, > which just uses wxPython, to see what I could accomplish. > Although it's just a "proof of concept" , > made from a quick and dirty glueing of some (most already existing) > components, > a large part of the program is already working quit well. > > If you're interested, look here: > http://stef.mientki.googlepages.com/pylab_works_1.html > (10MB, 8 minutes) > > Any comment, suggestions, domain expertise are very welcome. It's great, but the fact that you do real-time jobs as well splits applications in two: the ones that need it and the ones that must not have it. If I have to iterate over a folder and apply something to each element in the folder and then save it, I would use a for loop with a sub-workflow with the load, process and save boxes inside the for-loop box, not outside.
Now, I think that if an input is connected to something, the associated editor should not be displayed. But great job: I never built an application that works that well, even though I started a long time ago ;) Hearing that Enthought is working on something similar, > I'm still very curious to see their concept. Yes, their block_canvas will be a huge competitor with the support of the Traits library for modifying and/or displaying variables. From what I saw, it will beat Vision hands down (their workflow is a Python script directly, so adding new functions is generally straightforward). Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL:
From s.mientki at ru.nl Sat Dec 8 11:02:09 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 08 Dec 2007 17:02:09 +0100 Subject: [SciPy-user] Scipy + wxPython ... In-Reply-To: References: <475A69CC.8000307@gmail.com> Message-ID: <475AC001.1050704@ru.nl> Matthieu Brucher wrote: > > > 2007/12/8, stef mientki >: > > hello, > > A month ago or so we discussed the possibilities to use Vision as a > possibility > to enhance Scipy to make it more look like some of the commercial > scientific packages. > > I looked if I could use Vision, > but as some of you already said: "Vision is not very pretty" , > and I have to admit I didn't get it working within a limited > amount of time. > Also the other packages, Orange and Elefant, > were too difficult for me to get them working in a short time. > > So therefor I just started to fork another application I'm working on, > which just uses wxPython, to see what I could accomplish. > Although it's just a "proof of concept" , > made from a quick and dirty glueing of some (most already existing) > components, > a large part of the program is already working quit well.
> > If you're interested, look here: > http://stef.mientki.googlepages.com/pylab_works_1.html > > (10MB, 8 minutes) > > Any comment, suggestions, domain expertise are very welcome. > > > > It's great, but the fact that you do real time job as well splits > applications in two : the one that need it and teh one that must not > have it. If I have to iterate on a folder and apply something to each > element in the folder and then save it, I would use a for loop with a > sub worflow with the load, process and save boxes inside the for loop > box, not outside. Yes, I've no ready solution for that. Labview uses "time-critical" signals for that. I was thinking of grouping bricks together, so they form a new brick, in which closed for loops could exist. > > Now, I think that if an input is connected to something, the > associated editor should not be displayed. Good point, and it also makes the design more consistent. I now also see an error in the demo: I throw away all the rotation controls, while the interpolation should remain. > > But great job, I didn't create such an application that works that > well even if I started a long time ago ;) > > > Hearing that Enthought is working on something similar, > I'm still very curious to see their concept. > > > Yes, their block_canvas will be a huge competitor with the support of > the Traits library for modifying and/or displaying variables. For what > I saw, it will beat Vision hands down (their workflow is a Python > script directly, so adding new functions is generally straightforward). I'd love to see some sneak previews ;-) cheers, Stef
From webb.sprague at gmail.com Sat Dec 8 19:19:56 2007 From: webb.sprague at gmail.com (Webb Sprague) Date: Sat, 8 Dec 2007 16:19:56 -0800 Subject: [SciPy-user] Estimating AR(1) model? Message-ID: Hi all, Is there a quick approach for estimating a simple AR(1) time series model from a vector of data points?
Currently I use linregress on lagged data and roll my own standard error computations. It would be great if someone smarter than I has done this already, especially if they use MLE or Yule-Walker. I am using Gentoo, scipy.__version__ == '0.6.0', and I can't find a scipy/stats/models directory in my installation, if that matters. Thx W
From david at ar.media.kyoto-u.ac.jp Sat Dec 8 22:23:03 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 09 Dec 2007 12:23:03 +0900 Subject: [SciPy-user] Estimating AR(1) model? In-Reply-To: References: Message-ID: <475B5F97.1070506@ar.media.kyoto-u.ac.jp> Webb Sprague wrote: > Hi all, > > Is there a quick approach for estimating a simple AR(1) time series > model from a vector of data points? Currently I use linregress on > lagged data and roll my own standard error computations. It would be > great if someone smarter than I has done this already, especially if > they use MLE or Yule-Walker. > > Using Yule-Walker (and something like Levinson-Durbin to solve it) for AR(1) is kind of overkill. There is a function similar to lpc in my sandbox (cdavid: from scipy.sandbox.cdavid import lpc). It gives you the AR coefficients, the error, and the k coefficients (it follows Matlab's lpc function). cheers, David
From odonnems at yahoo.com Sat Dec 8 23:00:52 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Sat, 8 Dec 2007 20:00:52 -0800 (PST) Subject: [SciPy-user] Can someone shed light on this simple weave inline test Message-ID: <716781.78635.qm@web58006.mail.re3.yahoo.com> I am trying to do some testing and I cannot get the following test to work. The code compiles and the values returned are: >>> creating c:\temp\Michael\python25_intermediate\compiler_12e837eb1ea3ab5199fbcc0e83015e3f 0 0 0 0 0 0 0 0 0 0 I am able to return an array using a similar method, but this is not working for some reason. Is there any documentation that may help me with passing multiple variables and arrays?
Thanks, Michael winx = 10 winy = 10 kernel = numpy.random.randint(1,10, (10,10)) #Values between 0 and 10 for an array with 10 x 10 shape #Now make it floating #kernel * 1.0 nullValue = -10000 ND=NNW=WNW=WSW=SSW=SSE=ESE=ENE=NNE=Flat=0 code = r""" #include "Python.h" //int ND, NNW, WNW, WSW, SSW, SSE, ESE, ENE, NNE, Flat; //Cycle through kernel and calculate the frequency of each compass direction //Change row in kernel for (int k=0; k From steveire at gmail.com Sun Dec 9 19:41:31 2007 From: steveire at gmail.com (Stephen Kelly) Date: Mon, 10 Dec 2007 00:41:31 +0000 Subject: [SciPy-user] gmane.comp.kde.devel.core: Authorization required In-Reply-To: References: Message-ID: <200712100041.31966.steveire@gmail.com> On Monday 10 December 2007 00:34:41 Gmane Autoauthorizer wrote: > You have sent a message to be posted on the > gmane.comp.kde.devel.core newsgroup. > > > This is a non-public mailing list, which means that you have to > subscribe to the list to post to it. If you're already subscribed to > the list, Gmane can forward the message you sent to the list if you respond > to this message. If not, you should sign up to the mailing list first, > and then respond to this message, or just forget about it. > > Many mailing lists have an option to subscribe to a list, but then put > it in 'nomail' mode, which means that you won't receive any mail from > the list. > > The mailing list software used for the list in question is mailman. > > > First subscribe to the mailing list. The subscription address is > kde-core-devel-request at kde.org. Then send a message to > kde-core-devel-request at kde.org with the text > > set nomail on YOUR_PASSWORD_HERE > > or (depending on the version of Mailman) > > set authenticate YOUR_PASSWORD_HERE > set delivery off > > in the body of the message. > > You have to respond within one week. 
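Coming back to the AR(1) question a few messages up: for a first-order model the Yule-Walker equations collapse to a single ratio of autocovariances, so no special solver is needed. A minimal NumPy sketch of that estimate follows; it is an illustration only, not the scipy.sandbox `lpc` routine David mentions, and the function name `fit_ar1` is made up here:

```python
import numpy as np

def fit_ar1(x):
    """Yule-Walker estimate of phi and innovation variance for
    x[t] = phi * x[t-1] + e[t].

    For AR(1) the Yule-Walker system reduces to phi = r(1) / r(0),
    where r(k) is the lag-k sample autocovariance of the demeaned series.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r0 = np.dot(x, x) / n            # lag-0 autocovariance (variance)
    r1 = np.dot(x[1:], x[:-1]) / n   # lag-1 autocovariance
    phi = r1 / r0
    sigma2 = r0 * (1.0 - phi ** 2)   # innovation variance estimate
    return phi, sigma2

# Sanity check on a simulated AR(1) series with phi = 0.6
rng = np.random.RandomState(42)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.randn()
phi_hat, sigma2_hat = fit_ar1(x)
```

The asymptotic standard error of `phi_hat` is roughly `sqrt((1 - phi**2) / n)`, which is what the hand-rolled computation on the lagged regression approximates.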
From cimrman3 at ntc.zcu.cz Mon Dec 10 04:53:27 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Dec 2007 10:53:27 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> Message-ID: <475D0C97.2010108@ntc.zcu.cz> There is also scipy.sandbox.lobpcg (pure scipy code, depending temporarily on symeig, until symeig functionality is in scipy) It seems to run with recent scipy, but lobpcg/tests/benchmark.py gives two additional zero eigenvalues that are not in symeig results. Nils, have you tried it recently? This used to work... r. lorenzo bolla wrote: > sure: scipy.sandbox.arpack > it's in the sandbox, so you have to enable it and recompile scipy. > note that there is a patch for arpack.py (google for it). > L. > > > On 12/6/07, Matthieu Brucher wrote: >> Hi, >> >> Is there a package for scipy (or a function in scipy directly) that can >> compute the eigen values and the eigen vectors of a sparse matrix, like the >> eigs function in Malab ? From lbolla at gmail.com Mon Dec 10 08:37:16 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 10 Dec 2007 14:37:16 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> <80c99e790712060955i21fa05f5w1e30957ca4ec6193@mail.gmail.com> Message-ID: <80c99e790712100537r725f5046k293b5ee22ab76567@mail.gmail.com> that would be great! L. On 12/6/07, Matthieu Brucher wrote: > > Well, if I need it (I can use a dense one for the moment) and if people > agree, I'll make it a scikit ;) > > Matthieu > > 2007/12/6, lorenzo bolla < lbolla at gmail.com>: > > > > I agree... > > > > On Dec 6, 2007 6:01 PM, Matthieu Brucher > > wrote: > > > > > Thank you for this. I hope it will be put in the trunk at some point > > > (or in a scikit) as it would be needed for a manifold learning scikit. 
> > > > > > Matthieu > > > > > > 2007/12/6, lorenzo bolla < lbolla at gmail.com>: > > > > > > > > sure: scipy.sandbox.arpack > > > > it's in the sandbox, so you have to enable it and recompile scipy. > > > > note that there is a patch for arpack.py (google for it). > > > > L. > > > > > > > > > > > > On 12/6/07, Matthieu Brucher < matthieu.brucher at gmail.com > wrote: > > > > > > > > > Hi, > > > > > > > > > > Is there a package for scipy (or a function in scipy directly) > > > > > that can compute the eigen values and the eigen vectors of a sparse matrix, > > > > > like the eigs function in Malab ? > > > > > > > > > > Matthieu > > > > > -- > > > > > French PhD student > > > > > Website : http://miles.developpez.com/ > > > > > Blogs : http://matt.eifelle.com and > > > > > http://blog.developpez.com/?blog=92 > > > > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > > > _______________________________________________ > > > > > SciPy-user mailing list > > > > > SciPy-user at scipy.org > > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.org > > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > > > -- > > > French PhD student > > > Website : http://miles.developpez.com/ > > > Blogs : http://matt.eifelle.com and > > > http://blog.developpez.com/?blog=92 > > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : 
From nwagner at iam.uni-stuttgart.de Mon Dec 10 12:03:11 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Dec 2007 18:03:11 +0100 Subject: [SciPy-user] Sparse eigensolver In-Reply-To: <475D0C97.2010108@ntc.zcu.cz> References: <80c99e790712060852r3a879ef5vf814ab78514e94ce@mail.gmail.com> <475D0C97.2010108@ntc.zcu.cz> Message-ID: On Mon, 10 Dec 2007 10:53:27 +0100 Robert Cimrman wrote: > There is also scipy.sandbox.lobpcg (pure scipy code, depending > temporarily on symeig, until symeig functionality is in scipy) > > It seems to run with recent scipy, but lobpcg/tests/benchmark.py gives > two additional zero eigenvalues that are not in symeig results. Nils, > have you tried it recently? This used to work... python -i local/lib64/python2.3/site-packages/scipy/sandbox/lobpcg/tests/benchmark.py ****** 128 Results by LOBPCG 128 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] Results by symeig 128 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] ****** 256 Results by LOBPCG 256 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] Results by symeig 256 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] ****** 512 Results by LOBPCG 512 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] Results by symeig 512 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] ****** 1024 Results by LOBPCG 1024 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] Results by symeig 1024 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] ****** 2048 Results by LOBPCG 2048 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.] Results by symeig 2048 [ 1. 4. 9. 16. 25. 36. 49. 64. 81. 100.]
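A cross-check of the same kind can be done against a dense symmetric eigensolver on a small model problem. This sketch uses a 1-D discrete Laplacian with analytically known eigenvalues; it is not the operator lobpcg/tests/benchmark.py actually uses:

```python
import numpy as np

# 1-D discrete Laplacian (tridiagonal -1, 2, -1); its eigenvalues are
# known in closed form, so a dense solve can be verified exactly.
n = 128
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w = np.linalg.eigvalsh(A)  # eigenvalues in ascending order

# Analytic eigenvalues: 4 * sin^2(k * pi / (2 * (n + 1))), k = 1..n
k = np.arange(1, n + 1)
expected = 4 * np.sin(k * np.pi / (2 * (n + 1))) ** 2
```

`np.linalg.eigvalsh` returns eigenvalues in ascending order, matching the analytic formula ordered by k.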
Cheers, Nils From nwagner at iam.uni-stuttgart.de Mon Dec 10 13:43:32 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Dec 2007 19:43:32 +0100 Subject: [SciPy-user] Slicing arrays Message-ID: Hi all, I have an array >>> a array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]) How can I extract an array b from a such that >>> b array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, 25, 26, 30, 31, 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, 56, 60, 61, 62]) Nils From odonnems at yahoo.com Mon Dec 10 13:47:36 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Mon, 10 Dec 2007 10:47:36 -0800 (PST) Subject: [SciPy-user] using STL in scipy weave inline Message-ID: <124989.33872.qm@web58006.mail.re3.yahoo.com> OS Microsoft XP SP2 x86 python 2.5.1 compiler: mingw Does anyone have experience using multimap or the standard template library within weave inline? I need to create a map that contains numeric keys which are paired to a string, but I get an error when declaring the multimap function. If I include <map> I get other errors about namespace, and if I don't then I get an error for not declaring the multimap. Neither method works.
TIA, Michael Example of code: total = (winx * winy * 1.0) #Total number of cells to evaluate in kernel code = r""" double ND, NNW, WNW, WSW, SSW, SSE, ESE, ENE, NNE, Flat; ND = 22.0; NNW = 11.0; WNW = 10.0; WSW = 30.0; SSW = 8.0; SSE = 2.0; ESE = 1.0; ENE = 5.0; NNE = 5.0; Flat = 3.0; //Calculate the frequency of the counts fND = (ND / total); fNNW = (NNW / total); fWNW = (WNW / total); fWSW = (WSW / total); fSSW = (SSW / total); fSSE = (SSE / total); fESE = (ESE / total); fENE = (ENE / total); fNNE = (NNE / total); fFlat = (Flat / total); //Create a map for pairs: frequency, string multimap<double, string> map_freq; //Create empty map map_freq[fND) = 'fND'; map_freq[fNNW] = 'fNNW'; map_freq[fWNW] = 'fWNW'; map_freq[fWSW] = 'fWSW'; map_freq[fSSW] = 'fSSW'; map_freq[fSSE] = 'fSSE'; map_freq[fESE] = 'fESE'; map_freq[fENE] = 'fENE'; map_freq[fNNE] = 'fNNE'; map_freq[fFlat] = 'fFlat'; """ weave.inline(code,['total'], type_converters=converters.blitz, compiler='gcc')
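For what it is worth, the frequency-to-label pairing attempted above is short in pure Python. A sketch, where the counts are taken from the snippet and total = 20.0 (winx * winy) is an assumed window size:

```python
# Counts from the snippet above; total (= winx * winy) is an assumption.
counts = {"fND": 22.0, "fNNW": 11.0, "fWNW": 10.0, "fWSW": 30.0,
          "fSSW": 8.0, "fSSE": 2.0, "fESE": 1.0, "fENE": 5.0,
          "fNNE": 5.0, "fFlat": 3.0}
total = 20.0

# A C++ multimap admits duplicate keys (fENE and fNNE tie at 0.25 here);
# in Python a sorted list of (frequency, label) pairs does the same job
# without duplicate keys silently colliding as they would in a dict.
pairs = sorted((c / total, label) for label, c in counts.items())
```

Sorting the pairs orders the labels by frequency while keeping ties, which is the behaviour a multimap provides.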
URL: From robert.kern at gmail.com Mon Dec 10 14:58:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Dec 2007 13:58:25 -0600 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: <475D9A61.2040607@gmail.com> Nils Wagner wrote: > Hi all, > > I have an array > >>>> a > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, > 13, 14, 15, 16, > 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, > 29, 30, 31, 32, 33, > 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47, 48, 49, 50, > 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, > 63, 64, 65]) > > How can I extract an array b from a such that > >>>> b > array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, > 25, 26, 30, 31, > 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, > 56, 60, 61, 62]) In [1]: from numpy import * In [2]: a = arange(66) In [3]: b = reshape(a, (-1, 6))[:,:3].ravel() In [4]: b Out[4]: array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, 25, 26, 30, 31, 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, 56, 60, 61, 62]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From gael.varoquaux at normalesup.org Mon Dec 10 15:14:41 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 10 Dec 2007 21:14:41 +0100 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: <20071210201441.GA25289@clipper.ens.fr> On Mon, Dec 10, 2007 at 07:43:32PM +0100, Nils Wagner wrote: > I have an array > >>> a > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, > 13, 14, 15, 16, > 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, > 29, 30, 31, 32, 33, > 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47, 48, 49, 50, > 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, > 63, 64, 65]) > How can I extract an array b from a such that > >>> b > array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, > 25, 26, 30, 31, > 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, > 56, 60, 61, 62]) I don't understand what is the relationship between a and b. Gaël From Karl.Young at ucsf.edu Mon Dec 10 14:59:22 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Mon, 10 Dec 2007 11:59:22 -0800 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: <475D9A9A.2060708@ucsf.edu> If you don't care about elegance or efficiency how about: -------------------------------------------------------------------------------------------------- import scipy a = scipy.array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]) runs = 8 space = 4 b = scipy.array(a[1:runs+1]) for i in range(1,(len(a)/runs)-1): b = scipy.concatenate((b,a[b[-1]+space:b[-1]+runs+space+1]),axis=1) ------------------------------------------------------------------------------------------------- PS If you really meant to cut off the last three elements of a, I'll leave that step to you...
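For reference, the reshape/slice approach suggested earlier in the thread can be checked end-to-end with numpy alone:

```python
import numpy as np

# Take the first three elements of every group of six, via reshape + slice.
a = np.arange(66)
b = np.reshape(a, (-1, 6))[:, :3].ravel()
```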
>Hi all, > >I have an array [snip] > >How can I extract an array b from a such that [snip] > >Nils -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From vincefn at users.sourceforge.net Mon Dec 10 15:46:01 2007 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Mon, 10 Dec 2007 21:46:01 +0100 Subject: [SciPy-user] using STL in scipy weave inline In-Reply-To: <529E0C005F46104BA9DB3CB93F3979750117D4C8@TOKYO.intra.cea.fr> References: <529E0C005F46104BA9DB3CB93F3979750117D4C8@TOKYO.intra.cea.fr> Message-ID: <200712102146.01404.vincefn@users.sourceforge.net> On Monday 10 December 2007, Michael ODonnell wrote: > OS Microsoft XP SP2 x86 > python 2.5.1 > compiler: mingw > > Does anyone have experience using multimap or the standard template library > within weave inline? I need to create a map that contains numeric keys > which are paired to a string, but I get an error when declaring the > multimap function. If I include <map> I get other errors about namespace > and if I don't then I get an error for not declaring the multimap. Neither > method works.
I have not used the STL with weave, but have you tried either: (1) adding "using namespace std;" to the beginning of your code, or (2) using "std::multimap" (and the same for string: "std::string") for all the STL classes you use? -- Vincent Favre-Nicolin Université Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From tom.denniston at alum.dartmouth.org Mon Dec 10 15:49:12 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 10 Dec 2007 14:49:12 -0600 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: Something like this should work: In [8]: a1 = numpy.ones(100) In [9]: a1[::3]=3 In [11]: numpy.cumsum(a1)-2 array([ 1., 2., 3., 6., 7., 8., 11., 12., 13., 16., 17., 18., 21., 22., 23., 26., 27., 28., 31., 32., 33., 36., 37., 38., 41., 42., 43., 46., 47., 48., 51., 52., 53., 56., 57., 58., 61., 62., 63., 66., 67., 68., 71., 72., 73., 76., 77., 78., 81., 82., 83., 86., 87., 88., 91., 92., 93., 96., 97., 98., 101., 102., 103., 106., 107., 108., 111., 112., 113., 116., 117., 118., 121., 122., 123., 126., 127., 128., 131., 132., 133., 136., 137., 138., 141., 142., 143., 146., 147., 148., 151., 152., 153., 156., 157., 158., 161., 162., 163., 166.]) On 12/10/07, Nils Wagner wrote: > [snip] From as8ca at virginia.edu Mon
Dec 10 15:57:07 2007 From: as8ca at virginia.edu (Alok Singhal) Date: Mon, 10 Dec 2007 15:57:07 -0500 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: <20071210205707.GH767@virginia.edu> On 10/12/07: 19:43, Nils Wagner wrote: > Hi all, > > I have an array > > >>> a > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, > 13, 14, 15, 16, > 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, > 29, 30, 31, 32, 33, > 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, > 46, 47, 48, 49, 50, > 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, > 63, 64, 65]) > > How can I extract an array b from a such that > > >>> b > array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, > 25, 26, 30, 31, > 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, > 56, 60, 61, 62]) I can do it by: b = a.copy() b.shape = (-1,6) b = b[:,0:3].copy() b.shape = (1, -1) b = b.squeeze() Maybe there is a more efficient method, but this seems to work. -Alok -- Alok Singhal (as8ca at virginia.edu) * * Graduate Student, dept. 
of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From silva at lma.cnrs-mrs.fr Mon Dec 10 17:07:59 2007 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Mon, 10 Dec 2007 23:07:59 +0100 Subject: [SciPy-user] Slicing arrays In-Reply-To: References: Message-ID: <1197324480.3220.1.camel@localhost.localdomain> >>> a=arange(66) >>> a array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]) >>> tmp = a.reshape(11,6) >>> tmp array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41], [42, 43, 44, 45, 46, 47], [48, 49, 50, 51, 52, 53], [54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65]]) >>> b=tmp[:,0:3] >>> b array([[ 0, 1, 2], [ 6, 7, 8], [12, 13, 14], [18, 19, 20], [24, 25, 26], [30, 31, 32], [36, 37, 38], [42, 43, 44], [48, 49, 50], [54, 55, 56], [60, 61, 62]]) >>> b.ravel() array([ 0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20, 24, 25, 26, 30, 31, 32, 36, 37, 38, 42, 43, 44, 48, 49, 50, 54, 55, 56, 60, 61, 62]) -- Fabrice Silva LMA UPR CNRS 7051 - équipe S2M From eric at enthought.com Mon Dec 10 18:17:06 2007 From: eric at enthought.com (eric jones) Date: Mon, 10 Dec 2007 17:17:06 -0600 Subject: [SciPy-user] using STL in scipy weave inline In-Reply-To: <124989.33872.qm@web58006.mail.re3.yahoo.com> References: <124989.33872.qm@web58006.mail.re3.yahoo.com> Message-ID: <475DC8F2.8040705@enthought.com> Michael ODonnell wrote: > OS Microsoft XP SP2 x86 > python 2.5.1 > compiler: mingw > > Does anyone have experience using multimap or the standard template > library within weave inline? I need to create a map that contains > numeric keys which are paired to a string, but I get an error when > declaring the multimap function.
If I include <map> I get other errors > about namespace and if I don't then I get an error for not declaring the > multimap. Neither method works. There is no problem using the stl from weave. In fact, it uses std::string internally. You can use headers=["<map>"] as a keyword to the inline call to get map into the namespace. Once you do that, you'll need to either declare "using namespace std;" at the top of your code or use std::multimap and std::string to refer to the classes in the stl. Once you've done that, you'll still have some issues as multimap doesn't support the operator [], and your C++ strings have single quotes around them instead of double quotes. I've attached an example for your code that instead works using a standard python dictionary (use py::dict within weave). The output is shown here: In [33]: run we_ex.py In [34]: res Out[34]: {0.050000000000000003: 'fESE', 0.10000000000000001: 'fSSE', 0.14999999999999999: 'fFlat', 0.25: 'fNNE', 0.40000000000000002: 'fSSW', 0.5: 'fWNW', 0.55000000000000004: 'fNNW', 1.1000000000000001: 'fND', 1.5: 'fWSW'} If you need to use the stl for some other reason, then refer to this to see how to do it: http://www.sgi.com/tech/stl/Multimap.html Good luck, eric > > TIA, > Michael > > Example of code: > > total = (winx * winy * 1.0) #Total number of cells to evaluate in kernel > > code = r""" > double ND, NNW, WNW, WSW,SSW, SSE, ESE, ENE, NNE, Flat; > ND = 22.0; > NNW = 11.0; > WNW = 10.0; > WSW = 30.0; > SSW = 8.0; > SSE = 2.0; > ESE = 1.0; > ENE = 5.0; > NNE = 5.0; > Flat = 3.0; > > //Calculate the frequency of the counts > fND = (ND / total); > fNNW = (NNW / total); > fWNW = (WNW / total); > fWSW = (WSW / total); > fSSW = (SSW / total); > fSSE = (SSE / total); > fESE = (ESE / total); > fENE = (ENE / total); > fNNE = (NNE / total); > fFlat = (Flat / total); > > //Create a map for pairs: frequency, string > multimap<double, string> map_freq; //Create empty map > > map_freq[fND) = 'fND'; > map_freq[fNNW] = 'fNNW'; > map_freq[fWNW] = 'fWNW'; >
map_freq[fWSW] = 'fWSW'; > map_freq[fSSW] = 'fSSW'; > map_freq[fSSE] = 'fSSE'; > map_freq[fESE] = 'fESE'; > map_freq[fENE] = 'fENE'; > map_freq[fNNE] = 'fNNE'; > map_freq[fFlat] = 'fFlat'; > """ > > weave.inline(code,['total'], type_converters=converters.blitz, > compiler='gcc') -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: we_ex.py From nmelgarejodiaz at gmail.com Tue Dec 11 06:04:16 2007 From: nmelgarejodiaz at gmail.com (Natali Melgarejo Diaz) Date: Tue, 11 Dec 2007 12:04:16 +0100 Subject: [SciPy-user] Green's function Message-ID: Hi to everyone, I'm new to this list... I have a question about a formula I haven't found yet: do you know the "Green function"? Is there anything similar in SciPy? 'Cause in Matlab there is one, but I haven't found it in SciPy yet. Thanks in advance to everyone ;)) ********Natali******** From nikolaskaralis at gmail.com Tue Dec 11 07:34:07 2007 From: nikolaskaralis at gmail.com (Nikolas Karalis) Date: Tue, 11 Dec 2007 14:34:07 +0200 Subject: [SciPy-user] Characteristic polynomial Message-ID: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> Hello... I'm trying to compute the characteristic polynomial of an integer (numpy) matrix. But I cannot find any way of doing this. Do you have any ideas/suggestions? -- Nikolas Karalis Applied Mathematics and Physics Undergraduate National Technical University of Athens, Greece http://users.ntua.gr/ge04042
URL: From bryanv at enthought.com Tue Dec 11 10:37:37 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Tue, 11 Dec 2007 09:37:37 -0600 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> Message-ID: <475EAEC1.4020501@enthought.com> In [16]: numpy.lib.poly? Type: function Base Class: String Form: Namespace: Interactive File: /workspace/python2.5/lib/python2.5/site-packages/numpy/lib/polynomial.py Definition: numpy.lib.poly(seq_of_zeros) Docstring: Return a sequence representing a polynomial given a sequence of roots. If the input is a matrix, return the characteristic polynomial. Example: >>> b = roots([1,3,1,5,6]) >>> poly(b) array([ 1., 3., 1., 5., 6.]) In [17]: a = array([[1,0],[0,2]]) In [18]: coeffs = numpy.lib.poly(a) In [19]: numpy.lib.poly1d(coeffs) Out[19]: poly1d([ 1., -3., 2.]) Nikolas Karalis wrote: > Hello... I'm trying to compute the characteristic polynomial of an > integer (numpy) matrix. But i cannot find any way of doing this. > Do you have any ideas/suggestions? 
> > -- > Nikolas Karalis > Applied Mathematics and Physics Undergraduate > National Technical University of Athens, Greece > http://users.ntua.gr/ge04042 > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From nikolaskaralis at gmail.com Tue Dec 11 11:38:17 2007 From: nikolaskaralis at gmail.com (Nikolas Karalis) Date: Tue, 11 Dec 2007 18:38:17 +0200 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: <475EAEC1.4020501@enthought.com> References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> Message-ID: <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> Hello Bryan and thanks for responding. I have already tried that, but check this example... >>> from numpy import * >>> a=arange(25) >>> a.shape=5,5 >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) >>> coeffs=poly(a) >>> coeffs array([ 1.00000000e+00, -6.00000000e+01, -2.50000000e+02, -1.67542241e-13, 1.82969746e-28, -8.57128973e-45]) >>> poly1d(coeffs) poly1d([ 1.00000000e+00, -6.00000000e+01, -2.50000000e+02, -1.67542241e-13, 1.82969746e-28, -8.57128973e-45]) Check these coefficients, like -1.67542241e-13 etc... these numbers are supposed to be zero, but they are "very small". The very same example in sage : A = matrix(5,5, range(25)); A A.charpoly() x^5 - 60*x^4 - 250*x^3 I tried to apply round function on the coefficients : >>> for i in range(len(coeffs)): coeffs[i]=round(coeffs[i]) ... >>> coeffs array([ 1., -60., -250., 0., 0., 0.]) and while in this case it returns the right result, there are cases where the coefficients are like : 1.1234566789e64, where the results cannot be rounded (since they are already integers) but they are different where they were supposed to be the same. 
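The rounding workaround described above can be sketched on the first example (numpy only; np.poly of a square matrix returns the floating-point characteristic polynomial):

```python
import numpy as np

# Characteristic polynomial of the 5x5 example above. The exact result
# is x^5 - 60*x^4 - 250*x^3, so the trailing coefficients should be
# zero but come back as tiny floating-point residues.
a = np.arange(25).reshape(5, 5)
coeffs = np.poly(a)
rounded = np.round(coeffs)
```

As noted in the thread, rounding only recovers the exact answer when the true coefficients are small enough to be represented exactly in float64.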
A strange example : For the 10x10 matrices : A=matrix([[36,8,8,1,1,1,1,1,8,1], [8,36,8,8,1,1,1,1,1,1], [8,8,25,8,1,8,1,1,1,1], [1,8,8,36,8,1,1,1,1,1], [1,1,1,8,36,1,8,8,1,1], [1,1,8,1,1,36,8,1,1,8], [1,1,1,1,8,8,49,1,1,1], [1,1,1,1,8,1,1,49,8,1], [8,1,1,1,1,1,1,8,36,8], [1,1,1,1,1,8,1,1,8,49]]) B=matrix([[36,8,8,1,1,1,1,8,1,1], [8,25,8,1,8,8,1,1,1,1], [8,8,36,8,1,1,1,1,1,1], [1,1,8,36,8,1,1,1,1,8], [1,8,1,8,36,8,1,1,1,1], [1,8,1,1,8,36,8,1,1,1], [1,1,1,1,1,8,49,1,8,1], [8,1,1,1,1,1,1,49,8,1], [1,1,1,1,1,1,8,8,36,8], [1,1,1,8,1,1,1,1,8,49]]) >>> poly(A) array([ 1.00000000e+00, -3.88000000e+02, 6.65430000e+04, -6.63946000e+06, 4.26558031e+08, -1.84256082e+10, 5.41543001e+11, -1.06843860e+13, 1.35292436e+14, -9.91717494e+14, 3.19104663e+15]) >>> poly(B) array([ 1.00000000e+00, -3.88000000e+02, 6.65430000e+04, -6.63946000e+06, 4.26558031e+08, -1.84256082e+10, 5.41543001e+11, -1.06843860e+13, 1.35292436e+14, -9.91717494e+14, 3.19104663e+15]) sage returns : x^10 - 388*x^9 + 66543*x^8 - 6639460*x^7 + 426558031*x^6 - 18425608218*x^5 + 541543001174*x^4 - 10684385967388*x^3 + 135292435663763*x^2 - 991717494497240*x + 3191046625350150 for both matrices. You can see the difference between 3.19104663e+15 and 3191046625350150 On the other hand, if we get the exact value of 3.19104663e+15 (by using : poly(A)[-1] and poly(B)[-1]) we have : 3191046625350152.0 3191046625350153.5 And with all these... :S I'm really confused. Anyway, it would be really useful if there existed a function to compute the modular characteristic polynomial (modulo a big prime) (like in maple). On Dec 11, 2007 5:37 PM, Bryan Van de Ven wrote: > In [16]: numpy.lib.poly? > Type: function > Base Class: > String Form: > Namespace: Interactive > File: > /workspace/python2.5/lib/python2.5/site-packages/numpy/lib/polynomial.py > Definition: numpy.lib.poly(seq_of_zeros) > Docstring: > Return a sequence representing a polynomial given a sequence of roots. 
> > If the input is a matrix, return the characteristic polynomial. > > Example: > > >>> b = roots([1,3,1,5,6]) > >>> poly(b) > array([ 1., 3., 1., 5., 6.]) > > > In [17]: a = array([[1,0],[0,2]]) > > In [18]: coeffs = numpy.lib.poly(a) > > In [19]: numpy.lib.poly1d(coeffs) > Out[19]: poly1d([ 1., -3., 2.]) > > > > > Nikolas Karalis wrote: > > Hello... I'm trying to compute the characteristic polynomial of an > > integer (numpy) matrix. But i cannot find any way of doing this. > > Do you have any ideas/suggestions? > > > > -- > > Nikolas Karalis > > Applied Mathematics and Physics Undergraduate > > National Technical University of Athens, Greece > > http://users.ntua.gr/ge04042 > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Nikolas Karalis Applied Mathematics and Physics Undergraduate National Technical University of Athens, Greece http://users.ntua.gr/ge04042 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Tue Dec 11 11:45:03 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 11 Dec 2007 17:45:03 +0100 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> Message-ID: You can't expect scipy to find the exact coefficients. sage is a symbolic package, it will find the correct answers, but scipy will find only an approximated one, up to the machine precision. 
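That limit can be made concrete: at the magnitude of the constant coefficient in the 10x10 example (about 3.19e15), adjacent float64 values are 0.5 apart, so the two "equal" results 3191046625350152.0 and 3191046625350153.5 differ by only three representable steps:

```python
import numpy as np

# Spacing between adjacent float64 values at this magnitude.
ulp = np.spacing(3191046625350152.0)
diff = 3191046625350153.5 - 3191046625350152.0
```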
This is what you see in your example. If you have integers, you could expect scipy to return long integers (exact result), but this is not the case, as almost everything is converted into a float array before the actual C or Fortran routine is run. Matthieu 2007/12/11, Nikolas Karalis : > [snip]
> [snip]
> [snip] -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn :
http://www.linkedin.com/in/matthieubrucher From lou_boog2000 at yahoo.com Tue Dec 11 11:52:19 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 11 Dec 2007 08:52:19 -0800 (PST) Subject: [SciPy-user] Green's function In-Reply-To: Message-ID: <447725.19101.qm@web34409.mail.mud.yahoo.com> There is no particular special function called the Green Function (sometimes named Green's Function). Green functions are special solutions of certain partial differential equations (PDE), i.e. they are a certain class of functions. It's like saying Orthogonal Polynomials -- that's a class, not a particular type. Green functions have the property that when used in the PDE they yield a Dirac delta function. That makes them useful for the solution of the same PDE, but with a nonhomogeneous term added to the "other side." So each PDE has its own Green function. E.g. the Helmholtz equation, D^2 G + k^2 G = 0 (D^2 = Laplacian), has the Green function G(r,r') = i H_0(k|r'-r|)/4, where H_0 is a Hankel function and i = sqrt(-1). I suggest you look at some texts (maybe mathematical physics books) for the subject. Even a Google search will give you some information. -- Lou Pecora --- Natali Melgarejo Diaz wrote: > Hi to everyone, I'm new to this list... I have a > question about a > formula I haven't found yet: do you know the > "Green function"? Is > there anything similar in SciPy? 'Cause in Matlab > there is one but > I haven't found it in SciPy yet. > > Thanks in advance to everyone ;)) > > > ********Natali******** From fie.pye at atlas.cz Tue Dec 11 14:03:00 2007 From: fie.pye at atlas.cz (Pye Fie) Date: Tue, 11 Dec 2007 19:03:00 GMT Subject: [SciPy-user] Building VTK Message-ID: Hello. I would like to build and install VTK and MayaVi2.
PC: 2x Dual-core Opteron 275 OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 OS compiler: gcc version 4.2.0, gfortran I have already built and installed python2.5.1 and other module with compiler gcc3.4.6. Now I can't build VTK. I am building with gcc3.4.6. Building stops on the following error: [ 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkZLibDataCompressorTcl.o [ 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkIOTCLInit.o Linking CXX shared library ../bin/libvtkIOTCL.so [ 74%] Built target vtkIOTCL Scanning dependencies of target vtkParseOGLExt [ 74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/ParseOGLExt.o [ 74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o Linking CXX executable ../../bin/vtkParseOGLExt CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function `std::__verify_grouping(char const*, unsigned long, std::basic_string, std::allocator > const&)': Tokenizer.cxx:(.text+0x19): undefined reference to `std::basic_string, std::allocator >::size() const' Tokenizer.cxx:(.text+0x70): undefined reference to `std::basic_string, std::allocator >::operator[](unsigned long) const' Tokenizer.cxx:(.text+0xb0): undefined reference to `std::basic_string, std::allocator >::operator[](unsigned long) const' Tokenizer.cxx:(.text+0xdd): undefined reference to `std::basic_string, std::allocator >::operator[](unsigned long) const' CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function `Tokenizer::Tokenizer(char const*, char const*)': Tokenizer.cxx:(.text+0x12a): undefined reference to `std::allocator::allocator()' Tokenizer.cxx:(.text+0x13b): undefined reference to `std::basic_string, std::allocator >::basic_string(char const*, std::allocator const&)' Tokenizer.cxx:(.text+0x14e): undefined reference to `std::allocator::~allocator()' Tokenizer.cxx:(.text+0x164): undefined reference to `std::allocator::~allocator()' Tokenizer.cxx:(.text+0x16d): undefined reference to `std::allocator::allocator()' 
Tokenizer.cxx:(.text+0x182): undefined reference to `std::basic_string, std::allocator >::basic_string(char const*, std::allocator const&)'

The list continues. It seems that there is something wrong with OpenGL but I can't find what is wrong. Could anybody help me? I have attached more detailed information about VTK configuration, building output and building script.

Best regards
fie pye
------------------------------------------ http://search.atlas.cz/ -------------- next part -------------- A non-text attachment was scrubbed... Name: vtk_build.tar.gz Type: application/x-gzip Size: 30276 bytes Desc: not available URL: From clovisgo at gmail.com Tue Dec 11 14:09:34 2007 From: clovisgo at gmail.com (Clovis Goldemberg) Date: Tue, 11 Dec 2007 17:09:34 -0200 Subject: [SciPy-user] Small problem with scipy.signal Message-ID: <6f239f130712111109r139e0d47y2c5c2ab273892094@mail.gmail.com>

I am having some problems with scipy.signal (specifically with scipy.signal.step) which can be explained by the following text, which was edited with some comments. Configuration is also given in the text.

#####################################
Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp.
C:\>d:\python25\python
Python 2.5.1 (r251:54863, Apr 18 2007, win32
Type "help", "copyright", "credits" or
>>> import scipy.signal as ss
>>> import scipy
>>> scipy.__version__
'0.6.0'
>>> import numpy
>>> numpy.__version__
'1.0.4'
#####################################
# numpy.test(10,2) runs without any problem
# scipy.test(10,2) runs without any problem
#####################################
>>> num=[1.0]
>>> den=[1.0,1.0]
>>> ss.step((num,den))
#####################################
# This works fine!
# The transfer function would be "1/(1s+1)"
# which is a simple low-pass filter.
#####################################
(array([ 0.  ,  0.07,  0.14,  0.21,  ...,  6.44,  6.51,  ...]),
 array([ 0.        ,  0.06760618,  ...,  0.99870598,  0.99879346,  ...]))
>>> den=[1.0,0.0]
>>> ss.step((num,den))
#####################################
# This fails!
# The transfer function would be "1/s"
# which is a simple integrator.
#####################################
Traceback (most recent call last):
File "", line 1, in
File "d:\python25\lib\site-packages\s
vals = lsim(sys, U, T, X0=X0)
File "d:\python25\lib\site-packages\s
xout[0] = X0
IndexError: index out of bounds
>>>
#####################################
Any suggestion? Thanks for your kind support.

---------- Forwarded message ----------
From: "Clovis Goldemberg"
To: scipy-user-request at scipy.org
Date: Mon, 10 Dec 2007 09:53:26 -0200
Subject: Small problem with "scipy.signal.step"

I'm having a small problem with scipy.signal, which is exposed in the following text, which was edited with some comments.

#####################################
Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp.
C:\>d:\python25\python
Python 2.5.1 (r251:54863, Apr 18 2007, win32
Type "help", "copyright", "credits" or
>>> import scipy.signal as ss
>>> import scipy
>>> scipy.__version__
'0.6.0'
>>> import numpy
>>> numpy.__version__
'1.0.4'
#####################################
# numpy.test(10,2) runs without any problem
# scipy.test(10,2) runs without any problem
#####################################
>>> num=[1.0]
>>> den=[1.0,1.0]
>>> ss.step((num,den))
#####################################
# This works fine!
# The transfer function would be "1/(1s+1)"
# which is a simple low-pass filter.
#####################################
(array([ 0.  ,  0.07,  0.14,  0.21,  ...,  6.44,  6.51,  ...]),
 array([ 0.        ,  0.06760618,  ...,  0.99870598,  0.99879346,  ...]))
>>> den=[1.0,0.0]
>>> ss.step((num,den))
#####################################
# This fails!
# The transfer function would be "1/s"
# which is a simple integrator.
##################################### Traceback (most recent call last): File "", line 1, in File "d:\python25\lib\site-packages\s vals = lsim(sys, U, T, X0=X0) File "d:\python25\lib\site-packages\s xout[0] = X0 IndexError: index out of bounds >>> ##################################### Any suggestion? Thanks for your kind support. Clovis Goldemberg Escola Politecnica da USP Brasil From matthieu.brucher at gmail.com Tue Dec 11 14:56:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 11 Dec 2007 20:56:05 +0100 Subject: [SciPy-user] Building VTK In-Reply-To: References: Message-ID: Hi, This issue is related to a C++ problem in VTK, not Python, so we can't say much. I advise you to post this on the VTK ML. Matthieu 2007/12/11, Pye Fie : > > Hello. > > I would like to build and install VTK and MayaVi2. > PC: 2x Dual-core Opteron > 275 > OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 > OS compiler: gcc version 4.2.0, gfortran > > I > have already built and installed python2.5.1 and other module with > compiler gcc3.4.6. > Now I can't build VTK. I am building with gcc3.4.6. 
Building stops on the > following > error: > > [ 74%] Building CXX object > IO/CMakeFiles/vtkIOTCL.dir/vtkZLibDataCompressorTcl.o > [ > 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkIOTCLInit.o > Linking CXX shared > library ../bin/libvtkIOTCL.so > [ 74%] Built target vtkIOTCL > Scanning dependencies > of target vtkParseOGLExt > [ 74%] Building CXX object > Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/ParseOGLExt.o > [ > 74%] Building CXX object > Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o > Linking > CXX executable ../../bin/vtkParseOGLExt > CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: > In function `std::__verify_grouping(char const*, unsigned long, > std::basic_string std::char_traits, std::allocator > const&)': > Tokenizer.cxx:(.text+0x19): > undefined reference to `std::basic_string, > std::allocator > >::size() const' > Tokenizer.cxx:(.text+0x70): undefined reference to > `std::basic_string std::char_traits, std::allocator >::operator[](unsigned long) > const' > Tokenizer.cxx:(.text+0xb0): > undefined reference to `std::basic_string, > std::allocator > >::operator[](unsigned long) const' > Tokenizer.cxx:(.text+0xdd): undefined reference > to `std::basic_string, std::allocator > >::operator[](unsigned > long) const' > CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function > `Tokenizer::Tokenizer(char > const*, char const*)': > Tokenizer.cxx:(.text+0x12a): undefined reference to > `std::allocator::allocator()' > Tokenizer.cxx:(.text+0x13b): > undefined reference to `std::basic_string, > std::allocator > >::basic_string(char const*, std::allocator const&)' > Tokenizer.cxx:(.text+0x14e): > undefined reference to `std::allocator::~allocator()' > Tokenizer.cxx:(.text+0x164): > undefined reference to `std::allocator::~allocator()' > Tokenizer.cxx:(.text+0x16d): > undefined reference to `std::allocator::allocator()' > Tokenizer.cxx:(.text+0x182): > undefined reference to `std::basic_string, > std::allocator > >::basic_string(char 
const*, std::allocator const&)' > > The list continues. It > seams that there is something wrong with OpenGL but I can't find what is > wrong. Could > anybody help me? > I have attached more detailed information about VTK configuration, > building output and building script. > > Best regards > fie pye > ------------------------------------------ > > http://search.atlas.cz/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Tue Dec 11 14:57:49 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 11 Dec 2007 20:57:49 +0100 Subject: [SciPy-user] Building VTK In-Reply-To: References: Message-ID: Besides, you tell us you use GCC 3.4.6 but in your description, it is GCC 4.2... The issue may be solved by linking against GCC libraries, but again, the VTK ML may be a better place for the solution. Matthieu 2007/12/11, Matthieu Brucher : > > Hi, > > This issue is related to a C++ problem in VTK, not Python, so we can't say > much. I advise you to post this on the VTK ML. > > Matthieu > > 2007/12/11, Pye Fie < fie.pye at atlas.cz>: > > > > Hello. > > > > I would like to build and install VTK and MayaVi2. > > PC: 2x Dual-core Opteron > > 275 > > OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 > > OS compiler: gcc version 4.2.0, gfortran > > > > I > > have already built and installed python2.5.1 and other module with > > compiler gcc3.4.6. > > Now I can't build VTK. I am building with gcc3.4.6. 
Building stops on > > the following > > error: > > > > [ 74%] Building CXX object > > IO/CMakeFiles/vtkIOTCL.dir/vtkZLibDataCompressorTcl.o > > [ > > 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkIOTCLInit.o > > Linking CXX shared > > library ../bin/libvtkIOTCL.so > > [ 74%] Built target vtkIOTCL > > Scanning dependencies > > of target vtkParseOGLExt > > [ 74%] Building CXX object > > Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/ParseOGLExt.o > > [ > > 74%] Building CXX object > > Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o > > Linking > > CXX executable ../../bin/vtkParseOGLExt > > CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: > > In function `std::__verify_grouping(char const*, unsigned long, > > std::basic_string > std::char_traits, std::allocator > const&)': > > Tokenizer.cxx:(.text+0x19): > > undefined reference to `std::basic_string, > > std::allocator > > >::size() const' > > Tokenizer.cxx:(.text+0x70): undefined reference to > > `std::basic_string > std::char_traits, std::allocator >::operator[](unsigned > > long) const' > > Tokenizer.cxx: (.text+0xb0): > > undefined reference to `std::basic_string, > > std::allocator > > >::operator[](unsigned long) const' > > Tokenizer.cxx:(.text+0xdd): undefined reference > > to `std::basic_string, std::allocator > > >::operator[](unsigned > > long) const' > > CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function > > `Tokenizer::Tokenizer(char > > const*, char const*)': > > Tokenizer.cxx:(.text+0x12a): undefined reference to > > `std::allocator::allocator()' > > Tokenizer.cxx:(.text+0x13b): > > undefined reference to `std::basic_string, > > std::allocator > > >::basic_string(char const*, std::allocator const&)' > > Tokenizer.cxx:(.text+0x14e): > > undefined reference to `std::allocator::~allocator()' > > Tokenizer.cxx:(.text+0x164): > > undefined reference to `std::allocator::~allocator()' > > Tokenizer.cxx:(.text+0x16d): > > undefined reference to `std::allocator::allocator()' > > 
Tokenizer.cxx:(.text+0x182): > > undefined reference to `std::basic_string, > > std::allocator > > >::basic_string(char const*, std::allocator const&)' > > > > The list continues. It > > seams that there is something wrong with OpenGL but I can't find what is > > wrong. Could > > anybody help me? > > I have attached more detailed information about VTK configuration, > > building output and building script. > > > > Best regards > > fie pye > > ------------------------------------------ > > > > http://search.atlas.cz/ > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Tue Dec 11 15:01:24 2007 From: hasslerjc at comcast.net (John Hassler) Date: Tue, 11 Dec 2007 15:01:24 -0500 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> Message-ID: <475EEC94.3010609@comcast.net> An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Tue Dec 11 15:05:04 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 11 Dec 2007 13:05:04 -0700 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> Message-ID: On Dec 11, 2007 9:45 AM, Matthieu Brucher wrote: > You can't expect scipy to find the exact coefficients. sage is a symbolic > package, it will find the correct answers, but scipy will find only an > approximate one, up to the machine precision. This is what you see in your > example. > If you have integers, you could expect scipy to return long integers (exact > result), but this is not the case as almost everything is converted into a > float array before the actual C or Fortran routine is run. In this case Sage isn't using anything symbolic though: the issue is that Sage has from the ground up defined a type system where it knows what field its inputs are being computed over. So if a matrix is made up of only integers, it knows that computations are to be performed in exact arithmetic over Z, and does so accordingly. Sage's origins are actually number theory, not symbolic computing, and its strongest area is precisely in exact arithmetic, with enormous sophistication for some of its algorithms. Initially its (free) symbolic capabilities were all obtained via calls to Maxima, though now that Ondrej is actively helping with SymPy integration into Sage, that is changing and SymPy is natively available as well, which means that over time there will be more and more native (python) symbolic support as well. I'd agree though that for this kind of calculation, Sage is the tool to use.
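For what it's worth, an exact computation over the integers is easy to sketch in pure Python with the `fractions` module. This is the Faddeev-LeVerrier recursion, not Sage's actual implementation; the helper name and structure below are purely illustrative:

```python
from fractions import Fraction

def charpoly(A):
    """Exact characteristic polynomial of a square integer/rational
    matrix via the Faddeev-LeVerrier recursion.  Returns coefficients
    [1, c_{n-1}, ..., c_0] of x^n + c_{n-1} x^(n-1) + ... + c_0."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    # M starts as the identity matrix
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        # AM = A * M (exact rational matrix product)
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) / k   # c = -tr(A M) / k
        coeffs.append(c)
        # next M = A M + c I
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs
```

For the integer matrix from earlier in the thread, `charpoly([[1, 0], [0, 2]])` returns `[Fraction(1, 1), Fraction(-3, 1), Fraction(2, 1)]`, i.e. x^2 - 3x + 2 with no floating-point rounding.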
Cheers, f From lorenzo.isella at gmail.com Tue Dec 11 15:39:18 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Tue, 11 Dec 2007 21:39:18 +0100 Subject: [SciPy-user] Operations on Arrays in SciPy Message-ID: <475EF576.8010503@gmail.com> Dear All, A probably quick question: SciPy's arrays are objects with methods, are they not? For instance, consider the following: import scipy as s z=s.arange(30) mean_z=s.mean(z) mean_z_2=z.mean() Is there a reason why one way of operating on the array should be preferable? Is it the same thing to work out the mean of z as mean_z or mean_z_2? Many thanks Lorenzo From tjhnson at gmail.com Tue Dec 11 15:48:01 2007 From: tjhnson at gmail.com (Tom Johnson) Date: Tue, 11 Dec 2007 12:48:01 -0800 Subject: [SciPy-user] maskedarray Message-ID: What is the status regarding 'maskedarray'? When will this become part (replace ma) of the standard distribution? Also, what is the recommended way to import sandbox modules so that code changes are minimized when (if) the package becomes part of the standard distribution. From robert.kern at gmail.com Tue Dec 11 16:19:32 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Dec 2007 15:19:32 -0600 Subject: [SciPy-user] Operations on Arrays in SciPy In-Reply-To: <475EF576.8010503@gmail.com> References: <475EF576.8010503@gmail.com> Message-ID: <475EFEE4.2060407@gmail.com> Lorenzo Isella wrote: > Dear All, > A probably quick question: SciPy's arrays are objects with methods, > are they not? Yes. > For instance, consider the following: > import scipy as s > z=s.arange(30) > mean_z=s.mean(z) > mean_z_2=z.mean() > > Is there a reason why one way of operating on the array should be > preferable? scipy.mean() can accept anything that can be converted to an array, like a list of numbers. > Is it the same thing to work out the mean of z as mean_z or mean_z_2? Yes. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Tue Dec 11 17:41:17 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 12 Dec 2007 00:41:17 +0200 Subject: [SciPy-user] Small problem with scipy.signal In-Reply-To: <6f239f130712111109r139e0d47y2c5c2ab273892094@mail.gmail.com> References: <6f239f130712111109r139e0d47y2c5c2ab273892094@mail.gmail.com> Message-ID: <20071211224117.GG11744@mentat.za.net> Hi Clovis On Tue, Dec 11, 2007 at 05:09:34PM -0200, Clovis Goldemberg wrote: > I am having some problems with scipy.signal > (specifically with scipy.signal.step) which can be > explained by the following text, > which was edited with some comments. > Configuration is also given in the text. [...] > >>> num=[1.0] > >>> den=[1.0,0.0] > >>> ss.step((num,den)) > ##################################### > # This fails! > # The transfer function would be "1/s" > # which is a simple integrator. > ##################################### Looks like the system is converted to state space. After that, somewhere along the line A is inverted, which causes the problem. I think a person can get away with only inverting [I-A], but I'll have to poke around the source a bit to be sure. Regards Stéfan From roger.herikstad at gmail.com Tue Dec 11 21:28:12 2007 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 12 Dec 2007 10:28:12 +0800 Subject: [SciPy-user] Correlation coefficients Message-ID: Hi all, I was wondering if there is an efficient way of determining the degree of correlation between an array and a pool of reference arrays (> 10,000). I'm using corrcoef when the pool is small ( <1,000), but I have cases where I need to compare against more than 10,000 arrays, in which case corrcoef is slow. Any suggestions? Thanks!
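For reference, a vectorized sketch of this kind of comparison (array sizes and data here are made up): center and normalize the pool once, after which every Pearson correlation is a single matrix-vector product instead of one corrcoef call per pair.

```python
import numpy as np

def corr_with_pool(signal, pool):
    # Pearson r of one length-n `signal` against every row of an
    # (m, n) `pool`: subtract each mean, scale to unit norm, then
    # the correlations are just dot products.
    s = signal - signal.mean()
    s = s / np.sqrt((s ** 2).sum())
    p = pool - pool.mean(axis=1)[:, np.newaxis]
    p = p / np.sqrt((p ** 2).sum(axis=1))[:, np.newaxis]
    return np.dot(p, s)

signal = np.random.randn(512)
pool = np.random.randn(10000, 512)   # 10,000 reference arrays, one per row
r = corr_with_pool(signal, pool)     # shape (10000,)
```

Each entry of `r` matches `np.corrcoef(signal, pool[i])[0, 1]` up to floating-point error, but the pool statistics are computed only once instead of once per pair.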
~ Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnsulman at wisc.edu Wed Dec 12 01:04:26 2007 From: bnsulman at wisc.edu (Ben Sulman) Date: Wed, 12 Dec 2007 06:04:26 +0000 (UTC) Subject: [SciPy-user] time series analysis References: <200711051103.07410.pgmdevlist@gmail.com> <200711051437.34362.pgmdevlist@gmail.com> Message-ID: Matt Knox hotmail.com> writes: > > > > Wow, this looks great. But a little complex > > > > > Well, one could write functions for common tasks that fascilitate it a > > > bit... > > If you have any ideas for simplifying/improving things, we are certainly open > to suggestions and would love the feedback. Being a sandbox package currently, > there is no better time then now to get your ideas incorporated into the > timeseries module. > Hi, I was just looking through this list for ideas on how I could better use scipy. My issue is that I'm working with time series data, with missing values, that is in half-hour increments. As far as I can tell, with the current implementation of timeseries many operations will only work if the data has a point for every increment of a built in frequency. This means I have to either put my data in hourly form, which would require averaging, or convert it to minute frequency with missing values, which expands the size of my data by 30x. Is there a way around this, where I could just process half-hourly data? Thanks for your help! Timeseries looks like a really useful package if I can get it to play well with my data! Ben Sulman From cscheit at lstm.uni-erlangen.de Wed Dec 12 05:57:14 2007 From: cscheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Wed, 12 Dec 2007 11:57:14 +0100 Subject: [SciPy-user] read ascii data In-Reply-To: References: Message-ID: <200712121157.14457.cscheit@lstm.uni-erlangen.de> Hi All, I have a file containing ascii data related to a block-structured grid. 
I could use loadtxt, but I would like to have the possibility to read only a certain range of rows out of the file. Let's say I have ten blocks in my grid and each block has 10 rows of data. So in one read I would like to read, for instance, line 30 to line 39. I can use skiprows, but how can I limit the loadtxt to the upper limit? Is there something already available or do I have to do it on my own? Thanks in advance, Christoph From xavier.barthelemy at cmla.ens-cachan.fr Wed Dec 12 07:57:32 2007 From: xavier.barthelemy at cmla.ens-cachan.fr (Xavier Barthelemy) Date: Wed, 12 Dec 2007 13:57:32 +0100 Subject: [SciPy-user] read ascii data In-Reply-To: <200712121157.14457.cscheit@lstm.uni-erlangen.de> References: <200712121157.14457.cscheit@lstm.uni-erlangen.de> Message-ID: <475FDABC.7000408@cmla.ens-cachan.fr> hi in scipy, or numpy I can't recall, there's a function named io.read_array data= io.read_array(filename,columns=(0,-1),lines=(0,-1),atype='thetypeyouwant') columns and lines can be all like in this example, or you can just choose where to begin and where to end, or just a discrete list. check the Doc, it is relatively well explained Cheers Xavier Christoph Scheit a écrit : > Hi All, > > I have a file containing ascii data related to a block-structured grid. > I could use loadtxt, but I would like to have the possibility to read only > a certain range of rows out of the file. Let's say I have ten blocks in my grid > and each block has 10 rows of data. > So in one read I would like to read for instance line 30 to line 39 > I can use skiprows, but how can I limit the loadtxt to the upper limit? > Is there something already available or do I have to do it on my own?
> > Thanks in advance, > > Christoph > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From odonnems at yahoo.com Wed Dec 12 08:55:49 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Wed, 12 Dec 2007 05:55:49 -0800 (PST) Subject: [SciPy-user] read ascii data Message-ID: <255482.8679.qm@web58012.mail.re3.yahoo.com> I am a new python/scipy user, but my first question is what type of grid data are you working with. I am not sure how you are defining 'grid' in other words. If you are working with spatial raster data, there is a library called GDAL. If the data has no spatial content you may try PIL library. I have also worked with reading comma delimited ascii files and you could also enumerate and skip all lines except for those you are interested in pretty easily and quickly. These are a few ideas, but which method you should select likely depends on the format of the data as well as what you plan on doing with the data that you are reading. I hope this gives you some ideas, Michael ----- Original Message ---- From: Christoph Scheit To: scipy-user at scipy.org Sent: Wednesday, December 12, 2007 3:57:14 AM Subject: [SciPy-user] read ascii data Hi All, I have a file containing ascii data related to a block-structured grid. I could use readtxt, but I would like to have the possibility to read only a certain range of rows out of the file. Lets say I have ten blocks in my grid and each block 10 rows of data. So in one read I would like to read for instance line 30 to line 39 I can use skip rows, but how can I limit the loadtxt to the upper limit? Is there something already available or do I have to do it on my own? 
Thanks in advance, Christoph _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user ____________________________________________________________________________________ Looking for last minute shopping deals? Find them fast with Yahoo! Search. http://tools.search.yahoo.com/newsearch/category.php?category=shopping -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Dec 12 09:24:49 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 12 Dec 2007 09:24:49 -0500 Subject: [SciPy-user] maskedarray In-Reply-To: References: Message-ID: <200712120924.49482.pgmdevlist@gmail.com> On Tuesday 11 December 2007 15:48:01 Tom Johnson wrote: > What is the status regarding 'maskedarray'? When will this become > part (replace ma) of the standard distribution? Also, what is the > recommended way to import sandbox modules so that code changes are > minimized when (if) the package becomes part of the standard > distribution. Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy sandbox, but it probably could be transferred to the numpy sandbox. There's still no particular decision I'm aware of about putting it the official release. More feedback is probably needed from users. From bsouthey at gmail.com Wed Dec 12 09:42:12 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 12 Dec 2007 08:42:12 -0600 Subject: [SciPy-user] Correlation coefficients In-Reply-To: References: Message-ID: Hi, What exactly do you want as an outcome? In particular, is there a simpler outcome such as only the covariance? Are you correlating one array with each array of the pool (or against the pool average)? Or correlating each array in the pool with each other? You may only need the covariance or crossproduct if the variance of each array are essentially the same or covariance is sufficient. 
In that case an array-by-array multiplication (pool as a multi-dimensional array) will be sufficient. If not, create your own correlation function where you store the mean and variance of each array in the pool so you only compute it once. Regards Bruce On Dec 11, 2007 8:28 PM, Roger Herikstad wrote: > Hi all, > I was wondering if there is an efficient way of determining the degree of > correlation between an array and a pool of reference arrays (> 10,000). > I'm using corrcoef when the pool is small ( <1,000), but I have cases where > I need to compare against more than 10,000 arrays, in which case corrcoef is > slow. Any suggestions? Thanks! > > ~ Roger > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From odonnems at yahoo.com Wed Dec 12 12:02:45 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Wed, 12 Dec 2007 09:02:45 -0800 (PST) Subject: [SciPy-user] using STL in scipy weave inline Message-ID: <92324.71222.qm@web58005.mail.re3.yahoo.com> Vincent and Eric thank you for the information. Sorry I took a while to get back to this, but I was sidetracked while building another library that I needed for this program. I do have a couple more questions about headers and includes. I tried to use #include <map> at the beginning of the script originally but when I did this I got a bunch of errors about the <map>. My question: is there a way to list the headers at the beginning of the script rather than passing through the weave.inline code? I am interested in knowing this because I will need to do this for several things and I would like to minimize the information in the command line as much as possible. Secondly, can someone explain the py:: code? I have seen this but I got an error when I tried to include it in my code. I am not getting an error with the code Eric attached.
Third, can someone recommend any documentation or books that may help me with the interface between python and c++? For example, understanding the py::. I have not seen a whole lot online. Thanks again for the help and the code!! Michael ----- Original Message ---- From: eric jones To: SciPy Users List Sent: Monday, December 10, 2007 4:17:06 PM Subject: Re: [SciPy-user] using STL in scipy weave inline Michael ODonnell wrote: > OS Microsoft XP SP2 x86 > python 2.5.1 > compiler: mingw > > Does anyone have experience using multimap or the standard template > library within weave inline? I need to create a map that contains > numeric keys which are paired to a string, but I get an error when > declaring the multimap function. If I include <map> I get other errors > about namespace and if I don't then I get an error for not declaring the > multimap. Neither method works. There is no problem using the stl from weave. In fact, it uses std::string internally. You can use headers=["<map>"] as a keyword to the inline call to get map into the namespace. Once you do that, you'll need to either declare "using namespace std;" at the top of your code or use std::multimap and std::string to refer to the classes in the stl. Once you've done that, you'll still have some issues as multimap doesn't support the operator [], and your C++ strings have single quotes around them instead of double quotes. I've attached an example for your code that instead works using a standard python dictionary (use py::dict within weave).
The output is shown here: In [33]: run we_ex.py In [34]: res Out[34]: {0.050000000000000003: 'fESE', 0.10000000000000001: 'fSSE', 0.14999999999999999: 'fFlat', 0.25: 'fNNE', 0.40000000000000002: 'fSSW', 0.5: 'fWNW', 0.55000000000000004: 'fNNW', 1.1000000000000001: 'fND', 1.5: 'fWSW'} If you need to use the stl for some other reason, then refer to this to see how to do it: http://www.sgi.com/tech/stl/Multimap.html Good luck, eric > > TIA, > Michael > > Example of code: > > total = (winx * winy * 1.0) #Total number of cells to evaluate in kernel > > code = r""" > double ND, NNW, WNW, WSW,SSW, SSE, ESE, ENE, NNE, Flat; > ND = 22.0; > NNW = 11.0; > WNW = 10.0; > WSW = 30.0; > SSW = 8.0; > SSE = 2.0; > ESE = 1.0; > ENE = 5.0; > NNE = 5.0; > Flat = 3.0; > > //Calculate the frequency of the counts > fND = (ND / total); > fNNW = (NNW / total); > fWNW = (WNW / total); > fWSW = (WSW / total); > fSSW = (SSW / total); > fSSE = (SSE / total); > fESE = (ESE / total); > fENE = (ENE / total); > fNNE = (NNE / total); > fFlat = (Flat / total); > > //Create a map for pairs: frequency, string > multimap map_freq; //Create empty map > > map_freq[fND) = 'fND'; > map_freq[fNNW] = 'fNNW'; > map_freq[fWNW] = 'fWNW'; > map_freq[fWSW] = 'fWSW'; > map_freq[fSSW] = 'fSSW'; > map_freq[fSSE] = 'fSSE'; > map_freq[fESE] = 'fESE'; > map_freq[fENE] = 'fENE'; > map_freq[fNNE] = 'fNNE'; > map_freq[fFlat] = 'fFlat'; > """ > > weave.inline(code,['total'], type_converters=converters.blitz, > compiler='gcc') > > ------------------------------------------------------------------------ > Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try > it now. 
> > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -----Inline Attachment Follows----- from scipy import weave from scipy.weave import converters total = 20.0 # (winx * winy * 1.0) #Total number of cells to evaluate in kernel code = r""" double ND, NNW, WNW, WSW,SSW, SSE, ESE, ENE, NNE, Flat; ND = 22.0; NNW = 11.0; WNW = 10.0; WSW = 30.0; SSW = 8.0; SSE = 2.0; ESE = 1.0; ENE = 5.0; NNE = 5.0; Flat = 3.0; //Calculate the frequency of the counts double fND, fNNW, fWNW, fWSW,fSSW, fSSE, fESE, fENE, fNNE, fFlat; fND = (ND / total); fNNW = (NNW / total); fWNW = (WNW / total); fWSW = (WSW / total); fSSW = (SSW / total); fSSE = (SSE / total); fESE = (ESE / total); fENE = (ENE / total); fNNE = (NNE / total); fFlat = (Flat / total); //Create a map for pairs: frequency, string //std::multimap<double, std::string> map_freq; //Create empty map py::dict map_freq; map_freq[fND] = "fND"; map_freq[fNNW] = "fNNW"; map_freq[fWNW] = "fWNW"; map_freq[fWSW] = "fWSW"; map_freq[fSSW] = "fSSW"; map_freq[fSSE] = "fSSE"; map_freq[fESE] = "fESE"; map_freq[fENE] = "fENE"; map_freq[fNNE] = "fNNE"; map_freq[fFlat] = "fFlat"; return_val = map_freq; """ res = weave.inline(code,['total'], #headers= ['<map>'], compiler='gcc') -------------- next part -------------- An HTML attachment was scrubbed...
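For readers without a working weave setup, the same frequency table can be built in pure Python; the sketch below (mine, not part of the thread) copies the direction counts from eric's attached script. It also explains a detail of the output shown earlier: ENE and NNE share the count 5.0, so they map to the same key 0.25 and the later assignment wins, which is why only nine of the ten directions appear.

```python
# Direction counts from the attached weave example (total = 20.0).
counts = {
    "fND": 22.0, "fNNW": 11.0, "fWNW": 10.0, "fWSW": 30.0, "fSSW": 8.0,
    "fSSE": 2.0, "fESE": 1.0, "fENE": 5.0, "fNNE": 5.0, "fFlat": 3.0,
}
total = 20.0

# Map each frequency to its direction label, mirroring the py::dict code.
map_freq = {count / total: name for name, count in counts.items()}

# fENE and fNNE collide on the key 0.25; the later assignment (fNNE)
# wins, matching the nine-entry dictionary shown in the thread.
```

Note that using the frequency as the dictionary key silently drops ties; keying by direction name instead would keep all ten entries.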
URL: From matthew.brett at gmail.com Wed Dec 12 12:42:17 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 12 Dec 2007 17:42:17 +0000 Subject: [SciPy-user] maskedarray In-Reply-To: <200712120924.49482.pgmdevlist@gmail.com> References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: <1e2af89e0712120942t65a88f21xf5d83aa1f9966d9a@mail.gmail.com> Hi, > Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy > sandbox, but it probably could be transferred to the numpy sandbox. There's > still no particular decision I'm aware of about putting it in the official > release. More feedback is probably needed from users. Perhaps by making the shift in SVN? (!) Matthew From matthieu.brucher at gmail.com Wed Dec 12 13:01:14 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 12 Dec 2007 19:01:14 +0100 Subject: [SciPy-user] using STL in scipy weave inline In-Reply-To: <92324.71222.qm@web58005.mail.re3.yahoo.com> References: <92324.71222.qm@web58005.mail.re3.yahoo.com> Message-ID: 2007/12/12, Michael ODonnell : > > I tried to use #include <map> at the beginning of the script originally > but when I did this I got a bunch of errors about the <map> header. My question: is > there a way to list the headers at the beginning of the script rather than > passing them through the weave.inline code? I am interested in knowing this > because I will need to do this for several things and I would like to > minimize the information in the command line as much as possible. > You can't include it in your code directly as the code is in fact put in another function. The cleanest way is to use the headers keyword. Secondly, can someone explain the py:: code? I have seen this, but I got an > error when I tried to include it in my code. I am not getting an error with > the code Eric attached. > py:: is the namespace where the wrappers between Python and C++ objects are made.
Weave seems to use the SCXX package according to the source code (but I'm not sure it is the real package). Third, can someone recommend any documentation or books that may help me > with the interface between python and c++? For example, understanding the > py::. I have not seen a whole lot online. > There is no such book at the moment. There are tutorials for SWIG, fewer for Boost.Python, but for weave, I don't know. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Dec 12 14:17:30 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 12 Dec 2007 12:17:30 -0700 Subject: [SciPy-user] maskedarray In-Reply-To: <200712120924.49482.pgmdevlist@gmail.com> References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: On Dec 12, 2007 7:24 AM, Pierre GM wrote: > On Tuesday 11 December 2007 15:48:01 Tom Johnson wrote: > > What is the status regarding 'maskedarray'? When will this become > > part (replace ma) of the standard distribution? Also, what is the > > recommended way to import sandbox modules so that code changes are > > minimized when (if) the package becomes part of the standard > > distribution. > > > Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy > sandbox, but it probably could be transferred to the numpy sandbox. There's > still no particular decision I'm aware of about putting it in the official > release. More feedback is probably needed from users. Well, I'm certainly not one of the leaders, but it's probably worth mentioning that this weekend Jarrod is organizing a small sprint at UC Berkeley to try to flush out a number of things for numpy/scipy.
And last weekend John Hunter and I taught a workshop at NCAR here in Boulder, where they expressed a LOT of interest in seeing the MA situation resolved, and were willing to commit resources to making it happen (their codes rely on MA throughout). So I added MA to the sprint todo list already, counting on NCAR's resources once a decision is made. I'd meant to mention it here and just hadn't had the time for it. So it would be great if perhaps we could be in touch over the weekend (Fri/Sat/Sun) to touch bases on this at some point, to get your feedback. If you have skype access, perhaps you could email me directly and we can then set up a time(s) when a brief skype conversation might be feasible. Cheers, f From stefan at sun.ac.za Wed Dec 12 15:02:37 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 12 Dec 2007 22:02:37 +0200 Subject: [SciPy-user] maskedarray In-Reply-To: References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: <20071212200237.GB14894@mentat.za.net> On Wed, Dec 12, 2007 at 12:17:30PM -0700, Fernando Perez wrote: > On Dec 12, 2007 7:24 AM, Pierre GM wrote: > > On Tuesday 11 December 2007 15:48:01 Tom Johnson wrote: > > > What is the status regarding 'maskedarray'? When will this become > > > part (replace ma) of the standard distribution? Also, what is the > > > recommended way to import sandbox modules so that code changes are > > > minimized when (if) the package becomes part of the standard > > > distribution. > > > > > > Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy > > sandbox, but it probably could be transferred to the numpy sandbox. There's > > still no particular decision I'm aware of about putting it the official > > release. More feedback is probably needed from users. > > Well, I'm certainly not one of the leaders, but it's probably worth > mentioning that this weekend Jarrod is organizing a small sprint at UC > Berkeley to try to flush out a number of things for numpy/scipy. 
And > last weekend John Hunter and I taught a workshop at NCAR here in > Boulder, where they expressed a LOT of interest in seeing the MA > situation resolved, and were willing to commit resources to making it > happen (their codes rely on MA throughout). So I added MA to the > sprint todo list already, counting on NCAR's resources once a decision > is made. I'd meant to mention it here and just hadn't had the time > for it. If you guys organise sprints, please let the rest of us know so that we can work together online (if possible). Thanks! Stéfan From bsouthey at gmail.com Wed Dec 12 16:06:25 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 12 Dec 2007 15:06:25 -0600 Subject: [SciPy-user] maskedarray In-Reply-To: References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: Hi, This has been raised on this list in the past. IIRC the main issue is one of sufficient testing, particularly by users. Of course this is a little like what comes first, as people probably just use ma as it is part of numpy. I would like to see it in but I don't use it heavily enough to sufficiently comment on it. Regards Bruce On Dec 12, 2007 1:17 PM, Fernando Perez wrote: > On Dec 12, 2007 7:24 AM, Pierre GM wrote: > > On Tuesday 11 December 2007 15:48:01 Tom Johnson wrote: > > > What is the status regarding 'maskedarray'? When will this become > > > part (replace ma) of the standard distribution? Also, what is the > > > recommended way to import sandbox modules so that code changes are > > > minimized when (if) the package becomes part of the standard > > > distribution. > > > > > > Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy > > sandbox, but it probably could be transferred to the numpy sandbox. There's > > still no particular decision I'm aware of about putting it the official > > release. More feedback is probably needed from users.
> > Well, I'm certainly not one of the leaders, but it's probably worth > mentioning that this weekend Jarrod is organizing a small sprint at UC > Berkeley to try to flush out a number of things for numpy/scipy. And > last weekend John Hunter and I taught a workshop at NCAR here in > Boulder, where they expressed a LOT of interest in seeing the MA > situation resolved, and were willing to commit resources to making it > happen (their codes rely on MA throughout). So I added MA to the > sprint todo list already, counting on NCAR's resources once a decision > is made. I'd meant to mention it here and just hadn't had the time > for it. > > So it would be great if perhaps we could be in touch over the weekend > (Fri/Sat/Sun) to touch bases on this at some point, to get your > feedback. If you have skype access, perhaps you could email me > directly and we can then set up a time(s) when a brief skype > conversation might be feasible. > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From travis at enthought.com Wed Dec 12 16:21:29 2007 From: travis at enthought.com (Travis Vaught) Date: Wed, 12 Dec 2007 15:21:29 -0600 Subject: [SciPy-user] maskedarray In-Reply-To: References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: On Dec 12, 2007, at 1:17 PM, Fernando Perez wrote: >> ... > > Well, I'm certainly not one of the leaders, but it's probably worth > mentioning that this weekend Jarrod is organizing a small sprint at UC > Berkeley to try to flush out a number of things for numpy/scipy. And > last weekend John Hunter and I taught a workshop at NCAR here in > Boulder, where they expressed a LOT of interest in seeing the MA > situation resolved, and were willing to commit resources to making it > happen (their codes rely on MA throughout). 
So I added MA to the > sprint todo list already, counting on NCAR's resources once a decision > is made. I'd meant to mention it here and just hadn't had the time > for it. > > So it would be great if perhaps we could be in touch over the weekend > (Fri/Sat/Sun) to touch bases on this at some point, to get your > feedback. If you have skype access, perhaps you could email me > directly and we can then set up a time(s) when a brief skype > conversation might be feasible. > > Cheers, > > f Just to remind folks, there is a scipy channel at irc.freenode.net as well...that might be nice to allow more folks to participate, and to have an archive of the conversation. From fie.pye at atlas.cz Wed Dec 12 16:46:09 2007 From: fie.pye at atlas.cz (Pye Fie) Date: Wed, 12 Dec 2007 21:46:09 GMT Subject: [SciPy-user] Building VTK Message-ID: Hi Matthieu. Thank you for your response. I posted my question on the VTK ML first. http://public.kitware.com/pipermail/vtkusers/2007-December/093627.html Unfortunately I haven't received any answer yet. You asked about my compiler. CentOS 5.0 is compiled with gcc 4.1. I also installed gcc 4.2 and gcc 3.4.6 on the PC. I would like to have a computational environment based on Python and its modules. I compiled Python and the modules IPython, SciPy, NumPy, tables, matplotlib, wxpython ......... and libraries such as BLAS, LAPACK, ATLAS, HDF5, ........ with gcc3.4.6. Now I continue with compiling VTK with gcc3.4.6 too. More information about the compilation setup is in the file CMakeCache.txt. Best regards. Fie pye Od: Matthieu Brucher Přijato: 11.12.2007 21:18:04 Předmět: Re: [SciPy-user] Building VTK Besides, you tell us you use GCC 3.4.6 but in your description, it is GCC 4.2... The issue may be solved by linking against GCC libraries, but again, the VTK ML may be a better place for the solution. Matthieu 2007/12/11, Matthieu Brucher : Hi, This issue is related to a C++ problem in VTK, not Python, so we can't say much. I advise you to post this on the VTK ML.
Matthieu 2007/12/11, Pye Fie < fie.pye at atlas.cz>:Hello. I would like to build and install VTK and MayaVi2. PC: 2x Dual-core Opteron 275 OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 OS compiler: gcc version 4.2.0, gfortran I have already built and installed python2.5.1 and other module with compiler gcc3.4.6 . Now I can't build VTK. I am building with gcc3.4.6. Building stops on the following error: [ 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkZLibDataCompressorTcl.o [ 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkIOTCLInit.o Linking CXX shared library ../bin/libvtkIOTCL.so [ 74%] Built target vtkIOTCL Scanning dependencies of target vtkParseOGLExt [ 74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/ParseOGLExt.o [ 74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o Linking CXX executable ../../bin/vtkParseOGLExt CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function `std::__verify_grouping(char const*, unsigned long, std::basic_string std::char_traits, std::allocator > const&)': Tokenizer.cxx:(.text+0x19): undefined reference to `std::basic_string, std::allocator >::size() const' Tokenizer.cxx:(.text+0x70): undefined reference to `std::basic_string std::char_traits, std::allocator >::operator[](unsigned long) const' Tokenizer.cxx : (.text+0xb0): undefined reference to `std::basic_string, std::allocator >::operator[](unsigned long) const' Tokenizer.cxx:(.text+0xdd): undefined reference to `std::basic_string, std::allocator >::operator[](unsigned long) const' CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function `Tokenizer::Tokenizer(char const*, char const*)': Tokenizer.cxx:(.text+0x12a): undefined reference to `std::allocator::allocator()' Tokenizer.cxx:(.text+0x13b): undefined reference to `std::basic_string, std::allocator >::basic_string(char const*, std::allocator const&)' Tokenizer.cxx:(.text+0x14e): undefined reference to `std::allocator::~allocator()' 
Tokenizer.cxx:(.text+0x164): undefined reference to `std::allocator::~allocator()' Tokenizer.cxx:(.text+0x16d): undefined reference to `std::allocator::allocator()' Tokenizer.cxx:(.text+0x182): undefined reference to `std::basic_string, std::allocator >::basic_string(char const*, std::allocator const&)' The list continues. It seems that there is something wrong with OpenGL but I can't find what is wrong. Could anybody help me? I have attached more detailed information about the VTK configuration, building output and building script. Best regards fie pye ------------------------------------------ http://search.atlas.cz/ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher ------------------------------------------ http://search.atlas.cz/ From fperez.net at gmail.com Wed Dec 12 17:23:09 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 12 Dec 2007 15:23:09 -0700 Subject: [SciPy-user] maskedarray In-Reply-To: References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: On Dec 12, 2007 2:21 PM, Travis Vaught wrote: > Just to remind folks, there is a scipy channel at irc.freenode.net as > well...that might be nice to allow more folks to participate, and to > have an archive of the conversation. I wasn't even aware of this, as it's not mentioned anywhere I can find on the site... I've just added a brief note about it here: http://scipy.org/Mailing_Lists Though perhaps someone who uses IRC regularly (I don't) could expand it with a bit more useful info...
Cheers, f From pgmdevlist at gmail.com Wed Dec 12 17:25:50 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 12 Dec 2007 17:25:50 -0500 Subject: [SciPy-user] maskedarray In-Reply-To: References: Message-ID: <200712121725.52774.pgmdevlist@gmail.com> On Wednesday 12 December 2007 16:21:29 Travis Vaught wrote: > On Dec 12, 2007, at 1:17 PM, Fernando Perez wrote: > Just to remind folks, there is a scipy channel at irc.freenode.net as > well...that might be nice to allow more folks to participate, and to > have an archive of the conversation. Sounds like a good solution indeed, as I don't have Skype. I'll try to stick around my box this week-end, let me know what the best times would be. About the use of maskedarray: so far, no major complaints. It works great with matplotlib, it's the base for TimeSeries, it's overall far easier to subclass. Of course, parts could be ported to C for optimization, but that would bring another set of predicaments. From oliphant at enthought.com Wed Dec 12 18:55:00 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 12 Dec 2007 17:55:00 -0600 Subject: [SciPy-user] maskedarray In-Reply-To: <200712120924.49482.pgmdevlist@gmail.com> References: <200712120924.49482.pgmdevlist@gmail.com> Message-ID: <476074D4.4080003@enthought.com> Pierre GM wrote: > On Tuesday 11 December 2007 15:48:01 Tom Johnson wrote: > >> What is the status regarding 'maskedarray'? When will this become >> part (replace ma) of the standard distribution? Also, what is the >> recommended way to import sandbox modules so that code changes are >> minimized when (if) the package becomes part of the standard >> distribution. >> > > > Mmh, no answer from our leaders ? Well, for now maskedarray lives in the scipy > sandbox, but it probably could be transferred to the numpy sandbox. There's > still no particular decision I'm aware of about putting it the official > release. More feedback is probably needed from users. > I apologize for not chiming in here. 
It has been difficult to keep up with all the lists on which I should ostensibly know what is being discussed as I have changed jobs and am settling in to new responsibilities. It seems to me that it is time to move the scipy.sandbox.maskedarray implementation over to numpy.ma. I'm a little concerned about doing this before 1.1, but if the API has not changed, then it should be fine. If there is no opposition, then it seems that this can move forward before 1.0.5, which I would like to see in January. For the future 1.1 release (late 2008), I would like to see (in no particular priority order): 1) The MA made a C-subclass 2) The matrix object made a C-subclass (for speed). 3) The sparse matrix structure (but not all the algorithms) brought over into NumPy with the indexing matching the matrix indexing. 4) Weave brought over into NumPy The additions should be created in such a way that they can be disabled, so that perhaps a numpy-lite could be produced as well. Or, perhaps better, separate packages like numpy-devel, numpy-compat, numpy-sparse could be created. I'm looking forward to some good discussions over the weekend. -Travis O. From s.mientki at ru.nl Wed Dec 12 19:16:23 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 13 Dec 2007 01:16:23 +0100 Subject: [SciPy-user] problems with copying / comparing lists, containing arrays ? Message-ID: <476079D7.8050701@ru.nl> hello, I've a list that contains a (numpy) array in one or more locations. This list acts as a structure to hold all kinds of objects (an array is also such an object). This list is "self.Input_Value" in the code below. Initially this list is just [ None, None, None, None, None ] Now some part of the program may fill elements in this list, so it becomes something like this (in practice the array will be much larger) [None, array([ 0.
, 1.77857046, 2.86459363, 2.83518923, 1.70180685, -0.09423228, -1.85357884, -2.89117103, -2.80298683, -1.62336376]), None, None, None, None] I want to test if the array has changed, so I compare it to "self.Input_Value_Old", if self.Input_Value != self.Input_Value_Old: If changes are found, I copy Input_Value to Input_Value_Old self.Input_Value_Old = copy.copy ( self.Input_Value ) But the next comparison gives an exception, So the next piece of code # Check for any changes print 'Check Inputs',self.Input_Value print 'Check Inputs',self.Input_Value_Old try: if self.Input_Value != self.Input_Value_Old: print 'INPUTS CHANGED',self.Name Change = True else : print 'INPUT;',self.Name,self.Input_Value except: print 'ERROR' print 'DONE' Generates the following output: Check Inputs [None, array([ 0. , 1.77857046, 2.86459363, 2.83518923, 1.70180685, -0.09423228, -1.85357884, -2.89117103, -2.80298683, -1.62336376]), None, None, None, None] Check Inputs [None, array([ 0.00000000e+00, 1.76335576e+00, 2.85316955e+00, 2.85316955e+00, 1.76335576e+00, 3.67381906e-16, -1.76335576e+00, -2.85316955e+00, -2.85316955e+00, -1.76335576e+00]), None, None, None, None] ERROR DONE Each time some value in the array changes, the whole array is generated as a new array. I discovered that if I change the array in place (which is of course the final solution), the exception doesn't occur, but the change is also never detected, because probably the next statement is never True if self.Input_Value != self.Input_Value_Old: Now with in place changing of the array, and a deepcopy, the exception error occurs again and besides that a deepcopy has some trouble with other objects. Is there another way to detect the changes in the array, or is it better to set a flag when the array is changed ? 
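The failure Stef describes comes from using `!=` on a list that contains NumPy arrays: comparing two arrays yields an element-wise boolean array whose truth value is ambiguous, hence the exception. One way around it (a sketch of my own, not from the thread; the helper name `lists_equal` is hypothetical) is to walk the lists and compare arrays with `numpy.array_equal`, which always returns a plain bool:

```python
import numpy as np

def lists_equal(a, b):
    """Compare two lists whose elements may be None, scalars, or ndarrays."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        x_arr = isinstance(x, np.ndarray)
        y_arr = isinstance(y, np.ndarray)
        if x_arr or y_arr:
            # array_equal handles shape mismatches and returns a plain bool
            if not (x_arr and y_arr and np.array_equal(x, y)):
                return False
        elif x != y:
            return False
    return True
```

With a helper like this, `copy.copy` of the outer list plus in-place updates of the inner arrays can be detected reliably; alternatively, setting a "dirty" flag whenever the array is written avoids the comparison entirely.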
thanks, Stef Mientki From timmichelsen at gmx-topmail.de Wed Dec 12 19:35:40 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Thu, 13 Dec 2007 01:35:40 +0100 Subject: [SciPy-user] time series: Python vs. R Message-ID: Hello, first of all it's nice to hear that there are a few more people interested in getting on with time series. Unfortunately, I haven't found a lot of time to investigate the packages. Time's kinda short at the end of the year... I just finished installing them on my Ubuntu box with checkinstall. I stumbled onto the R time series package. It looks like R can really read structured data with ease. Where would you see the advantage of using Python when analysing time series over R? Where is R (still) stronger? I think in the long run Python will help me more because it would be easier to write filters for the measurement data and include it with all the other modules available... Would be nice to hear about your experiences. Kind regards, Timmie From gael.varoquaux at normalesup.org Wed Dec 12 19:49:20 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 13 Dec 2007 01:49:20 +0100 Subject: [SciPy-user] time series: Python vs. R In-Reply-To: References: Message-ID: <20071213004920.GG29192@clipper.ens.fr> On Thu, Dec 13, 2007 at 01:35:40AM +0100, Tim Michelsen wrote: > Where would you see the advantage of using Python when analysing time > series over R? Where is R (still) stronger? Full integration with other tools. With Python you can get very good 2D plotting, 3D plotting, web services, a network abstraction layer, numerical methods, ... I use Python in situations where I need a large variety of different tools. I could indeed aggregate different programs through pipes, shell calls, .... I've done it before; I am much happier with Python and being able to put all this in a consistent language rather than a patchwork. Obviously your demands depend on what you do.
However in the long run there are strong use cases for all-Python packages, or well-interfaced-to-Python packages, to implement the different operations needed in scientific computing. Cheers, Gaël From aisaac at american.edu Wed Dec 12 22:27:48 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 12 Dec 2007 22:27:48 -0500 Subject: [SciPy-user] maskedarray In-Reply-To: <476074D4.4080003@enthought.com> References: <200712120924.49482.pgmdevlist@gmail.com><476074D4.4080003@enthought.com> Message-ID: On Wed, 12 Dec 2007, "Travis E. Oliphant" apparently wrote: > 2) The matrix object made a C-subclass (for speed). This will probably be the last chance for such a change, so I again hope that consideration will be given to *one* change in the matrix object: iteration over a matrix should return arrays (instead of matrices). So if A is a matrix, A[1] should be an array, but A[1,:] should be a matrix. Obviously this is an argument from design rather than from functionality. Current behavior is not "natural". E.g., it makes it awkward to iterate over all elements in the "natural" way, which I claim is:: for row in A: for element in row: print element This example is just meant to illustrate in a simple way what is "odd" about the current behavior. (It is not meant to be an "argument" nor to suggest the current absence of simple ways to do the same thing---e.g., using A.A.) Whenever I am working with matrices, I notice some aspect of this "oddity", and it is annoying when so much else is quite aesthetic. Cheers, Alan Isaac PS For those who do not know, here is an example of the current behavior. (The following prints 2 matrices instead of 4 elements.) >>> A = N.mat('1 2;3 4') >>> for row in A: ... for element in row: ... print element ...
[[1 2]] [[3 4]] >>> From sgarcia at olfac.univ-lyon1.fr Thu Dec 13 04:54:24 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 13 Dec 2007 10:54:24 +0100 Subject: [SciPy-user] scipy.stats.percentileofscore : something strange Message-ID: <47610150.4090307@olfac.univ-lyon1.fr> Hi all, I found something strange in scipy.stats.percentileofscore In [1]: from scipy import * In [2]: a = rand(10000) In [3]: stats.percentileofscore(a,.2) Out[3]: 20.0157565073 This OK. In [4]: stats.percentileofscore(a,.0002) Out[4]: 102.898311442 This is strange !!!!! In [5]: stats.percentileofscore(a,1.4) --------------------------------------------------------------------------- Traceback (most recent call last) /home/sgarcia/ in () /usr/lib/python2.5/site-packages/scipy/stats/stats.py in percentileofscore(a, score, histbins, defaultlimits) 942 cumhist = np.cumsum(h*1, axis=0) 943 i = int((score - lrl)/float(binsize)) --> 944 pct = (cumhist[i-1]+((score-(lrl+binsize*i))/float(binsize))*h[i])/float(len(a)) * 100 945 return pct 946 : index out of bounds This does not work... Any idea, why ? Sam -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE T?l : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthieu.brucher at gmail.com Thu Dec 13 05:24:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Dec 2007 11:24:09 +0100 Subject: [SciPy-user] scipy.stats.percentileofscore : something strange In-Reply-To: <47610150.4090307@olfac.univ-lyon1.fr> References: <47610150.4090307@olfac.univ-lyon1.fr> Message-ID: 2007/12/13, Samuel GARCIA : > > Hi all, > I found something strange in scipy.stats.percentileofscore > In [1]: from scipy import * > In [2]: a = rand(10000) > In [3]: stats.percentileofscore(a,.2) > Out[3]: 20.0157565073 > This OK. > In [4]: stats.percentileofscore(a,.0002) > Out[4]: 102.898311442 > This is strange !!!!! > > This may be a bug In [5]: stats.percentileofscore(a,1.4) > --------------------------------------------------------------------------- > Traceback (most recent call last) > */home/sgarcia/* in () > /usr/lib/python2.5/site-packages/scipy/stats/stats.py in > percentileofscore(a, score, histbins, defaultlimits) > 942 cumhist = np.cumsum(h*1, axis=0) > 943 i = int((score - lrl)/float(binsize)) > --> 944 pct = > (cumhist[i-1]+((score-(lrl+binsize*i))/float(binsize))*h[i])/float(len(a)) > * 100 > 945 return pct > 946: index out of bounds > This does not work... > > I expect this to raise this exception, you cannot take a percentile outside [0, 1] (1.4 = 140%) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgarcia at olfac.univ-lyon1.fr Thu Dec 13 05:36:15 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 13 Dec 2007 11:36:15 +0100 Subject: [SciPy-user] scipy.stats.percentileofscore : something strange In-Reply-To: References: <47610150.4090307@olfac.univ-lyon1.fr> Message-ID: <47610B1F.5010502@olfac.univ-lyon1.fr> Matthieu Brucher a écrit : > > > 2007/12/13, Samuel GARCIA >: > > Hi all, > I found something strange in scipy.stats.percentileofscore > > In [1]: from scipy import * > > In [2]: a = rand(10000) > > In [3]: stats.percentileofscore(a,.2) > Out[3]: 20.0157565073 > This OK. > > > In [4]: stats.percentileofscore(a,.0002) > Out[4]: 102.898311442 > This is strange !!!!! > > > This may be a bug > > > In [5]: stats.percentileofscore(a,1.4) > --------------------------------------------------------------------------- > Traceback (most recent call last) > > //home/sgarcia// in () > > /usr/lib/python2.5/site-packages/scipy/stats/stats.py in > percentileofscore(a, score, histbins, defaultlimits) > 942 cumhist = np.cumsum(h*1, axis=0) > 943 i = int((score - lrl)/float(binsize)) > > --> 944 pct = > (cumhist[i-1]+((score-(lrl+binsize*i))/float(binsize))*h[i])/float(len(a)) > * 100 > 945 return pct > 946 > > : index out of bounds > > This does not work... > > > I expect this to raise this exception, you cannot take a percentile > outside [0, 1] (1.4 = 140%) I think that it should return 100 if superior to the right limit and 0 if inferior to the left limit. No ?
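Samuel's suggested behaviour — return 100 above the data and 0 below it — falls out naturally if the percentile is computed from ranks instead of a histogram. A minimal sketch (my own helper, not the scipy implementation, and ignoring ties at exactly `score`):

```python
import numpy as np

def percentile_of_score(a, score):
    """Percentage of values in `a` strictly below `score`."""
    a = np.asarray(a)
    return 100.0 * np.count_nonzero(a < score) / a.size
```

For `a = rand(10000)` this gives 100.0 for a score of 1.4 and 0.0 for a negative score, with no index error and no histogram-binning artifacts like the 102.9 seen above.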
> Matthieu
> --
> French PhD student
> Website : http://matthieu-brucher.developpez.com/
> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn : http://www.linkedin.com/in/matthieubrucher
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Laboratoire de Neurosciences Sensorielles, Comportement, Cognition.
CNRS - UMR5020 - Universite Claude Bernard LYON 1
Equipe logistique et technique
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xavier.barthelemy at cmla.ens-cachan.fr  Thu Dec 13 05:48:43 2007
From: xavier.barthelemy at cmla.ens-cachan.fr (Xavier Barthelemy)
Date: Thu, 13 Dec 2007 11:48:43 +0100
Subject: [SciPy-user] problem in reading a indices
In-Reply-To: 
References: <47610150.4090307@olfac.univ-lyon1.fr>
Message-ID: <47610E0B.20502@cmla.ens-cachan.fr>

Hi all,

I'm going mad, because I can't see what's wrong. I am building a GUI to
plot some data, so let's have a look at what's wrong: in my code I have
a variable named choice[i].current, which is the current selection of
the i-th Listbox object. It is a tuple with one element.
So when I write

print type(i),type(choice[i].current)
I get: int and tuple

print type(i),type(choice[i].current[0])
I get: int and str

print type(i),type(int(choice[i].current[0]))
I get: int and int

But when I index another array with these indices,

ArrayWithData[i,int(choice[i].current[0])]

I get the following error: TypeError: list indices must be integers

So I tried an intermediate value, because sometimes a one-liner fails
where an intermediate step works:

value=int(choice[i].current[0])
ArrayWithData[i,value]

I get the same error and I don't understand why. What's wrong? Does
anyone have an idea?

Xavier

pm: print type(ArrayWithData), ArrayWithData gives me
[array([ 2.01, 5.01]),...]

From rgold at lsw.uni-heidelberg.de  Thu Dec 13 05:20:57 2007
From: rgold at lsw.uni-heidelberg.de (rgold at lsw.uni-heidelberg.de)
Date: Thu, 13 Dec 2007 11:20:57 +0100 (CET)
Subject: [SciPy-user] read ascii data
In-Reply-To: <200712121157.14457.cscheit@lstm.uni-erlangen.de>
References: <200712121157.14457.cscheit@lstm.uni-erlangen.de>
Message-ID: <36424.147.142.111.40.1197541257.squirrel@srv0.lsw.uni-heidelberg.de>

Hi Christoph,

you are probably restricting yourself to reading only certain rows of
your ASCII file because of performance issues; if that is not the case,
you may want to ignore this email!

Note that even if the lines you want are at the very end of the file,
you don't gain much speed: an ASCII file has no line index, so most of
the file has to be read anyway. If you know the size of your array, it
may therefore be faster to read the whole ASCII file as one single
string and then select the values you want using split(), i.e.:

LIST=[float(D) for D in FILE.read().split()]

It is simple, but very fast! However, I suggest storing your data in
binary format; handling it from Python is actually very simple.

Cheers,
Roman

> Hi All,
>
> I have a file containing ascii data related to a block-structured grid.
> I could use loadtxt, but I would like to have the possibility to read
> only a certain range of rows out of the file. Let's say I have ten
> blocks in my grid and each block has 10 rows of data.
> So in one read I would like to read, for instance, line 30 to line 39.
> I can use skiprows, but how can I give loadtxt an upper limit?
> Is there something already available or do I have to do it on my own?
>
> Thanks in advance,
>
> Christoph
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From matthieu.brucher at gmail.com  Thu Dec 13 05:53:40 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 13 Dec 2007 11:53:40 +0100
Subject: [SciPy-user] scipy.stats.percentileofscore : something strange
In-Reply-To: <47610B1F.5010502@olfac.univ-lyon1.fr>
References: <47610150.4090307@olfac.univ-lyon1.fr> <47610B1F.5010502@olfac.univ-lyon1.fr>
Message-ID: 

2007/12/13, Samuel GARCIA :
>
> Matthieu Brucher a écrit :
> > 2007/12/13, Samuel GARCIA :
> > > Hi all,
> > > I found something strange in scipy.stats.percentileofscore
> > > In [1]: from scipy import *
> > > In [2]: a = rand(10000)
> > > In [3]: stats.percentileofscore(a,.2)
> > > Out[3]: 20.0157565073
> > > This is OK.
> > > In [4]: stats.percentileofscore(a,.0002)
> > > Out[4]: 102.898311442
> > > This is strange !!!!!
> > > This may be a bug

If you put 0 for the score, a number greater than 100 is returned, so
this is a bug. But for 0.0002 your percentile will be null; you can
increase the histbins value (default=10) for better precision.

> > I expect this to raise an exception: you cannot take a percentile
> > outside [0, 1] (1.4 = 140%)
>
> I think it should return 100 if the score is above the right limit and
> 0 if it is below the left limit.
> No ?

My mistake, I was confused by the interval. I think you are right, but
perhaps someone else can explain this behaviour.
I think you should open a ticket on the TRAC to keep track of this bug. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Dec 13 07:06:47 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Dec 2007 13:06:47 +0100 Subject: [SciPy-user] Rutherford Boeing format Message-ID: Hi all, I was wondering if someone has written a function to import matrices given in the Rutherford Boeing format. Nils Reference: http://www.cise.ufl.edu/research/sparse/RBio/ From nwagner at iam.uni-stuttgart.de Thu Dec 13 07:57:25 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Dec 2007 13:57:25 +0100 Subject: [SciPy-user] Creating a sparse matrix Message-ID: Hi all, How can I build a sparse matrix from the following array ? 
>>> data array([['1', '1', '1.7244067583090530E+05'], ['1', '2', '4.7526228631699840E+04'], ['2', '2', '1.7244064192412970E+05'], ['3', '3', '2.7456944493758570E+02'], ['1', '4', '2.3763114315849920E+05'], ['2', '4', '8.6220320962064820E+05'], ['3', '4', '5.9378661309939430E+02'], ['4', '4', '4.3948451437807490E+06'], ['1', '5', '-8.6220337915452610E+05'], ['2', '5', '-2.3763114315849920E+05'], ['3', '5', '-5.9378754413417760E+02'], ['4', '5', '-1.1952119204729520E+06'], ['5', '5', '4.3948459988011970E+06'], ['6', '6', '4.2234547275787270E+03'], ['1', '7', '-1.6598232969237920E+04'], ['2', '7', '-6.4128987931960360E+03'], ['4', '7', '-3.2064493965980180E+04'], ['5', '7', '8.2991164846189540E+04'], ['7', '7', '1.7244065067439350E+05'], ['1', '8', '6.4129110657815880E+03'], ['2', '8', '-6.0501784478906850E+04'], ['4', '8', '-3.0250892239453440E+05'], ['5', '8', '-3.2064555328907960E+04'], ['7', '8', '-4.7526234795758550E+04'], ['8', '8', '1.7244065036281190E+05'], ['3', '9', '-6.6701524177624400E+01'], ['4', '9', '1.0014969892580470E+03'], ['5', '9', '1.8006472158568300E+02'], ['9', '9', '2.7456950540497500E+02'], ['1', '10', '3.2064555328907960E+04'], ['2', '10', '-3.0250892239453440E+05'], ['3', '10', '-1.0014968431605310E+03'], ['4', '10', '-1.4981909287503440E+06'], ['5', '10', '-1.5690996463630880E+05'], ['7', '10', '-2.3763117397879280E+05'], ['8', '10', '8.6220325181405960E+05'], ['9', '10', '-5.9378786697859830E+02'], ['10', '10', '4.3948453583907450E+06'], ['1', '11', '8.2991164846189560E+04'], ['2', '11', '3.2064493965980140E+04'], ['3', '11', '1.8006434238808650E+02'], ['4', '11', '1.5690966299256120E+05'], ['5', '11', '-4.2216699312324380E+05'], ['7', '11', '-8.6220325337196720E+05'], ['8', '11', '2.3763117397879260E+05'], ['9', '11', '-5.9378773737342270E+02'], ['10', '11', '1.1952120829330800E+06'], ['11', '11', '4.3948453666102750E+06'], ['6', '12', '-1.0245631116652000E+03'], ['12', '12', '4.2234546752310640E+03'], ['1', '13', 
'-2.1562536183412980E+04'], ['2', '13', '-1.3326854263605380E+04'], ['4', '13', '-6.6634271318026890E+04'], ['5', '13', '1.0781268091706490E+05'], ['7', '13', '-6.0501787019015210E+04'], ['8', '13', '-6.4129083147729780E+03'], ['10', '13', '-3.2064541573864920E+04'], ['11', '13', '3.0250893509507610E+05'], ['13', '13', '1.7244065108409200E+05'], ['1', '14', '-1.3326856939548480E+04'], ['2', '14', '-2.1562538882437520E+04'], ['4', '14', '-1.0781269441218760E+05'], ['5', '14', '6.6634284697742330E+04'], ['7', '14', '6.4129066592753080E+03'], ['8', '14', '-1.6598230710213180E+04'], ['10', '14', '-8.2991153551065850E+04'], ['11', '14', '-3.2064533296376550E+04'], ['13', '14', '4.7526232819775960E+04'], ['14', '14', '1.7244065558155810E+05'], ['3', '15', '3.9071327498923200E+00'], ['4', '15', '1.1208665237160810E+02'], ['5', '15', '-1.1208648718752250E+02'], ['9', '15', '-6.6701524986800910E+01'], ['10', '15', '1.8006454779629130E+02'], ['11', '15', '-1.0014969364802770E+03'], ['15', '15', '2.7456952836064130E+02'], ['1', '16', '-6.6634284697742330E+04'], ['2', '16', '-1.0781269441218760E+05'], ['3', '16', '-1.1208653552827830E+02'], ['4', '16', '-5.3738372531588020E+05'], ['5', '16', '3.3136527542132390E+05'], ['7', '16', '3.2064533296376570E+04'], ['8', '16', '-8.2991153551065860E+04'], ['9', '16', '1.8006450558919260E+02'], ['10', '16', '-4.2216693696279110E+05'], ['11', '16', '-1.5690985700615200E+05'], ['13', '16', '2.3763116409887990E+05'], ['14', '16', '8.6220327790779050E+05'], ['15', '16', '-5.9378754489808300E+02'], ['16', '16', '4.3948454932894830E+06'], ['1', '17', '1.0781268091706490E+05'], ['2', '17', '6.6634271318026870E+04'], ['3', '17', '1.1208660539636940E+02'], ['4', '17', '3.3136520639089480E+05'], ['5', '17', '-5.3738365971501340E+05'], ['7', '17', '3.0250893509507610E+05'], ['8', '17', '3.2064541573864890E+04'], ['9', '17', '1.0014969080580460E+03'], ['10', '17', '1.5690989812553410E+05'], ['11', '17', '-1.4981909926673520E+06'], ['13', '17', 
'-8.6220325542046030E+05'], ['14', '17', '-2.3763116409887980E+05'], ['15', '17', '5.9378787780320290E+02'], ['16', '17', '-1.1952120315502540E+06'], ['17', '17', '4.3948453781456250E+06'], ['6', '18', '-4.0834357736871400E+02'], ['12', '18', '-1.0245631207891270E+03'], ['18', '18', '4.2234546579613690E+03'], ['1', '19', '-6.0501796429481100E+04'], ['2', '19', '6.4128982934224380E+03'], ['4', '19', '3.2064491467112190E+04'], ['5', '19', '3.0250898214740550E+05'], ['7', '19', '-2.1562538882438510E+04'], ['8', '19', '1.3326853882609100E+04'], ['10', '19', '6.6634269413045520E+04'], ['11', '19', '1.0781269441219260E+05'], ['13', '19', '-1.6598239767392180E+04'], ['14', '19', '-6.4129061595026810E+03'], ['16', '19', '-3.2064530797513400E+04'], ['17', '19', '8.2991198836960840E+04'], ['19', '19', '1.7244064683129310E+05'], ['1', '20', '-6.4129127399249150E+03'], ['2', '20', '-1.6598242026418230E+04'], ['4', '20', '-8.2991210132091170E+04'], ['5', '20', '3.2064563699624550E+04'], ['7', '20', '1.3326857320545740E+04'], ['8', '20', '-2.1562536183413320E+04'], ['10', '20', '-1.0781268091706660E+05'], ['11', '20', '-6.6634286602728680E+04'], ['13', '20', '6.4129099889152090E+03'], ['14', '20', '-6.0501798969589890E+04'], ['16', '20', '-3.0250899484794950E+05'], ['17', '20', '-3.2064549944576080E+04'], ['19', '20', '-4.7526226655715920E+04'], ['20', '20', '1.7244067655218550E+05'], ['3', '21', '-6.6701552188476970E+01'], ['4', '21', '-1.8006429542064950E+02'], ['5', '21', '-1.0014970368071590E+03'], ['9', '21', '3.9071327498907450E+00'], ['10', '21', '-1.1208655989768520E+02'], ['11', '21', '-1.1208657177767270E+02'], ['15', '21', '-6.6701552997659000E+01'], ['16', '21', '-1.0014972526987650E+03'], ['17', '21', '-1.8006453997854880E+02'], ['21', '21', '2.7456946789324640E+02'], ['1', '22', '-3.2064563699624570E+04'], ['2', '22', '-8.2991210132091170E+04'], ['3', '22', '-1.8006471376798420E+02'], ['4', '22', '-4.2216722298368720E+05'], ['5', '22', '1.5691000662161880E+05'], 
['7', '22', '6.6634286602728650E+04'], ['8', '22', '-1.0781268091706660E+05'], ['9', '22', '1.1208653268623420E+02'], ['10', '22', '-5.3738365971502150E+05'], ['11', '22', '-3.3136528414261740E+05'], ['13', '22', '3.2064549944576050E+04'], ['14', '22', '-3.0250899484794950E+05'], ['15', '22', '1.0014971017047370E+03'], ['16', '22', '-1.4981912910366350E+06'], ['17', '22', '-1.5690994011081730E+05'], ['19', '22', '-2.3763113327857960E+05'], ['20', '22', '8.6220338276092740E+05'], ['21', '22', '5.9378755495883890E+02'], ['22', '22', '4.3948460185560810E+06'], ['1', '23', '3.0250898214740550E+05'], ['2', '23', '-3.2064491467112220E+04'], ['3', '23', '1.0014973054764940E+03'], ['4', '23', '-1.5690965052770850E+05'], ['5', '23', '-1.4981912271196160E+06'], ['7', '23', '1.0781269441219260E+05'], ['8', '23', '-6.6634269413045550E+04'], ['9', '23', '1.1208661612225140E+02'], ['10', '23', '-3.3136519766962600E+05'], ['11', '23', '-5.3738372531590490E+05'], ['13', '23', '8.2991198836960880E+04'], ['14', '23', '3.2064530797513400E+04'], ['15', '23', '-1.8006445862190810E+02'], ['16', '23', '1.5690984454132130E+05'], ['17', '23', '-4.2216716682320250E+05'], ['19', '23', '-8.6220323415646550E+05'], ['20', '23', '2.3763113327857960E+05'], ['21', '23', '5.9378642062413570E+02'], ['22', '23', '1.1952118690900900E+06'], ['23', '23', '4.3948452704599260E+06'], ['6', '24', '-1.0245630840369950E+03'], ['12', '24', '-4.0834357736871400E+02'], ['18', '24', '-1.0245630931609220E+03'], ['24', '24', '4.2234547103090340E+03']], dtype='|S23') data contains information about row, column and the corresponding entry. Any pointer would be appreciated. 
Thanks in advance

Nils

From cimrman3 at ntc.zcu.cz  Thu Dec 13 08:03:14 2007
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 13 Dec 2007 14:03:14 +0100
Subject: [SciPy-user] Creating a sparse matrix
In-Reply-To: 
References: 
Message-ID: <47612D92.4020408@ntc.zcu.cz>

Nils Wagner wrote:
> Hi all,
>
> How can I build a sparse matrix from the following array ?
> >>> data
> array([['1', '1', '1.7244067583090530E+05'],
>        ['1', '2', '4.7526228631699840E+04'],
>        ...
>        ['18', '24', '-1.0245630931609220E+03'],
>        ['24', '24', '4.2234547103090340E+03']],
>       dtype='|S23')
>
> data contains information about row, column and the corresponding entry.
>
> Any pointer would be appreciated.

You read the data from a file, right? You should convert the row/column
indices to integers and values to floats during the reading. Then you
would have your data ready for the COO matrix constructor.

r.

From nwagner at iam.uni-stuttgart.de  Thu Dec 13 08:12:51 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 13 Dec 2007 14:12:51 +0100
Subject: [SciPy-user] Creating a sparse matrix
In-Reply-To: <47612D92.4020408@ntc.zcu.cz>
References: <47612D92.4020408@ntc.zcu.cz>
Message-ID: 

On Thu, 13 Dec 2007 14:03:14 +0100 Robert Cimrman wrote:
>> You read the data from a file, right?

Exactly.

>> You should convert the row/column indices to integers and values to
>> floats during the reading.

How can I do that on-the-fly ?

>> Then you would have your data ready for the COO matrix constructor.
Nils From aisaac at american.edu Thu Dec 13 08:13:55 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 13 Dec 2007 08:13:55 -0500 Subject: [SciPy-user] problem in reading a indices In-Reply-To: <47610E0B.20502@cmla.ens-cachan.fr> References: <47610150.4090307@olfac.univ-lyon1.fr><47610E0B.20502@cmla.ens-cachan.fr> Message-ID: On Thu, 13 Dec 2007, Xavier Barthelemy apparently wrote: > ArrayWithData[i,int(choice[i].current[0])] I have the > following error: TypeError: list indices must be integers Looks like ArrayWithData is a list, not an array. Cheers, Alan Isaac From aisaac at american.edu Thu Dec 13 08:16:41 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 13 Dec 2007 08:16:41 -0500 Subject: [SciPy-user] problem in reading a indices In-Reply-To: <47610E0B.20502@cmla.ens-cachan.fr> References: <47610150.4090307@olfac.univ-lyon1.fr><47610E0B.20502@cmla.ens-cachan.fr> Message-ID: On Thu, 13 Dec 2007, Xavier Barthelemy apparently wrote: > ArrayWithData[i,int(choice[i].current[0])] > pm: > and print type(ArrayWithData), ArrayWithData gives me > [array([ 2.01, 5.01]),...] Sorry, missed your postscript. (What does pm mean?) So ArrayWithData is a list of arrays. Then try ArrayWithData[i][otherindex] since lists do not handle multiple indexes. Cheers, Alan Isaac From aisaac at american.edu Thu Dec 13 08:26:06 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 13 Dec 2007 08:26:06 -0500 Subject: [SciPy-user] Creating a sparse matrix In-Reply-To: References: <47612D92.4020408@ntc.zcu.cz> Message-ID: On Thu, 13 Dec 2007, Nils Wagner apparently wrote: > How can I do that on-the-fly ? That depends on the file format. But are you just asking if you can do the following? 
data = [] for row in open(filename,'r'): r0, r1, r2 = row.split() data.append( [int(r0), int(r1), float(r2)] ) Cheers, Alan Isaac From cimrman3 at ntc.zcu.cz Thu Dec 13 08:27:05 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 13 Dec 2007 14:27:05 +0100 Subject: [SciPy-user] Creating a sparse matrix In-Reply-To: References: <47612D92.4020408@ntc.zcu.cz> Message-ID: <47613329.4020603@ntc.zcu.cz> Nils Wagner wrote: > On Thu, 13 Dec 2007 14:03:14 +0100 > Robert Cimrman wrote: >> Nils Wagner wrote: >>> Hi all, >>> >>> How can I build a sparse matrix from the following array >>> ? >>>>>> data >>> array([['1', '1', '1.7244067583090530E+05'], >>> ['1', '2', '4.7526228631699840E+04'], >>> ... >>> ['18', '24', '-1.0245630931609220E+03'], >>> ['24', '24', '4.2234547103090340E+03']], >>> dtype='|S23') >>> >>> data contains information about row, column and the >>> corresponding entry. >>> >>> Any pointer would be appreciated. >> You read the data from a file, right? > > Exactly. > > You should convert >> the row/column >> indices to integers and values to floats during the >> reading. > > How can I do that on-the-fly ? Well, supposing the file is not too large: fd = open( ... ) rows, cols, vals = [], [], [] while 1: try: line = fd.readline() if (len( line ) == 0): break if len( line ) == 1: continue except EOFError: break line = line.split() ir, ic, val = int( line[0] ), int( line[1] ), float( line[2] ) rows.append( ir ) cols.append( ic ) vals.append( val ) Alternatively, you can try to use numpy.fromfile( ..., sep = ' ', dtype = numpy.float64 ) and then reshape it to (n, 3) and re-type the first two columns to ints. r. 
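Both reading loops above can be condensed into a short sketch. The sample triplets are taken from Nils's listing; the shift to 0-based indices is an assumption based on the file apparently using 1-based Fortran-style numbering, and scipy.sparse is only referenced in a comment:

```python
from io import StringIO

# Stand-in for a real file; in practice: open("matrix.txt")
SAMPLE = StringIO("""\
1 1 1.7244067583090530E+05
1 2 4.7526228631699840E+04
2 2 1.7244064192412970E+05
""")

rows, cols, vals = [], [], []
for line in SAMPLE:          # iterating a file yields lines; no EOFError needed
    parts = line.split()
    if len(parts) != 3:      # skip blank or malformed lines
        continue
    r, c, v = parts
    rows.append(int(r) - 1)  # 1-based file indices -> 0-based (assumption)
    cols.append(int(c) - 1)
    vals.append(float(v))

# The three lists go straight into the COO constructor:
#   from scipy.sparse import coo_matrix
#   A = coo_matrix((vals, (rows, cols)))
print(rows, cols)  # [0, 0, 1] [0, 1, 1]
```

As Robert notes, numpy.fromfile with sep=' ' followed by a reshape to (n, 3) is an alternative when the whole file is numeric.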
From xavier.barthelemy at cmla.ens-cachan.fr  Thu Dec 13 09:10:51 2007
From: xavier.barthelemy at cmla.ens-cachan.fr (Xavier Barthelemy)
Date: Thu, 13 Dec 2007 15:10:51 +0100
Subject: [SciPy-user] problem in reading a indices
In-Reply-To: 
References: <47610150.4090307@olfac.univ-lyon1.fr><47610E0B.20502@cmla.ens-cachan.fr>
Message-ID: <47613D6B.8060509@cmla.ens-cachan.fr>

Yesss, it's working!!! I spent far too much time on this; thanks a lot.
I did not know this point about indexing lists.

And by the way, pm is a latinicism for: post mailum ;)

Xavier

Alan G Isaac a écrit :
> On Thu, 13 Dec 2007, Xavier Barthelemy apparently wrote:
>> ArrayWithData[i,int(choice[i].current[0])]
>> pm:
>> and print type(ArrayWithData), ArrayWithData gives me
>> [array([ 2.01, 5.01]),...]
>
> Sorry, missed your postscript. (What does pm mean?)
> So ArrayWithData is a list of arrays. Then try
> ArrayWithData[i][otherindex]
> since lists do not handle multiple indexes.

From nwagner at iam.uni-stuttgart.de  Thu Dec 13 09:27:41 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 13 Dec 2007 15:27:41 +0100
Subject: [SciPy-user] Creating a sparse matrix
In-Reply-To: <47613329.4020603@ntc.zcu.cz>
References: <47612D92.4020408@ntc.zcu.cz> <47613329.4020603@ntc.zcu.cz>
Message-ID: 

On Thu, 13 Dec 2007 14:27:05 +0100 Robert Cimrman wrote:
> Nils Wagner wrote:
>>> You read the data from a file, right?
>>
>> Exactly.
>>> You should convert the row/column indices to integers and values to
>>> floats during the reading.
>>
>> How can I do that on-the-fly ?
>
> Well, supposing the file is not too large:
>
>     fd = open( ... )
>     rows, cols, vals = [], [], []
>     while 1:
>         try:
>             line = fd.readline()
>             if (len( line ) == 0): break
>             if len( line ) == 1: continue
>         except EOFError:
>             break
>         line = line.split()
>         ir, ic, val = int( line[0] ), int( line[1] ), float( line[2] )
>         rows.append( ir )
>         cols.append( ic )
>         vals.append( val )
>
> Alternatively, you can try to use numpy.fromfile( ..., sep = ' ',
> dtype = numpy.float64 ) and then reshape it to (n, 3) and re-type the
> first two columns to ints.
>
> r.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Hi Robert,

Thank you very much for your code snippet ! How do I use the COO
constructor to build the matrix from rows, cols and vals ?

Cheers,

Nils

From strang at nmr.mgh.harvard.edu  Thu Dec 13 09:56:35 2007
From: strang at nmr.mgh.harvard.edu (Gary Strangman)
Date: Thu, 13 Dec 2007 09:56:35 -0500 (EST)
Subject: [SciPy-user] scipy.stats.percentileofscore : something strange
In-Reply-To: <47610B1F.5010502@olfac.univ-lyon1.fr>
References: <47610150.4090307@olfac.univ-lyon1.fr> <47610B1F.5010502@olfac.univ-lyon1.fr>
Message-ID: 

Yes, this is a bug. I believe there is no checking for edge or
out-of-bound cases. (At least there wasn't when it was first added to
scipy ...)

Gary

On Thu, 13 Dec 2007, Samuel GARCIA wrote:

> Matthieu Brucher a écrit :
>> 2007/12/13, Samuel GARCIA :
>>> Hi all,
>>> I found something strange in scipy.stats.percentileofscore
>>>
>>> In [1]: from scipy import *
>>> In [2]: a = rand(10000)
>>> In [3]: stats.percentileofscore(a,.2)
>>> Out[3]: 20.0157565073
>>> This is OK.
>>>
>>> In [4]: stats.percentileofscore(a,.0002)
>>> Out[4]: 102.898311442
>>> This is strange !!!!!
>>> This may be a bug
>>>
>>> In [5]: stats.percentileofscore(a,1.4)
>>> ---------------------------------------------------------------------------
>>> Traceback (most recent call last)
>>> /home/sgarcia/ in ()
>>> /usr/lib/python2.5/site-packages/scipy/stats/stats.py in
>>> percentileofscore(a, score, histbins, defaultlimits)
>>>     942     cumhist = np.cumsum(h*1, axis=0)
>>>     943     i = int((score - lrl)/float(binsize))
>>> --> 944     pct = (cumhist[i-1]+((score-(lrl+binsize*i))/float(binsize))*h[i])/float(len(a)) * 100
>>>     945     return pct
>>>     946
>>> : index out of bounds
>>>
>>> This does not work...
>>
>> I expect this to raise an exception: you cannot take a percentile
>> outside [0, 1] (1.4 = 140%)
>
> I think it should return 100 if the score is above the right limit and
> 0 if it is below the left limit.
> No ?
>
>> Matthieu
>> --
>> French PhD student
>> Website : http://matthieu-brucher.developpez.com/
>> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
>> LinkedIn : http://www.linkedin.com/in/matthieubrucher
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user

From dominique.orban at gmail.com  Thu Dec 13 10:03:58 2007
From: dominique.orban at gmail.com (Dominique Orban)
Date: Thu, 13 Dec 2007 10:03:58 -0500
Subject: [SciPy-user] Rutherford Boeing format
In-Reply-To: 
References: 
Message-ID: <8793ae6e0712130703k4e6f2a87qcab343a67841748b@mail.gmail.com>

On 12/13/07, Nils Wagner wrote:
> Hi all,
>
> I was wondering if someone has written a function to import matrices
> given in the Rutherford-Boeing format.

A while ago I wrote a function to import the Harwell-Boeing format. RB
is quite similar, so I expect a few changes will do the trick. My
function relies on Konrad Hinsen's FortranFormat module, which I also
attach below.
I haven't looked at this lately, so I hope it works for you. If it is
useful, I have no problem with including it in SciPy.

In the test function, the sparsetools module is part of NLPy and plots
a sparsity pattern. If you don't use NLPy, just replace that with any
other plot you like :)

Dominique

-------------- next part --------------
A non-text attachment was scrubbed...
Name: FortranFormat.py
Type: text/x-python-script
Size: 6117 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: hb.py
Type: text/x-python-script
Size: 2848 bytes
Desc: not available
URL:

From cimrman3 at ntc.zcu.cz  Thu Dec 13 10:45:43 2007
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 13 Dec 2007 16:45:43 +0100
Subject: [SciPy-user] Creating a sparse matrix
In-Reply-To: 
References: <47612D92.4020408@ntc.zcu.cz> <47613329.4020603@ntc.zcu.cz>
Message-ID: <476153A7.1020707@ntc.zcu.cz>

Nils Wagner wrote:
> Thank you very much for your code snippet !
> How do I use the COO constructor to build the
> matrix from rows, cols and vals ?

A = coo_matrix( (vals, (rows, cols)) )

r.

From cscheit at lstm.uni-erlangen.de  Thu Dec 13 11:56:15 2007
From: cscheit at lstm.uni-erlangen.de (Christoph Scheit)
Date: Thu, 13 Dec 2007 17:56:15 +0100
Subject: [SciPy-user] read ascii data
Message-ID: <200712131756.15777.cscheit@lstm.uni-erlangen.de>

Thank you all,

I tried the proposed possibilities, but I think I will actually go with
numpy.fromfile and switch to the binary format proposed by Roman.

--
============================
M.Sc.
Christoph Scheit
Institute of Fluid Mechanics
FAU Erlangen-Nuremberg
Cauerstrasse 4
D-91058 Erlangen
Phone: +49 9131 85 29508
============================

From nwagner at iam.uni-stuttgart.de  Thu Dec 13 12:07:17 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 13 Dec 2007 18:07:17 +0100
Subject: [SciPy-user] Creating a sparse matrix
In-Reply-To: <476153A7.1020707@ntc.zcu.cz>
References: <47612D92.4020408@ntc.zcu.cz> <47613329.4020603@ntc.zcu.cz> <476153A7.1020707@ntc.zcu.cz>
Message-ID: 

On Thu, 13 Dec 2007 16:45:43 +0100 Robert Cimrman wrote:
> Nils Wagner wrote:
>> Thank you very much for your code snippet !
>> How do I use the COO constructor to build the
>> matrix from rows, cols and vals ?
>
> A = coo_matrix( (vals, (rows, cols)) )

Many thanks. The file contains only the upper triangular part of the
symmetric matrix. How can I add the lower part to A ?

Nils

From clintonwallen at gmail.com  Thu Dec 13 12:49:39 2007
From: clintonwallen at gmail.com (Clinton Allen)
Date: Thu, 13 Dec 2007 09:49:39 -0800
Subject: [SciPy-user] Green's function
In-Reply-To: <447725.19101.qm@web34409.mail.mud.yahoo.com>
References: <447725.19101.qm@web34409.mail.mud.yahoo.com>
Message-ID: 

Try Wikipedia: http://en.wikipedia.org/wiki/Greens_function

--Clint Allen

On Dec 11, 2007 8:52 AM, Lou Pecora wrote:
> There is no particular special function called the Green Function
> (sometimes named Green's Function). Green functions are special
> solutions of certain partial differential equations (PDE), i.e. they
> are a certain class of functions. It's like saying Orthogonal
> Polynomials -- that's a class, not a particular type.
>
> Green functions have the property that when used in the PDE they
> yield a Dirac delta function. That makes them useful for the solution
> of the same PDE, but with a nonhomogeneous term added to the "other
> side."
>
> So each PDE has its own Green function. E.g.
> the Helmholtz equation: the Green function satisfies
>
>     D^2 G + k^2 G = delta(r - r')    (D^2 = Laplacian)
>
> and in 2D it is G(r,r') = i H_0(k|r'-r|)/4,
> where H_0 is a Hankel function and i = sqrt(-1).
>
> I suggest you look at some texts (maybe mathematical physics books)
> for the subject. Even a Google search will give you some information.
>
> -- Lou Pecora
>
> --- Natali Melgarejo Diaz wrote:
> > Hi to everyone, i'm new in this list... i have a question about a
> > formula i haven't found yet... do you know the "Green function"??
> > Is there anything similar in Scipy??, 'cause in Matlab there is one
> > but i haven't found it in Scipy yet.
> >
> > Thanks in advance to everyone ;))
> >
> > ********Natali********
>
> ____________________________________________________________________________________
> Never miss a thing. Make Yahoo your home page.
> http://www.yahoo.com/r/hs
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From dwf at cs.toronto.edu  Thu Dec 13 18:15:29 2007
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 13 Dec 2007 18:15:29 -0500
Subject: [SciPy-user] arbitrary submatrices of a sparse matrix?
Message-ID: 

Hi there,

I'm porting some MATLAB code that makes use of the MATLAB convention of
multiple index vectors to pull out an arbitrary submatrix, i.e.
x([4,6,2,7],[1,7,3]) would give you a 4x3 matrix containing rows
4,6,2,7 and cols 1,7,3 of the original matrix.

I've figured out how to do this for full matrices (either
x[[4,6,2,7],:][:,[1,7,3]] or by playing with take()), but I'm at a loss
for how to do it with a sparse matrix, if it's even possible (not sure
how MATLAB pulls it off).

Any ideas?
Thanks, David From roger.herikstad at gmail.com Thu Dec 13 19:17:36 2007 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Fri, 14 Dec 2007 08:17:36 +0800 Subject: [SciPy-user] scipy.weave.test() failures Message-ID: Hi all, I just built and installed the latest scipy release, but when I run scipy.weave.test() I get these results: >>> scipy.weave.test() Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'weave' >>> import scipy.weave >>> scipy.weave.test() Found 1 tests for scipy.weave.ast_tools Found 2 tests for scipy.weave.blitz_tools Found 9 tests for scipy.weave.build_tools Found 0 tests for scipy.weave.c_spec Found 26 tests for scipy.weave.catalog building extensions here: /opt/home/roger/.python25_compiled/m7 Found 1 tests for scipy.weave.ext_tools Found 0 tests for scipy.weave.inline_tools Found 74 tests for scipy.weave.size_check Found 16 tests for scipy.weave.slice_handler Found 3 tests for scipy.weave.standard_array_spec Warning: FAILURE importing tests for /opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_wx_spec.py:16: ImportError: No module named wxPython (in ) Found 0 tests for __main__ ...warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..EE.........E..........................................F..F............................................................. 
====================================================================== ERROR: check_add_function_ordered ( scipy.weave.tests.test_catalog.test_catalog) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_catalog.py", line 282, in check_add_function_ordered q.add_function('f',string.upper) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 649, in add_function self.add_function_persistent(code,function) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 665, in add_function_persistent cat = get_catalog(cat_dir,mode) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 300, in get_catalog sh = shelve.open(catalog_file,mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 225, in open return DbfilenameShelf(filename, flag, protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 209, in __init__ Shelf.__init__(self, anydbm.open(filename, flag), protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/bsddb/__init__.py", line 306, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: Test persisting a function in the default catalog ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_catalog.py", line 270, in check_add_function_persistent1 q.add_function_persistent('code',i) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 665, in add_function_persistent cat = get_catalog(cat_dir,mode) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 300, in get_catalog sh = shelve.open(catalog_file,mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 225, in open return DbfilenameShelf(filename, flag, protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 209, in __init__ Shelf.__init__(self, anydbm.open(filename, flag), protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/bsddb/__init__.py", line 306, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== ERROR: Shouldn't get a single file from the temp dir. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_catalog.py", line 198, in check_get_existing_files2 q.add_function('code', os.getpid) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 649, in add_function self.add_function_persistent(code,function) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 665, in add_function_persistent cat = get_catalog(cat_dir,mode) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/catalog.py", line 300, in get_catalog sh = shelve.open(catalog_file,mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 225, in open return DbfilenameShelf(filename, flag, protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/shelve.py", line 209, in __init__ Shelf.__init__(self, anydbm.open(filename, flag), protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/anydbm.py", line 83, in open return mod.open(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/dbhash.py", line 16, in open return bsddb.hashopen(file, flag, mode) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/bsddb/__init__.py", line 306, in hashopen d.open(file, db.DB_HASH, flags, mode) DBFileExistsError: (17, 'File exists -- __fop_file_setup: Retry limit (100) exceeded') ====================================================================== FAIL: check_1d_3 ( scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 168, in check_1d_3 self.generic_1d('a[-11:]') File 
"/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 135, in generic_1d self.generic_wrap(a,expr) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 127, in generic_wrap self.generic_test(a,expr,desired) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 123, in generic_test assert_array_equal(actual,desired, expr) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/numpy/testing/utils.py", line 223, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/opt/cluster/usr/lib/python2.5/site-packages/maci/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not equal a[-11:] (mismatch 100.0%) x: array([1]) y: array([10]) ====================================================================== FAIL: check_1d_6 ( scipy.weave.tests.test_size_check.test_dummy_array_indexing) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 174, in check_1d_6 self.generic_1d('a[:-11]') File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 135, in generic_1d self.generic_wrap(a,expr) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 127, in generic_wrap self.generic_test(a,expr,desired) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/scipy/weave/tests/test_size_check.py", line 123, in generic_test assert_array_equal(actual,desired, expr) File "/opt/cluster/usr/lib/python2.5/site-packages/maci/numpy/testing/utils.py", line 223, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/opt/cluster/usr/lib/python2.5/site-packages/maci/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg 
AssertionError:
Arrays are not equal
a[:-11]
(mismatch 100.0%)
 x: array([9])
 y: array([0])

----------------------------------------------------------------------
Ran 132 tests in 7.907s

FAILED (failures=2, errors=3)

This was run on an Intel Mac Pro running 10.5.1. If I do the same on an
Intel MacBook, running the same OS version, the test passes, while still
giving the warnings about the build_dir. Any ideas as to what could be
the problem?

Thanks!

~ Roger
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dwf at cs.toronto.edu Thu Dec 13 20:23:50 2007
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 13 Dec 2007 20:23:50 -0500
Subject: [SciPy-user] arbitrary submatrices of a sparse matrix?
In-Reply-To:
References:
Message-ID: <9406141D-E419-4B6C-A4B4-033ACC2E1DF2@cs.toronto.edu>

On 13-Dec-07, at 6:15 PM, David Warde-Farley wrote:

> Hi there,
>
> I'm porting some MATLAB code that makes use of the Matlab convention
> of multiple index vectors to pull out an arbitrary submatrix, i.e.
>
> x([4,6,2,7],[1,7,3]) would give you a 4x3 matrix containing rows
> 4,6,2,7 and cols 1,7,3 of the original matrix.
>
> I've figured out how to do this for full matrices (either
> x[[4,6,2,7],:][:,[1,7,3]] or by playing with take()) but I'm at a
> loss for how to do it with a sparse matrix, if it's even possible
> (not sure how matlab pulls it off).

Hmm, I've just noticed that get_submatrix() in SVN *claims* to do what
I want it to do, but actually doesn't. It works fine with slices, but
with tuples it always returns a 1x1. Is it supposed to do that?
(There's no test case for this particular functionality and the
documentation is, shall we say, sparse.)

If there's some easy way to permute the rows and columns of a CSR/CSC
matrix then I would be able to get the job done with slices... Is
there? Or is there some way to do this that I'm not seeing?
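In later SciPy releases the CSR/CSC classes gained fancy indexing, so an arbitrary row/column selection like the one asked about here can be written with two chained indexing steps (a sketch against the modern API; `sparse.random` and the index lists are illustrative):

```python
import numpy as np
from scipy import sparse

A = sparse.random(8, 8, density=0.5, format='csr', random_state=0)
rows = [4, 6, 2, 7]
cols = [1, 7, 3]

# Select rows first, then columns of the intermediate result;
# both steps stay sparse, so no dense copy is made.
sub = A[rows, :][:, cols]
print(sub.shape)  # (4, 3)
```

The result matches dense cross-product indexing with `numpy.ix_` on `A.toarray()`.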
Thanks again,

David

From odonnems at yahoo.com Fri Dec 14 01:13:45 2007
From: odonnems at yahoo.com (Michael ODonnell)
Date: Thu, 13 Dec 2007 22:13:45 -0800 (PST)
Subject: [SciPy-user] weave inline returning a multimap to a dict nested within c++
Message-ID: <665274.38999.qm@web58013.mail.re3.yahoo.com>

I would like to convert a multimap created within C++ to a Python
dictionary in order to return the value. I think I need to do it this
way because I have a multimap (potential for redundant keys) that needs
to be sorted. I am using scipy.weave.inline with converters.blitz. If
you think there is a better way, I am open to whatever may work,
although I prefer to stick with the weave.inline blitz converter --
this is all new, and there are a lot of components to wrapping C++ in
Python.

Anyhow, I tried this:
py::dict ckmap_freq = map_freq;
return_val = ckmap_freq;

And received this error:
error: conversion from `std::multimap, std::allocator > >'
to non-scalar type `py::dict' requested

I was able to successfully use a similar method but with a vector
instead. Therefore, it is likely I need to change the syntax.

thanks for any suggestions and help,
Michael

____________________________________________________________________________________
Looking for last minute shopping deals? Find them fast with Yahoo! Search.
http://tools.search.yahoo.com/newsearch/category.php?category=shopping
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthieu.brucher at gmail.com Fri Dec 14 01:59:33 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 14 Dec 2007 07:59:33 +0100
Subject: [SciPy-user] weave inline returning a multimap to a dict nested within c++
In-Reply-To: <665274.38999.qm@web58013.mail.re3.yahoo.com>
References: <665274.38999.qm@web58013.mail.re3.yahoo.com>
Message-ID:

Hi,

I don't think you can transform a multimap into a dict: a multimap can
have several times the same key, a dict cannot.
So first you might need to solve this problem. Matthieu 2007/12/14, Michael ODonnell : > > I would like to convert a multimap created within c++ to a python > dictionary in order to return the value. I think I need to do it this way > because I have a multimap (potential for redundant keys) that needs to be > sorted. I am using scipy.weave.inline with converters.blitz. If you think > there is a better way I am open to what ever may work. Although, I prefer to > stick with weave.inline blitz converter--This is all new and there are a > lot of components to wrapping c++ in python. > > Anyhow, I tired this: > py::dict ckmap_freq = map_freq; > return_val = ckmap_freq; > > And received this error: > error: conversion from `std::multimap std::less, std::allocator > >' > to non-scalar type `py::dict' requested > > I was able to successfully use a similar method but with a vector instead. > Therefore, it is likely I need to change the syntax. > > thanks for any suggestions and help, > Michael > > > ------------------------------ > Never miss a thing. Make Yahoo your homepage. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Fri Dec 14 03:33:41 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 14 Dec 2007 09:33:41 +0100 Subject: [SciPy-user] Rutherford Boeing format In-Reply-To: <8793ae6e0712130703k4e6f2a87qcab343a67841748b@mail.gmail.com> References: <8793ae6e0712130703k4e6f2a87qcab343a67841748b@mail.gmail.com> Message-ID: On Thu, 13 Dec 2007 10:03:58 -0500 "Dominique Orban" wrote: > On 12/13/07, Nils Wagner >wrote: >> Hi all, >> >> I was wondering if someone has written a function to >> import matrices given in the Rutherford Boeing format. > > A while ago I wrote a function to import Harwell-Boeing >format. RB is > quite similar so I expect few changed will do the trick. >My function > relies on Konrad Hinsen's FortranFormat module, which I >also attach > below. I haven't looked at this lately, so I hope it >works for you. Thank you very much for your reply. I will try it asap. If > this is useful, I have no problem including it in SciPy. +1 > Best wishes Nils > In the test function, the sparsetools module is part of >NLPy and plots > a sparsity pattern. If you don't use NLPy, just replace >that with any > other plot you like :) > > Dominique From cimrman3 at ntc.zcu.cz Fri Dec 14 05:21:05 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 14 Dec 2007 11:21:05 +0100 Subject: [SciPy-user] arbitrary submatrices of a sparse matrix? In-Reply-To: <9406141D-E419-4B6C-A4B4-033ACC2E1DF2@cs.toronto.edu> References: <9406141D-E419-4B6C-A4B4-033ACC2E1DF2@cs.toronto.edu> Message-ID: <47625911.1010400@ntc.zcu.cz> David Warde-Farley wrote: > Hmm, I've just noticed that get_submatrix() in SVN *claims* to do > what I want it to do, but actually doesn't. It works fine with > slices, but with tuples it always returns a 1x1. Is it supposed to do > that? (There's no test case for this particular functionality and the > documentation is, shall we say, sparse) It works for slices only. 
Yes, the docstring, as I reread it now, is not clear. What it should say is, that the slice can be given as: 1. slice object 2. a tuple (from, to) 3. a scalar for single row/column selection The method should probably be named _get_submatrix(), as it is used internally in __getitem__(). r. From cimrman3 at ntc.zcu.cz Fri Dec 14 10:00:42 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 14 Dec 2007 16:00:42 +0100 Subject: [SciPy-user] ANN: SFE-00.35.01 Message-ID: <47629A9A.8030708@ntc.zcu.cz> Let me announce SFE-00.35.01, bringing per term integration - now each term can use its own quadrature points. This is a major change at the heart of the code - some parts may not work as all terms were not migrated yet to the new framework. All test examples work, though, as well as acoustic band gaps. See http://ui505p06-mbs.ntc.zcu.cz/sfe . SFE is a finite element analysis software written almost entirely in Python. The code is released under BSD license. best regards, r. From s.mientki at ru.nl Fri Dec 14 10:52:44 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 14 Dec 2007 16:52:44 +0100 Subject: [SciPy-user] static magnetic field, simulation ? Message-ID: <4762A6CC.80602@ru.nl> hello, Does anyone knows an implementation of the calculation and vizialization of a static magnetic field and the induced forces ? I couldn't find anything in python on this subject, and outside python there's also almost nothing to find. Background, my son is doing a school project on magnetic levitation, my knowledge about magnetic fields is too rusty, and it would be a very nice example to put in PyLab Works ;-) thanks, Stef Mientki Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629. The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629. 
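As a starting point for such a simulation, the on-axis field of a circular current loop has a closed form, B(z) = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2)), which is easy to evaluate over a grid with numpy (a sketch; the current, radius, and grid values are made-up illustrative numbers):

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability, T*m/A
I = 1.0             # current in amperes (illustrative value)
R = 0.05            # loop radius in metres (illustrative value)

# On-axis field of a circular loop: B(z) = mu0*I*R^2 / (2*(R^2+z^2)^(3/2)).
# The peak at z = 0 reduces to the textbook value mu0*I/(2*R).
z = np.linspace(-0.2, 0.2, 401)
B = mu0 * I * R**2 / (2.0 * (R**2 + z**2)**1.5)
print(B.max())
```

Off-axis and multi-magnet fields need a numerical Biot-Savart integration, but this one-liner already gives something to plot.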
From bryanv at enthought.com Fri Dec 14 11:16:32 2007
From: bryanv at enthought.com (Bryan Van de Ven)
Date: Fri, 14 Dec 2007 10:16:32 -0600
Subject: [SciPy-user] static magnetic field, simulation ?
In-Reply-To: <4762A6CC.80602@ru.nl>
References: <4762A6CC.80602@ru.nl>
Message-ID: <4762AC60.2060808@enthought.com>

http://www.mare.ee/indrek/ephi/ might have what you want.

Stef Mientki wrote:
> hello,
>
> Does anyone know of an implementation of the calculation and
> visualization of a static magnetic field and the induced forces?
> I couldn't find anything in Python on this subject, and outside Python
> there's also almost nothing to find.
>
> Background: my son is doing a school project on magnetic levitation,
> my knowledge about magnetic fields is too rusty, and it would be a
> very nice example to put in PyLab Works ;-)
>
> thanks,
> Stef Mientki

From nwagner at iam.uni-stuttgart.de Fri Dec 14 14:04:39 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 14 Dec 2007 20:04:39 +0100
Subject: [SciPy-user] Coo_constructor
Message-ID:

Hi all,

I use the coo constructor to build sparse matrices:
A_i = coo_matrix( (vals_i, (rows_i, cols_i)) )

I read the data for each matrix from a different file.

One of the matrices is rank one, with only one non-zero entry.
(7, 7) 5.0 Another matrix has the entries (0, 0) 80000000.0 (0, 1) -120000000.0 (1, 0) -120000000.0 (1, 1) 480000000.0 (0, 2) 40000000.0 (2, 0) 40000000.0 (2, 2) 160000000.0 (1, 3) -240000000.0 (3, 1) -240000000.0 (2, 3) -120000000.0 (3, 2) -120000000.0 (3, 3) 480000000.0 (1, 4) 120000000.0 (4, 1) 120000000.0 (2, 4) 40000000.0 (4, 2) 40000000.0 (4, 4) 160000000.0 (3, 5) -240000000.0 (5, 3) -240000000.0 (4, 5) -120000000.0 (5, 4) -120000000.0 (5, 5) 480000000.0 (3, 6) 120000000.0 (6, 3) 120000000.0 (4, 6) 40000000.0 : : (9, 11) -240000000.0 (11, 9) -240000000.0 (10, 11) -120000000.0 (11, 10) -120000000.0 (11, 11) 480000000.0 (9, 12) 120000000.0 (12, 9) 120000000.0 (10, 12) 40000000.0 (12, 10) 40000000.0 (12, 12) 160000000.0 (11, 13) -240000000.0 (13, 11) -240000000.0 (12, 13) -120000000.0 (13, 12) -120000000.0 (13, 13) 480000000.0 (11, 14) 120000000.0 (14, 11) 120000000.0 (12, 14) 40000000.0 (14, 12) 40000000.0 (14, 14) 160000000.0 (13, 15) 120000000.0 (15, 13) 120000000.0 (14, 15) 40000000.0 (15, 14) 40000000.0 (15, 15) 80000000.0 How can I circumvent the ValueError: shape mismatch: objects cannot be broadcast to a single shape if I try to add the matrices ? Nils From lbolla at gmail.com Fri Dec 14 14:31:22 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 14 Dec 2007 20:31:22 +0100 Subject: [SciPy-user] Coo_constructor In-Reply-To: References: Message-ID: <80c99e790712141131o3d8ee818x730f4f5e44d8d4d2@mail.gmail.com> You can always define the sparse matrix dimension in this way: scipy.sparse.coo_matrix(([5.], [[7.],[7.]]), dims=(16, 16)) L. On Dec 14, 2007 8:04 PM, Nils Wagner wrote: > Hi all, > > I use the coo constructor to build sparse matrices. > A_i = coo_matrix( (vals_i, (rows_i, cols_i)) ) > > I read the data for each matrix from a different file. > > One of the matrices is rank-one with only non-zero entry. 
> > (7, 7) 5.0 > > Another matrix has the entries > > (0, 0) 80000000.0 > (0, 1) -120000000.0 > (1, 0) -120000000.0 > (1, 1) 480000000.0 > (0, 2) 40000000.0 > (2, 0) 40000000.0 > (2, 2) 160000000.0 > (1, 3) -240000000.0 > (3, 1) -240000000.0 > (2, 3) -120000000.0 > (3, 2) -120000000.0 > (3, 3) 480000000.0 > (1, 4) 120000000.0 > (4, 1) 120000000.0 > (2, 4) 40000000.0 > (4, 2) 40000000.0 > (4, 4) 160000000.0 > (3, 5) -240000000.0 > (5, 3) -240000000.0 > (4, 5) -120000000.0 > (5, 4) -120000000.0 > (5, 5) 480000000.0 > (3, 6) 120000000.0 > (6, 3) 120000000.0 > (4, 6) 40000000.0 > : : > (9, 11) -240000000.0 > (11, 9) -240000000.0 > (10, 11) -120000000.0 > (11, 10) -120000000.0 > (11, 11) 480000000.0 > (9, 12) 120000000.0 > (12, 9) 120000000.0 > (10, 12) 40000000.0 > (12, 10) 40000000.0 > (12, 12) 160000000.0 > (11, 13) -240000000.0 > (13, 11) -240000000.0 > (12, 13) -120000000.0 > (13, 12) -120000000.0 > (13, 13) 480000000.0 > (11, 14) 120000000.0 > (14, 11) 120000000.0 > (12, 14) 40000000.0 > (14, 12) 40000000.0 > (14, 14) 160000000.0 > (13, 15) 120000000.0 > (15, 13) 120000000.0 > (14, 15) 40000000.0 > (15, 14) 40000000.0 > (15, 15) 80000000.0 > > > How can I circumvent the > ValueError: shape mismatch: objects cannot be broadcast to > a single shape > > if I try to add the matrices ? > > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Fri Dec 14 19:12:19 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 14 Dec 2007 19:12:19 -0500 Subject: [SciPy-user] Sparse with fast element-wise multiply? Message-ID: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> Hi folks, I'm currently using sparse.lil_matrix to do element-wise multiplication of two sparse matrices, and, well, it's really slow. Any suggestions for something speedier? 
(it seems csr and csc matrices don't have .multiply()).

David

From dwf at cs.toronto.edu Fri Dec 14 19:21:43 2007
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 14 Dec 2007 19:21:43 -0500
Subject: [SciPy-user] arbitrary submatrices of a sparse matrix?
In-Reply-To: <47625911.1010400@ntc.zcu.cz>
References: <9406141D-E419-4B6C-A4B4-033ACC2E1DF2@cs.toronto.edu> <47625911.1010400@ntc.zcu.cz>
Message-ID:

On 14-Dec-07, at 5:21 AM, Robert Cimrman wrote:

> It works for slices only. Yes, the docstring, as I reread it now, is
> not clear. What it should say is that the slice can be given as:
> 1. a slice object
> 2. a tuple (from, to)
> 3. a scalar for single row/column selection
> The method should probably be named _get_submatrix(), as it is used
> internally in __getitem__().

Ah, I figured that out, yes, the tuple is just a start/end point. I
ended up using the coo_matrix format and just manually do the element
selection and reindexing. I've pasted a copy of my code here, so that
it's in the archives (and so that others may comment and suggest
improvements). I'll also submit a patch for sparse.py that adds a
method for this, in case the authors think it warrants inclusion.

Cheers,

DWF

import scipy as S
from numpy import ones, arange, array, asarray

def coo_submatrix_pull(matr, rows, cols):
    """
    Pulls out an arbitrary (i.e. non-contiguous) submatrix out of a
    sparse.coo_matrix. This does create a copy, however, which is
    probably not ideal.
    """
    if type(matr) != S.sparse.coo_matrix:
        raise TypeError('Matrix must be sparse COOrdinate format')
    rows = asarray(rows)
    cols = asarray(cols)
    # Map each kept old index to its new position; -1 marks dropped ones.
    gr = -1 * ones(matr.shape[0], dtype=int)
    gc = -1 * ones(matr.shape[1], dtype=int)
    lr = len(rows)
    lc = len(cols)
    ar = arange(0, lr)
    ac = arange(0, lc)
    gr[rows[ar]] = ar
    gc[cols[ac]] = ac
    # Keep only stored entries whose row AND column were both selected.
    newelem = (gr[matr.row] > -1) & (gc[matr.col] > -1)
    newrows = matr.row[newelem]
    newcols = matr.col[newelem]
    return S.sparse.coo_matrix((matr.data[newelem],
                                array([gr[newrows], gc[newcols]])),
                               (lr, lc))

From tjhnson at gmail.com Sat Dec 15 00:26:59 2007
From: tjhnson at gmail.com (Tom Johnson)
Date: Fri, 14 Dec 2007 21:26:59 -0800
Subject: [SciPy-user] Mathematica Element-wise Multiplication
Message-ID:

While playing with Mathematica, I must say that I was surprised to find
that it handled element-wise multiplication differently from scipy.

In scipy,

>>> A = array([[1,2],[3,4]])
>>> B = array([2,3])
>>> A * B
array([[2,6],[6,12]])

which essentially multiplies each COLUMN of A with each COLUMN of B.

In Mathematica,

>>> A = {{1,2},{3,4}}
>>> B = {2,3}
>>> A * B
{{2,4},{9,12}}

which essentially multiplies each ROW of A with each COLUMN of B. I
actually find this method of multiplication more intuitive (as it is
like matrix multiplication). Now, scipy can easily obtain the
Mathematica result by the following:

>>> A = array([[1,2],[3,4]])
>>> B = array([[2],[3]])
>>> A * B
array([[2,4],[9,12]])

but it isn't as easy to get Mathematica to return the scipy result...
that is, the following natural attempt does not work:

>>> A = {{1,2},{3,4}}
>>> B = {{2},{3}}
>>> A * B
Thread::tdlen: Objects of unequal length in {1,2}{2} cannot be combined.
Thread::tdlen: Objects of unequal length in {3,4}{3} cannot be combined.

and it is easy to see why Mathematica is unable to complete this
operation... but of course there are other ways to achieve this result.

Anyway, I just thought the difference was interesting....
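The two behaviours compared in this thread can be reproduced side by side in numpy; adding a trailing axis gives the Mathematica-style per-row pairing:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([2, 3])

# numpy broadcasting pairs B with the LAST axis of A (per column):
scipy_style = A * B
# Adding a trailing axis pairs B with the rows instead (per row),
# which matches Mathematica's A * B:
mathematica_style = A * B[:, np.newaxis]
print(scipy_style)        # [[ 2  6] [ 6 12]]
print(mathematica_style)  # [[ 2  4] [ 9 12]]
```

Broadcasting aligns shapes from the trailing dimension, which is why the two conventions differ only in where the extra axis sits.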
From stefan at sun.ac.za Sat Dec 15 06:31:42 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 15 Dec 2007 13:31:42 +0200 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: Message-ID: <20071215113142.GH7167@mentat.za.net> Hi Tom On Fri, Dec 14, 2007 at 09:26:59PM -0800, Tom Johnson wrote: > While playing with Mathematica, I must say that I was surprised to > find that it handled element-wise multiplication differently from > scipy. > > In scipy, > > >>> A = array([[1,2],[3,4]]) > >>> B = array([2,3]) > >>> A * B > array([[2,6],[6,12]]) > > which essentially multiplies each COLUMN of A with each COLUMN of B. Numpy does what we call broadcasting. You can read more about it at http://www.scipy.org/EricsBroadcastingDoc If you'd like to have the Mathematica behaviour, you can always use the transpose property (i.e. x.T). Regards St?fan From rex at nosyntax.net Sat Dec 15 12:37:22 2007 From: rex at nosyntax.net (rex) Date: Sat, 15 Dec 2007 09:37:22 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <20071215113142.GH7167@mentat.za.net> References: <20071215113142.GH7167@mentat.za.net> Message-ID: <20071215173722.GF24062@nosyntax.net> Stefan van der Walt [2007-12-15 03:32]: >Numpy does what we call broadcasting. You can read more about it at > >http://www.scipy.org/EricsBroadcastingDoc Very nice! A big "Thank you!" to Eric for making such clear documentation available. The diagrams really help. 
-rex From dmitrey.kroshko at scipy.org Sat Dec 15 15:17:06 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 15 Dec 2007 22:17:06 +0200 Subject: [SciPy-user] Ann: OpenOpt v 0.15 (free optimization framework) Message-ID: <47643642.80808@scipy.org> Hi all, we are glad to inform you about OpenOpt v 0.15 (release), free (license: BSD) optimization framework for Python language programmers Changes since previous release (September 4): * some new classes * several new solvers written * some more solvers connected * NLP/NSP solver ralg can handle constrained problems * some bugfixes * some enhancements in graphical output (especially for constrained problems) Regards, OpenOpt developers http://scipy.org/scipy/scikits/wiki/OpenOpt http://openopt.blogspot.com/ From lbolla at gmail.com Sun Dec 16 14:03:49 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Sun, 16 Dec 2007 20:03:49 +0100 Subject: [SciPy-user] Sparse with fast element-wise multiply? In-Reply-To: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> References: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> Message-ID: <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> converting the matrix from lil_matrix to csc or csr format gives a 140x speed improvement on multiplication. In [4]: A = scipy.sparse.lil_matrix((1000, 1000)) In [5]: A[:,100] = scipy.rand(1000) In [6]: A[99, :] = scipy.rand(1000) In [7]: A.setdiag(scipy.rand(1000)) In [8]: %timeit A * A 10 loops, best of 3: 72.2 ms per loop In [9]: B = A.tocsr() In [10]: %timeit B * B 1000 loops, best of 3: 510 ?s per loop In [11]: C = A.tocsc() In [12]: %timeit C * C 1000 loops, best of 3: 527 ?s per loop hth, L. On Dec 15, 2007 1:12 AM, David Warde-Farley wrote: > Hi folks, > > I'm currently using sparse.lil_matrix to do element-wise > multiplication of two sparse matrices, and, well, it's really slow. > Any suggestions for something speedier? (it seems csr and csc matrices > don't have .multiply()). 
> > David > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Sun Dec 16 14:50:12 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 16 Dec 2007 14:50:12 -0500 Subject: [SciPy-user] Sparse with fast element-wise multiply? In-Reply-To: <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> References: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> Message-ID: <65C83323-C9E0-487A-8CFE-A5238B0A4C57@cs.toronto.edu> On 16-Dec-07, at 2:03 PM, lorenzo bolla wrote: > converting the matrix from lil_matrix to csc or csr format gives a > 140x speed improvement on multiplication. > > In [4]: A = scipy.sparse.lil_matrix((1000, 1000)) > > In [5]: A[:,100] = scipy.rand(1000) > > In [6]: A[99, :] = scipy.rand(1000) > > In [7]: A.setdiag(scipy.rand(1000)) Hi Lorenzo, Thanks for your reply. Actually, though, I'm not looking for matrix- multiply; I know CSR/CSC is faster for that. I'm looking for elementwise multiply, i.e. take two matrices of the same size, multiply each element in matrix 1 with the corresponding element in matrix 2, and put the result in the same position in the result matrix (as with matlab's a .* b syntax). It seems lil_matrix is the only sparse type that implements .multiply(), and converting to lil_matrix is slow, so I've written my own function for multiplying two coo_matrix's together. It's a LITTLE slower than .multiply() but faster than converting both matrices to lil_matrix and then multiplying. I'm not entirely sure how CSR and CSC work so it might be possible to implement faster elementwise multiplication on top of them. 
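In later SciPy versions the CSR/CSC classes do implement .multiply(), so the element-wise (Hadamard) product no longer requires lil_matrix or a hand-rolled COO routine (a sketch against the modern API; the random test matrices are illustrative):

```python
import numpy as np
from scipy import sparse

A = sparse.random(100, 100, density=0.05, format='csr', random_state=1)
B = sparse.random(100, 100, density=0.05, format='csr', random_state=2)

# Element-wise product; the result is zero wherever either factor is,
# so it stays sparse.
C = A.multiply(B)
print(C.shape)  # (100, 100)
```

The result agrees entry-for-entry with the dense product `A.toarray() * B.toarray()`.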
David

From cohen at slac.stanford.edu Sun Dec 16 19:28:14 2007
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Sun, 16 Dec 2007 16:28:14 -0800
Subject: [SciPy-user] Mathematica Element-wise Multiplication
In-Reply-To: <20071215113142.GH7167@mentat.za.net>
References: <20071215113142.GH7167@mentat.za.net>
Message-ID: <4765C29E.6070306@slac.stanford.edu>

Actually I do not manage to use the .T or .transpose() method on 1D
arrays:

In [42]: a = array([[ 0.0, 0.0, 0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]])
<-- this is example 3 of this indeed very nice tutorial on broadcasting

In [43]: b = array([1.0,2.0,3.0,4.0])

In [44]: a+b
ValueError: shape mismatch: objects cannot be broadcast to a single shape
<----- fine, mismatch of trailing dimensions

In [45]: b.transpose()
Out[45]: array([ 1., 2., 3., 4.])
<------ ominous: no change

In [47]: a+b.transpose()
ValueError: shape mismatch: objects cannot be broadcast to a single shape
<----- transpose did not work

In [48]: a+b.T
ValueError: shape mismatch: objects cannot be broadcast to a single shape
<------ nor .T

In [49]: b = array([[1.0],[2.0],[3.0],[4.0]])

In [51]: a+b
Out[51]:
array([[  1.,   1.,   1.],
       [ 12.,  12.,  12.],
       [ 23.,  23.,  23.],
       [ 34.,  34.,  34.]])
<------------ expected result for the broadcast of b as a column vector into a

In [52]: aa=array([[1,2],[3,4]])

In [53]: aa.transpose()
Out[53]:
array([[1, 3],
       [2, 4]])
<------------- transpose and T work for this 2D array

In [55]: aa.T
Out[55]:
array([[1, 3],
       [2, 4]])

So, what is going on? Is that a bug, a feature, or some weird failure
of my install?

thanks,
Johann

Stefan van der Walt wrote:
> Hi Tom
>
> On Fri, Dec 14, 2007 at 09:26:59PM -0800, Tom Johnson wrote:
>
>> While playing with Mathematica, I must say that I was surprised to
>> find that it handled element-wise multiplication differently from
>> scipy.
>> >> In scipy, >> >>>>> A = array([[1,2],[3,4]]) >>>>> B = array([2,3]) >>>>> A * B >>>>> >> array([[2,6],[6,12]]) >> >> which essentially multiplies each COLUMN of A with each COLUMN of B. >> > > Numpy does what we call broadcasting. You can read more about it at > > http://www.scipy.org/EricsBroadcastingDoc > > If you'd like to have the Mathematica behaviour, you can always use > the transpose property (i.e. x.T). > > Regards > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Sun Dec 16 21:28:48 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 17 Dec 2007 11:28:48 +0900 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <4765C29E.6070306@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> Message-ID: <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> Johann Cohen-Tanugi wrote: > Actually I do not manage to use .T or .transpose() method on 1D arrays : > > In [42]: a = array([[ 0.0, 0.0, > 0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]]) <--this is > example 3 of this indeed very nice tutorial on broadcasting > In [43]: b = array([1.0,2.0,3.0,4.0]) > In [44]: a+b > ValueError: shape mismatch: objects cannot be broadcast to a single > shape <----- fine, mismatch of trailing dimensions > > In [45]: b.transpose() > Out[45]: array([ 1., 2., 3., 4.]) <------ ominous: no change This is confusing if you are coming from matlab and similar software (it was for me, at least). In matlab, any array is rank 2 by default (let's put aside rank > 2 for now). That is, in matlab, a = [1, 2, 3] gives you a row array, which is interpreted as a matrix: size(a) gives you [1, 3]. In numpy, this is not the case: there is a real difference between "row arrays", "column arrays" and vectors.
a = array([1, 2, 3]) a.ndim <----------- is 1 (rank 1) a = array([[1], [2], [3]]) a.ndim <----------- is 2 (rank 2: column) a = array([[1, 2, 3]]) a.ndim <----------- is 2 (rank 2: row) In Matlab, there is no such difference, and it is really ingrained in the software (for example, at the C api level, the function to get the number of dimensions, alas the 'rank', always returns at least 2). To solve your problem, you should have a new dimension: b = array([1., 2, 3]) b.ndim <----------- 1 b[:, numpy.newaxis] b.ndim <----------- 2 (this will be a column array: b is now exactly the same as array([[1.], [2], [3]]) b = array([1., 2, 3]) b[numpy.newaxis, :] b.ndim <----------- 2 (this will be a row array: b is now exactly the same as array([[1., 2, 3]]) cheers, David From cohen at slac.stanford.edu Mon Dec 17 00:10:15 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 16 Dec 2007 21:10:15 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> Message-ID: <476604B7.4060400@slac.stanford.edu> thanks for these precisions, David. Reading it, I still come to think that it is a potential source of confusion to let a "row array" have a transpose or T method, that essentially does nothing. I guess there is a reason behind this situation, but given the fact that it is there, I am wondering whether T or transpose of a row array could in fact return what it is expected to, aka the 2d column array. Is there any reason not to have this functionality? 
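One nuance worth making explicit about the newaxis examples above: indexing with `numpy.newaxis` returns a *new* array rather than changing `b` in place, so the result has to be captured in a variable. A minimal runnable version:

```python
import numpy

b = numpy.array([1.0, 2.0, 3.0, 4.0])
assert b.ndim == 1            # a true 1-D vector, shape (4,)

col = b[:, numpy.newaxis]     # a new 2-D array; b itself is unchanged
assert b.ndim == 1
assert col.shape == (4, 1)    # column: same as array([[1.], [2.], [3.], [4.]])

row = b[numpy.newaxis, :]
assert row.shape == (1, 4)    # row: same as array([[1., 2., 3., 4.]])

# this is exactly what makes the broadcasting example in the thread work:
a = numpy.zeros((4, 3))
assert (a + col).shape == (4, 3)
```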
best, Johann David Cournapeau wrote: > Johann Cohen-Tanugi wrote: > >> Actually I do not manage to use .T or .transpose() method on 1D arrays : >> >> In [42]: a = array([[ 0.0, 0.0, >> 0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]]) <--this is >> example 3 of thiis indeed very nice tutorial on broadcasting >> In [43]: b = array([1.0,2.0,3.0,4.0]) >> In [44]: a+b >> ValueError: shape mismatch: objects cannot be broadcast to a single >> shape <----- fine, mismatch of trailing dimensions >> >> In [45]: b.transpose() >> Out[45]: array([ 1., 2., 3., 4.]) <------ ominous : no change >> > This is confusing if you are coming from matlab and similar softwares > (it was for me, at least). In matlab, any array is rank 2 by default > (let's put aside rank > 2 for now). That is, in matlab, a = [1, 2, 3] > gives you a row array, which is interpreted as a matrix: size(a) gives > you [1, 3]. In numpy, this is not the case: there is a real difference > between "row arrays", "column arrays" and vectors. > > a = array([1, 2, 3]) > a.ndim <----------- is 1 (rank 1) > a = array([[1], [2], [3]]) > a.ndim <----------- is 2 (rank 2: column) > a = array([[1, 2, 3]]) > a.ndim <----------- is 2 (rank 2: row) > > In Matlab, there is no such difference, and it is really ingrained in > the software (for example, at the C api level, the function to get the > number of dimensions, alas the 'rank', always returns at least 2). 
To > solve your problem, you should have a new dimension: > > b = array([1., 2, 3]) > b.ndim <----------- 1 > b[:, numpy.newaxis] > b.ndim <----------- 2 (this will be a column array: b is now exactly the > same as array([[1.], [2], [3]]) > b = array([1., 2, 3]) > b[numpy.newaxis, :] > b.ndim <----------- 2 (this will be a row array: b is now exactly the > same as array([[1., 2, 3]]) > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Mon Dec 17 01:48:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 17 Dec 2007 07:48:05 +0100 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <476604B7.4060400@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> Message-ID: 2007/12/17, Johann Cohen-Tanugi : > > thanks for these precisions, David. Reading it, I still come to think > that it is a potential source of confusion to let a "row array" have a > transpose or T method, that essentially does nothing. In object oriented code, this can happen often, but it is not a problem. It does what you want : inverse the axis, even if there is only one axis. I guess there is a > reason behind this situation, but given the fact that it is there, I am > wondering whether T or transpose of a row array could in fact return > what it is expected to, aka the 2d column array. Is there any reason not > to have this functionality? More code in a simple function (thus slower) ? Breaking some code that depend on it ? Not doing what the documentation says ? I think the method should not be changed, it does exactly what it says it does. Changing its behaviour because of Matlab is not a good solution, Python is not Matlab. 
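The point that `transpose` reverses the axes — and with one axis there is nothing to reverse — can be checked directly for ranks 1, 2, and 3 (a small numpy sketch):

```python
import numpy

v = numpy.array([1, 2, 3])          # one axis: nothing to reverse
assert v.T.shape == (3,)            # transpose is a no-op, by design

m = numpy.arange(6).reshape(2, 3)   # two axes: they get swapped
assert m.T.shape == (3, 2)

t = numpy.zeros((2, 3, 4))          # rank 3: the axes are reversed
assert t.transpose().shape == (4, 3, 2)
```

The rank-3 case shows why the rule is "reverse the axes" rather than "turn rows into columns": the 2-D behaviour is just a special case.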
Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Mon Dec 17 02:49:38 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 16 Dec 2007 23:49:38 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> Message-ID: <47662A12.7060107@slac.stanford.edu> Matthieu Brucher wrote: > > > 2007/12/17, Johann Cohen-Tanugi >: > > thanks for these precisions, David. Reading it, I still come to think > that it is a potential source of confusion to let a "row array" have a > transpose or T method, that essentially does nothing. > > > > In object oriented code, this can happen often, but it is not a > problem. It does what you want : inverse the axis, even if there is > only one axis. hmmm...... okay... What I wanted was to transpose a 1D array into a vector, or vice-versa, with the linear algebra behavior in mind. I understand that numpy does not follow this, but I cannot believe that this behavior *is* what everybody wants! Tom's initial email was symptomatic, and Stefan's response, with the proposal to use the T method even more so! Assuming that this natural linear algebra could be retrieved when, and *only* when, the array is 1D, I do not see how such an implementation could break codes that depend on it, because I don't see why someone would call 'a.T' just to have 'a' again.... But it is probably my lack of imagination. Anyway, enough of this. I am sure the developers know better than me... 
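For what it's worth, the 1-D-to-column conversion being asked for is already a one-liner in several ways, with no change to `.T` required (a numpy sketch):

```python
import numpy

b = numpy.array([1.0, 2.0, 3.0, 4.0])

c1 = b[:, numpy.newaxis]      # add an axis by indexing
c2 = b.reshape(-1, 1)         # reshape; -1 means "infer this dimension"
c3 = numpy.atleast_2d(b).T    # promote to 2-D (a row), then transpose

assert c1.shape == c2.shape == c3.shape == (4, 1)
```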
cheers, Johann > > > I guess there is a > reason behind this situation, but given the fact that it is there, > I am > wondering whether T or transpose of a row array could in fact return > what it is expected to, aka the 2d column array. Is there any > reason not > to have this functionality? > > > > More code in a simple function (thus slower) ? Breaking some code that > depend on it ? Not doing what the documentation says ? > I think the method should not be changed, it does exactly what it says > it does. Changing its behaviour because of Matlab is not a good > solution, Python is not Matlab. > > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and > http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Mon Dec 17 08:04:01 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 17 Dec 2007 08:04:01 -0500 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <47662A12.7060107@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu><47662A12.7060107@slac.stanford.edu> Message-ID: On Sun, 16 Dec 2007, Johann Cohen-Tanugi apparently wrote: > Assuming that this natural linear algebra could be > retrieved when, and only when, the array is 1D You appear to be talking about a "natural" linear algebra for 2D objects. For example, when textbooks talk of row vectors and column vectors, they are talking about 2D objects, even though they use a simplified indexing. 
(This vocabulary is only slightly misleading: after all, matrices of a given dimension are also vectors and can be indexed with a single index---Matlab allows this (column major) and NumPy allows this too (row major, via the `flat` attribute).) If that is what you always seek, stick with matrices:: >>> x=numpy.mat([0,1,2]) >>> x matrix([[0, 1, 2]]) >>> x.T matrix([[0], [1], [2]]) >>> In the meantime, you can always add an axis if you wish:: >>> x array([0, 1, 2]) >>> y=x[:,None] >>> y array([[0], [1], [2]]) hth, Alan Isaac From matthieu.brucher at gmail.com Mon Dec 17 09:01:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 17 Dec 2007 15:01:23 +0100 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <47662A12.7060107@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> Message-ID: > > > In object oriented code, this can happen often, but it is not a > > problem. It does what you want : inverse the axis, even if there is > > only one axis. > hmmm...... okay... What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. I > understand that numpy does not follow this, but I cannot believe that > this behavior *is* what everybody wants! Tom's initial email was > symptomatic, and Stefan's response, with the proposal to use the T > method even more so! I think that what you really want is to use the matrix class instead of the array. A matrix always has 2 dimensions. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bblais at bryant.edu Mon Dec 17 10:29:13 2007 From: bblais at bryant.edu (Brian Blais) Date: Mon, 17 Dec 2007 10:29:13 -0500 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <47662A12.7060107@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> Message-ID: <320E5E02-63B0-47B4-9780-B77BE578E786@bryant.edu> On Dec 17, 2007, at Dec 17:2:49 AM, Johann Cohen-Tanugi wrote: > Matthieu Brucher wrote: >> >> >> 2007/12/17, Johann Cohen-Tanugi > >: >> >> thanks for these precisions, David. Reading it, I still come >> to think >> that it is a potential source of confusion to let a "row >> array" have a >> transpose or T method, that essentially does nothing. >> >> >> >> In object oriented code, this can happen often, but it is not a >> problem. It does what you want : inverse the axis, even if there is >> only one axis. > hmmm...... okay... What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. if you have linear algebra in mind, then you can use a matrix...it works the way you expect: In [1]:from numpy import * In [2]:a=array([1,2,3,4]) In [3]:a.shape Out[3]:(4,) In [4]:b=a.T In [5]:b.shape Out[5]:(4,) In [6]:a=matrix([1,2,3,4]) In [7]:a.shape Out[7]:(1, 4) In [8]:b=a.T In [9]:b.shape Out[9]:(4, 1) In [10]:a Out[10]:matrix([[1, 2, 3, 4]]) In [11]:b Out[11]: matrix([[1], [2], [3], [4]]) bb -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cohen at slac.stanford.edu Mon Dec 17 11:30:20 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 17 Dec 2007 08:30:20 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <320E5E02-63B0-47B4-9780-B77BE578E786@bryant.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> <320E5E02-63B0-47B4-9780-B77BE578E786@bryant.edu> Message-ID: <4766A41C.4040200@slac.stanford.edu> hi Brian, point well taken. Thank you! I definitely use array more than I should then. best, Johann Brian Blais wrote: > > On Dec 17, 2007, at Dec 17:2:49 AM, Johann Cohen-Tanugi wrote: > >> Matthieu Brucher wrote: >>> >>> >>> 2007/12/17, Johann Cohen-Tanugi >> >>> >: >>> >>> thanks for these precisions, David. Reading it, I still come to >>> think >>> that it is a potential source of confusion to let a "row array" >>> have a >>> transpose or T method, that essentially does nothing. >>> >>> >>> >>> In object oriented code, this can happen often, but it is not a >>> problem. It does what you want : inverse the axis, even if there is >>> only one axis. >> hmmm...... okay... What I wanted was to transpose a 1D array into a >> vector, or vice-versa, with the linear algebra behavior in mind. 
> > if you have linear algebra in mind, then you can use a matrix...it > works the way you expect: > > In [1]:from numpy import * > > In [2]:a=array([1,2,3,4]) > > In [3]:a.shape > Out[3]:(4,) > > In [4]:b=a.T > > In [5]:b.shape > Out[5]:(4,) > > In [6]:a=matrix([1,2,3,4]) > > In [7]:a.shape > Out[7]:(1, 4) > > In [8]:b=a.T > > In [9]:b.shape > Out[9]:(4, 1) > > In [10]:a > Out[10]:matrix([[1, 2, 3, 4]]) > > In [11]:b > Out[11]: > matrix([[1], > [2], > [3], > [4]]) > > > > > bb > -- > Brian Blais > bblais at bryant.edu > http://web.bryant.edu/~bblais > > > From cscheit at lstm.uni-erlangen.de Mon Dec 17 12:12:11 2007 From: cscheit at lstm.uni-erlangen.de (Christoph Scheit) Date: Mon, 17 Dec 2007 18:12:11 +0100 Subject: [SciPy-user] read_array problem In-Reply-To: References: Message-ID: <200712171812.11940.cscheit@lstm.uni-erlangen.de> Hello everybody, just for curiosity I have a question regarding read_array. When I use the scipy.io read_array function I observe some behaviour which I don't understand... First I call the function like this: self.varVector = read_array(f, columns=cols, lines=((0, 0 + bsize), )) with cols = ((1,3), ) bsize = 5832 and I get an exception: ValueError: Argument lines must be a sequence of integers and/or range tuples. Calling now read_array like this self.varVector = read_array(f, columns=cols, lines=((0, 0 + 5832), )) works fine to my surprise... cols is here the same like before. Thanks in advance, Christoph From whenney at gmail.com Mon Dec 17 14:01:34 2007 From: whenney at gmail.com (Will Henney) Date: Mon, 17 Dec 2007 19:01:34 +0000 (UTC) Subject: [SciPy-user] Mathematica Element-wise Multiplication References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> Message-ID: Johann Cohen-Tanugi slac.stanford.edu> writes: > hmmm...... okay... 
What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. I > understand that numpy does not follow this, but I cannot believe that > this behavior *is* what everybody wants! Tom's initial email was > symptomatic, and Stefan's response, with the proposal to use the T > method even more so! I think you want to be using numpy.matrix, rather than numpy.array: In [17]: m = numpy.matrix([1, 2, 3]) In [18]: m.T Out[18]: matrix([[1], [2], [3]]) Cheers Will From stefan at sun.ac.za Mon Dec 17 14:04:56 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 17 Dec 2007 21:04:56 +0200 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <47662A12.7060107@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> Message-ID: <20071217190455.GM21448@mentat.za.net> On Sun, Dec 16, 2007 at 11:49:38PM -0800, Johann Cohen-Tanugi wrote: > Matthieu Brucher wrote: > > > > > > 2007/12/17, Johann Cohen-Tanugi > >: > > > > thanks for these precisions, David. Reading it, I still come to think > > that it is a potential source of confusion to let a "row array" have a > > transpose or T method, that essentially does nothing. > > > > > > > > In object oriented code, this can happen often, but it is not a > > problem. It does what you want : inverse the axis, even if there is > > only one axis. > hmmm...... okay... What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. I > understand that numpy does not follow this, but I cannot believe that > this behavior *is* what everybody wants! Tom's initial email was > symptomatic, and Stefan's response, with the proposal to use the T > method even more so! 
I'm not sure what my response was symptomatic of, but if you don't like the behaviour of ndarrays, you may consider using 'matlib': In [1]: from numpy import matlib In [2]: x = matlib.matrix([1,2,3]) In [3]: x Out[3]: matrix([[1, 2, 3]]) In [4]: x.T Out[4]: matrix([[1], [2], [3]]) In [5]: x * x.T Out[5]: matrix([[14]]) > Assuming that this natural linear algebra could be retrieved when, and > *only* when, the array is 1D, I do not see how such an implementation > could break codes that depend on it, because I don't see why someone > would call 'a.T' just to have 'a' again.... But it is probably my lack > of imagination. When I call "transpose", I don't expect an array to change dimensions. Like Matthieu wrote, it swaps the axes, and if you have only one axis there is nothing to swap. Start with a two-dimensional array, and the transpose will do what you expect. Regards Stéfan From lev at columbia.edu Mon Dec 17 19:38:18 2007 From: lev at columbia.edu (Lev Givon) Date: Mon, 17 Dec 2007 19:38:18 -0500 Subject: [SciPy-user] illegal instruction error in scipy.linalg.decomp.svd Message-ID: <20071218003816.GE15380@localhost.cc.columbia.edu> On a Pentium 4 Linux box running python 2.5.1, scipy 0.6.0, numpy 1.0.4, and lapack 3.0, I recently noticed that scipy.linalg.decomp.svd() fails (and causes python to crash) with an "illegal instruction" error. A bit of debugging revealed that the error occurs in the line lwork = calc_lwork.gesdd(gesdd.prefix,m,n,compute_uv)[1] in scipy/linalg/decomp.py. Interestingly, scipy 0.5.2.1 on the same box (with all of the other software packages unchanged) does not exhibit this problem. Moreover, when I install scipy 0.6.0 along with all of the other packages on a Linux machine containing a Pentium D CPU, I do not observe the problem either. Being that I am running Mandriva Linux, closed scipy bug 540 caught my eye.
I'm not sure how it could be related to the above problem, though (and I also do not know what the lapack patch mentioned in the ticket could have been - even though I have been maintaining Mandriva's lapack lately :-). Thoughts? L.G. From david at ar.media.kyoto-u.ac.jp Mon Dec 17 22:46:39 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 18 Dec 2007 12:46:39 +0900 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <47662A12.7060107@slac.stanford.edu> References: <20071215113142.GH7167@mentat.za.net> <4765C29E.6070306@slac.stanford.edu> <4765DEE0.9030401@ar.media.kyoto-u.ac.jp> <476604B7.4060400@slac.stanford.edu> <47662A12.7060107@slac.stanford.edu> Message-ID: <4767429F.5090802@ar.media.kyoto-u.ac.jp> Johann Cohen-Tanugi wrote: > Matthieu Brucher wrote: > >> 2007/12/17, Johann Cohen-Tanugi > >: >> >> thanks for these precisions, David. Reading it, I still come to think >> that it is a potential source of confusion to let a "row array" have a >> transpose or T method, that essentially does nothing. >> >> >> >> In object oriented code, this can happen often, but it is not a >> problem. It does what you want : inverse the axis, even if there is >> only one axis. >> > hmmm...... okay... What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. I > understand that numpy does not follow this, but I cannot believe that > this behavior *is* what everybody wants! Tom's initial email was > symptomatic, and Stefan's response, with the proposal to use the T > method even more so! > > Assuming that this natural linear algebra could be retrieved when, and > *only* when, the array is 1D, I do not see how such an implementation > could break codes that depend on it, because I don't see why someone > would call 'a.T' just to have 'a' again.... But it is probably my lack > of imagination. 
> Indeed, it is a lack of imagination :) I can assure you that it would break a lot of my code, for example. I really understand your problem: you can see on the numpy ML that 18 months ago, I was exactly in the same situation as you. I used matlab a lot, was quite proficient in it, and was confused by this. But it really makes sense once you forget about matlab. I personally do not use numpy.matrix; not that it does not make sense to use them (when you really want to do linear algebra), but I think it just prevents you from using numpy at 100 %. > Anyway, enough of this. I am sure the developers know better than me... > IMHO, the 'problem' is coming from the fact that arrays are used for two things in matlab/numpy/etc....: for linear algebra, of course, but also, perhaps even more important, for speed reasons. For various reasons, it is really hard to make a script language fast, in particular when looping, and vectorizing code is a good way to avoid most loops. When for example I use a rank 2 array to compute some statistics on it, I do not really think about it as an array (the maths on it is not, for sure), but as a list of arrays; I use arrays because it is much faster. If I had a choice, I would write the relevant code with loops (most of the time, it is clearer using loops). This is in clear contrast with linear algebra code where handling directly matrices and vectors is much clearer than doing loops manually. Matlab, for historical reasons I guess, is more geared towards linear algebra, whereas numpy is more geared toward 'using arrays for speed reasons'; in particular, broadcasting is much more powerful in numpy than it is in matlab. cheers, David From wnbell at gmail.com Mon Dec 17 23:41:32 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 18 Dec 2007 04:41:32 +0000 (UTC) Subject: [SciPy-user] Sparse with fast element-wise multiply?
References: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> <65C83323-C9E0-487A-8CFE-A5238B0A4C57@cs.toronto.edu> Message-ID: David Warde-Farley cs.toronto.edu> writes: > Thanks for your reply. Actually, though, I'm not looking for matrix- > multiply; I know CSR/CSC is faster for that. > > I'm looking for elementwise multiply, i.e. take two matrices of the > same size, multiply each element in matrix 1 with the corresponding > element in matrix 2, and put the result in the same position in the > result matrix (as with matlab's a .* b syntax). It seems lil_matrix is > the only sparse type that implements .multiply(), and converting to > lil_matrix is slow, so I've written my own function for multiplying > two coo_matrix's together. It's a LITTLE slower than .multiply() but > faster than converting both matrices to lil_matrix and then multiplying. > > I'm not entirely sure how CSR and CSC work so it might be possible to > implement faster elementwise multiplication on top of them. > > David > Currently elementwise multiplication is exposed through A**B where A and B are csr_matrix or csc_matrix objects. You can expect similar performance to A+B. I don't know why ** was chosen, it was that way before I started working on scipy.sparse. I've added a .multiply() method to the sparse matrix base class that goes through csr_matrix: http://projects.scipy.org/scipy/scipy/changeset/3682 From wnbell at gmail.com Mon Dec 17 23:52:37 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 18 Dec 2007 04:52:37 +0000 (UTC) Subject: [SciPy-user] Creating a sparse matrix References: <47612D92.4020408@ntc.zcu.cz> <47613329.4020603@ntc.zcu.cz> <476153A7.1020707@ntc.zcu.cz> Message-ID: Nils Wagner iam.uni-stuttgart.de> writes: > Many thanks. > The file contains only the upper triangular part of the > symmetric matrix. How can I add the lower part to A ? 
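One way to make upper-triangular triplet data symmetric is to mirror the off-diagonal (row, col, value) entries before building the matrix. A small numpy sketch, with made-up values for a 3x3 matrix, verified by assembling a dense copy:

```python
import numpy as np

# upper-triangular triplets of a symmetric 3x3 matrix (invented values)
row = np.array([0, 0, 1, 2])
col = np.array([0, 1, 2, 2])
data = np.array([1.0, 2.0, 3.0, 4.0])

mask = row != col                            # off-diagonal entries only
new_row = np.concatenate((row, col[mask]))   # mirror: swap row and col
new_col = np.concatenate((col, row[mask]))
new_data = np.concatenate((data, data[mask]))

# assemble a dense matrix from the triplets to check symmetry
M = np.zeros((3, 3))
M[new_row, new_col] = new_data
assert np.allclose(M, M.T)
```

Masking out the diagonal first matters: mirroring diagonal entries too would duplicate them.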
starting with row,col,data do the following: mask = row != col #off-diagonal mask new_row = concatenate((row,col[mask])) new_col = concatenate((col,row[mask])) new_data = concatenate((data,data[mask])) This adds the transpose of the off-diagonal part of the matrix to the matrix. From wnbell at gmail.com Tue Dec 18 00:02:08 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 18 Dec 2007 05:02:08 +0000 (UTC) Subject: [SciPy-user] scipy 0.6.0 ImportError: cannot import name cscmux References: <338357900711180250u1fd08b86ide9a504d664e0cc1@mail.gmail.com> Message-ID: youngsu park gmail.com> writes: > But it is not the members of sparsetools.py > I think it is the members of sparsetools.pyd. > > Move sparsetools.pyd to some backup directory. > Then it will works well. > > In [21]: from scipy import * > > In [22]: I believe sparsetools.pyd is from a previous scipy installation. Deleting it should work (as you've done). To be safe I would delete /site-packages/scipy and reinstall scipy. From wnbell at gmail.com Tue Dec 18 00:06:58 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 18 Dec 2007 05:06:58 +0000 (UTC) Subject: [SciPy-user] scipy.sparse problem References: Message-ID: Donald Fredkin ucsd.edu> writes: > Can anyone tell me how to fix this? It was OK with an earlier version > of scipy. > > This is, by the way, on Windows XP. Try the steps at the bottom of this thread: http://projects.scipy.org/scipy/scipy/ticket/516 From wnbell at gmail.com Tue Dec 18 00:16:08 2007 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 18 Dec 2007 05:16:08 +0000 (UTC) Subject: [SciPy-user] sparse matrix dtype References: Message-ID: Robin gmail.com> writes: > > On 10/9/07, Robin gmail.com> wrote: > I am trying to make a large sparse matrix - the values are all integers (in fact all non-zeros will be 1) so it would save me a lot of memory if I could use dtype=byte. 
> > I added 'b' to the string of allowed dtypes in getdtype() on line 2791 of > sparse.py. It now seems to behave as I would expect (hope), but it can't be that simple, can it? Is it likely that doing this will break something else? Why are the dtypes restricted in the first place? Thanks, Robin > FWIW integer dtypes are supported in the next release (and the current SVN) http://projects.scipy.org/scipy/scipy/ticket/225 The problem before was not on the python side. It was the backend code (sparse.sparsetools) that didn't support integer types. From amg at iri.columbia.edu Tue Dec 18 10:51:24 2007 From: amg at iri.columbia.edu (Arthur M. Greene) Date: Tue, 18 Dec 2007 10:51:24 -0500 Subject: [SciPy-user] Estimating AR(1) model? In-Reply-To: References: Message-ID: <4767EC7C.8090600@iri.columbia.edu> Well, it's not scipy, but this small Python package has a convenient routine for estimating the parameters of an autoregressive process: http://geosci.uchicago.edu/~cdieterich/span/ It is based on Numeric; you'll have to probe a little to figure out what algorithm is used for the estimation. Cheers, AMG Webb Sprague wrote: > Hi all, > > Is there a quick approach for estimating a simple AR(1) time series > model from a vector of data points? Currently I use linregress on > lagged data and roll my own standard error computations. It would be > great if someone smarter than I has done this already, especially if > they use MLE or Yule-Walker. > > I am using Gentoo, scipy.__version__ == '0.6.0', and I can't find a > scipy/stats/models directory in my installation, if that matters.
> > Thx > W > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From timmichelsen at gmx-topmail.de Tue Dec 18 15:42:25 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Tue, 18 Dec 2007 21:42:25 +0100 Subject: [SciPy-user] flagging no data in timeseries Message-ID: Hello, A while ago I was pointed to the timeseries module from sandbox by one of its developers. Thanks again for that. I am still having some troubles and questions: Since I am still very new to the package I need some hints. In my original data nodata values are marked with "-999". How can I import the data or create the time series and exclude these no data points from further processing? I presume it has to do with the mask keyword. But I cannot get it. Here is the code: data = N.loadtxt(mydata_file, comments='#', delimiter='\t', converters=None, skiprows=2, usecols=None, unpack=False) # define start date D_hr = TS.Date(freq='HR', year=1995, month=2, day=1, hour=1) # mask ## prepare time series # take only needed values myvalues = data[:,5] # hourly values myvalues_ts_hourly = TS.time_series(myvalues, start_date=D_hr) I will appreciate every help. For further readability and archivability I write my other questions in separate emails. Kind regards, Timmie From timmichelsen at gmx-topmail.de Tue Dec 18 16:09:05 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Tue, 18 Dec 2007 22:09:05 +0100 Subject: [SciPy-user] get result array after timeseries operations Message-ID: Hi, this is meant as a continuation of my earlier questions. How can one save or read the result of time series operations into an array? For instance, if I convert data in an hourly frequency to daily averages, how do I read the daily averages into an array for further processing?
when I print out my daily timeseries converted from hourly data I get something like this: In: myvalues_ts_daily Out: timeseries([ 1.4 89.4 3.5 ..., 11.5 1.6 0. ], dates = [01-Feb-1995 ... 01-Dec-2006], freq = D) What I would like is an array with just the values of the daily averages . Additional a report-like array output with the format day value 1 3 2 11 would be nice. How can I achieve this? Do I need to use the report functions [1]? Thanks again for the ice package. Things are looking nice! Regards, Timmie [1] Reports - http://www.scipy.org/SciPyPackages/TimeSeries#head-e7ce84a06d526536a962deb356ba6258e52b391d From whenney at gmail.com Wed Dec 19 01:10:59 2007 From: whenney at gmail.com (William Henney) Date: Wed, 19 Dec 2007 06:10:59 +0000 (UTC) Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) Message-ID: Hi list, I have been trying to install MayaVi2 on an intel Mac running 10.4.11, but I am running into a brick wall. I have been trying to follow the instructions I have found in various threads on this list and at https://svn.enthought.com/enthought/wiki/IntelMacPython25. I have managed to install all the dependencies (vtk, pyrex, wxpython, etc) but I just can't get the enthought.traits package to build. I have tried the versions from the SVN repo and also the tar balls from http://code.enthought.com/downloads/source/ets2.6/ but I always get the error when building enthought.traits: TypeError: swig_sources() takes exactly 3 arguments (2 given) An example of a full stack trace is given below. It seems that the problem has something to do with setuptools, and possibly with pyrex. I was initially using setuptools 0.6c7 and Pyrex 0.9.6.4, but after reading negative comments about 0.9.6, I downgraded to 0.9.5.1a, although this makes no difference. 
I have also tried using the latest development version of setuptools (0.7a1dev-r58661), but this doesn't work at all due to problems with "Namespace Packages" (with 0.6 these just give a warning, as seen in the output below). Does anyone have any idea how to solve this? Cheers Will Henney $ python setup.py build running build running build_py WARNING: enthought is a namespace package, but its __init__.py does not declare_namespace(); setuptools 0.7 will REQUIRE this! (See the setuptools manual under "Namespace Packages" for details.) WARNING: enthought.traits is a namespace package, but its __init__.py does not declare_namespace(); setuptools 0.7 will REQUIRE this! (See the setuptools manual under "Namespace Packages" for details.) WARNING: enthought.traits.ui is a namespace package, but its __init__.py does not declare_namespace(); setuptools 0.7 will REQUIRE this! (See the setuptools manual under "Namespace Packages" for details.) running egg_info writing requirements to enthought.traits.egg-info/requires.txt writing enthought.traits.egg-info/PKG-INFO writing namespace_packages to enthought.traits.egg-info/namespace_packages.txt writing top-level names to enthought.traits.egg-info/top_level.txt writing dependency_links to enthought.traits.egg-info/dependency_links.txt reading manifest file 'enthought.traits.egg-info/SOURCES.txt' writing manifest file 'enthought.traits.egg-info/SOURCES.txt' running build_ext building 'enthought.traits.ctraits' extension Traceback (most recent call last): File "setup.py", line 72, in zip_safe = False, File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/core.py", line 151, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/dist.py", line 994, in run_command cmd_obj.run() File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/setuptools-0.6c7-py2.5.egg/setuptools/command/build_ext.py", line 46, in run File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/command/build_ext.py", line 290, in run self.build_extensions() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/Pyrex/Distutils/build_ext.py", line 82, in build_extensions self.build_extension(ext) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/setuptools-0.6c7-py2.5.egg/setuptools/command/build_ext.py", line 175, in build_extension File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ distutils/command/build_ext.py", line 453, in build_extension sources = self.swig_sources(sources, ext) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/setuptools-0.6c7-py2.5.egg/setuptools/command/build_ext.py", line 77, in swig_sources TypeError: swig_sources() takes exactly 3 arguments (2 given) From fie.pye at atlas.cz Wed Dec 19 13:18:01 2007 From: fie.pye at atlas.cz (Pye Fie) Date: Wed, 19 Dec 2007 18:18:01 GMT Subject: [SciPy-user] FW: Re: Building VTK Message-ID: <90ec234f72464a65b7cf0ae9a1e9398e@837e8f6af700492f9509f88823a3a0c0> Hi. Finaly I managed to build and install VTK. 
Solution was to call g++34 instead of gcc34: CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/g++34 fie pye >--------------------------------------------------------- >From: Pye Fie >Received: 12.12.2007 23:59:38 >Subject: Re: [SciPy-user] Building VTK > > >Hi Matthieu. >Thank you for your response. I posted my question on the VTK ML first. >http://public.kitware.com/pipermail/vtkusers/2007-December/093627.html >Unfortunately I didn't get any answer until now. > >You asked about my compiler. >CentOS 5.0 is compiled with gcc 4.1. I installed also gcc 4.2 and gcc 3.4.6 on the PC. > >I would like to have a computational environment based on Python and its modules. I compiled Python and the modules IPython, SciPy, NumPy, tables, matplotlib, wxPython ......... and libraries such as BLAS, LAPACK, ATLAS, HDF5, ........ with gcc 3.4.6. Now I continue with compiling VTK with gcc 3.4.6 too. More information about the compilation setup is in the file CMakeCache.txt. > >Best regards. >Fie pye > > >From: Matthieu Brucher >Received: 11.12.2007 21:18:04 >Subject: Re: [SciPy-user] Building VTK >Besides, you tell us you use GCC 3.4.6 but in your description, it is GCC 4.2... >The issue may be solved by linking against GCC libraries, but again, the VTK ML may be a better place for the solution. >Matthieu >2007/12/11, Matthieu Brucher : Hi, >This issue is related to a C++ problem in VTK, not Python, so we can't say much. I advise you to post this on the VTK ML. >Matthieu >2007/12/11, Pye Fie < fie.pye at atlas.cz>: Hello. >I would like to build and install VTK and MayaVi2. >PC: 2x Dual-core Opteron >275 >OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 >OS compiler: gcc version 4.2.0, gfortran >I >have already built and installed python2.5.1 and other modules with compiler gcc 3.4.6. >Now I can't build VTK. I am building with gcc 3.4.6.
Building stops on the following >error: >[ 74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkZLibDataCompressorTcl.o >[ >74%] Building CXX object IO/CMakeFiles/vtkIOTCL.dir/vtkIOTCLInit.o >Linking CXX shared >library ../bin/libvtkIOTCL.so >[ 74%] Built target vtkIOTCL >Scanning dependencies >of target vtkParseOGLExt >[ 74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/ParseOGLExt.o >[ >74%] Building CXX object Utilities/ParseOGLExt/CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o >Linking >CXX executable ../../bin/vtkParseOGLExt >CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: >In function `std::__verify_grouping(char const*, unsigned long, std::basic_string >std::char_traits, std::allocator > const&)': >Tokenizer.cxx:(.text+0x19): >undefined reference to `std::basic_string, std::allocator >>::size() const' >Tokenizer.cxx:(.text+0x70): undefined reference to `std::basic_string >std::char_traits, std::allocator >::operator[](unsigned long) const' >Tokenizer.cxx : (.text+0xb0): >undefined reference to `std::basic_string, std::allocator >>::operator[](unsigned long) const' >Tokenizer.cxx:(.text+0xdd): undefined reference >to `std::basic_string, std::allocator >::operator[](unsigned >long) const' >CMakeFiles/vtkParseOGLExt.dir/Tokenizer.o: In function `Tokenizer::Tokenizer(char >const*, char const*)': >Tokenizer.cxx:(.text+0x12a): undefined reference to `std::allocator::allocator()' >Tokenizer.cxx:(.text+0x13b): >undefined reference to `std::basic_string, std::allocator >>::basic_string(char const*, std::allocator const&)' >Tokenizer.cxx:(.text+0x14e): >undefined reference to `std::allocator::~allocator()' >Tokenizer.cxx:(.text+0x164): >undefined reference to `std::allocator::~allocator()' >Tokenizer.cxx:(.text+0x16d): >undefined reference to `std::allocator::allocator()' >Tokenizer.cxx:(.text+0x182): >undefined reference to `std::basic_string, std::allocator >>::basic_string(char const*, std::allocator const&)' >The list continues. 
It seems that there is something wrong with OpenGL but I can't find what is wrong. Could >anybody help me? >I have attached more detailed information about VTK configuration, >building output and building script. >Best regards >fie pye >------------------------------------------ >http://search.atlas.cz/ >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user >-- >French PhD student >Website : http://matthieu-brucher.developpez.com/ >Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 >LinkedIn : http://www.linkedin.com/in/matthieubrucher From dwf at cs.toronto.edu Wed Dec 19 16:21:08 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 19 Dec 2007 16:21:08 -0500 Subject: [SciPy-user] arraysetops & string arrays Message-ID: <8C1A7051-36CA-4B8E-A1F0-0E4C6834BD7C@cs.toronto.edu> Hi there, I noticed that setdiff1d() fails on string arrays (numpy 1.0.3.1). There's nothing in the documentation that suggests it's for numeric dtypes only; should there be, or is this a bug? The problem appears to be the line tt = nm.concatenate( (zlike( ar1 ), zlike( ar2 ) + 1) ) where zlike() is returning an array of empty strings, and thus adding 1 is causing problems. I tried modifying it to cast to int32 without much success.
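A hedged sketch of one workaround for the dtype issue described above: keep the origin flags in an explicit integer dtype so the (possibly string-valued) inputs never enter the arithmetic. setdiff1d_str below is a hypothetical helper, not the numpy implementation:

```python
import numpy as np

def setdiff1d_str(ar1, ar2):
    """Unique values of ar1 that are not in ar2; works for string dtypes."""
    ar1 = np.unique(ar1)
    ar2 = np.unique(ar2)
    ar = np.concatenate((ar1, ar2))
    # int8 flags instead of zeros_like(ar1), which would inherit the dtype
    flags = np.concatenate((np.zeros(len(ar1), np.int8),
                            np.ones(len(ar2), np.int8)))
    order = ar.argsort(kind='mergesort')  # stable: the ar1 copy sorts first
    sar, sflags = ar[order], flags[order]
    # after unique(), an equal neighbour can only be the matching ar2 copy
    in_both = np.concatenate((sar[1:] == sar[:-1], [False]))
    return sar[(sflags == 0) & ~in_both]

result = setdiff1d_str(np.array(['a', 'b', 'c']), np.array(['b', 'd']))
```

The same flag trick works for numeric inputs, so nothing is lost by avoiding the zeros-shaped-like-the-input construction.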
David From erik.volz at gmail.com Wed Dec 19 19:32:12 2007 From: erik.volz at gmail.com (Erik Volz) Date: Wed, 19 Dec 2007 16:32:12 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication Message-ID: There are several aspects of scipy that are counterintuitive to typical users. Tranpose of 1D arrays is a big one. Another example is z+=z.T # where dimension z > 1 which does not do what a naive user would think it does (try it). I recently introduced very damaging bugs into my code by making both of these mistakes; it took a long time for me to track down the problem. Of course, the behavior is very logical from a developer's perspective, but the veterans of this list have been very unsympathetic. At a minimum, there should be warnings in the doc-strings about this behavior. Erik Volz From: Johann Cohen-Tanugi slac.stanford.edu> > Subject: Re: Mathematica Element-wise Multiplication > Newsgroups: gmane.comp.python.scientific.user > Date: 2007-12-17 07:49:38 GMT (2 days, 16 hours and 31 minutes ago) > > Matthieu Brucher wrote: > > > > > > 2007/12/17, Johann Cohen-Tanugi slac.stanford.edu > > slac.stanford.edu>>: > > > > thanks for these precisions, David. Reading it, I still come to think > > that it is a potential source of confusion to let a "row array" have a > > transpose or T method, that essentially does nothing. > > > > > > > > In object oriented code, this can happen often, but it is not a > > problem. It does what you want : inverse the axis, even if there is > > only one axis. > hmmm...... okay... What I wanted was to transpose a 1D array into a > vector, or vice-versa, with the linear algebra behavior in mind. I > understand that numpy does not follow this, but I cannot believe that > this behavior **is** what everybody wants! Tom's initial email was > symptomatic, and Stefan's response, with the proposal to use the T > method even more so! 
> > Assuming that this natural linear algebra could be retrieved when, and > **only** when, the array is 1D, I do not see how such an implementation > could break codes that depend on it, because I don't see why someone > would call 'a.T' just to have 'a' again.... But it is probably my lack > of imagination. > Anyway, enough of this. I am sure the developers know better than me... > > cheers, > Johann > > > > > > > I guess there is a > > reason behind this situation, but given the fact that it is there, > > I am > > wondering whether T or transpose of a row array could in fact return > > what it is expected to, aka the 2d column array. Is there any > > reason not > > to have this functionality? > > > > > > > > More code in a simple function (thus slower) ? Breaking some code that > > depend on it ? Not doing what the documentation says ? > > I think the method should not be changed, it does exactly what it says > > it does. Changing its behaviour because of Matlab is not a good > > solution, Python is not Matlab. > > > > Matthieu > > -- > > French PhD student > > Website : > http://matthieu-brucher.developpez.com/ > > Blogs : http://matt.eifelle.com and > > http://blog.developpez.com/?blog=92 > > LinkedIn : > http://www.linkedin.com/in/matthieubrucher > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Wed Dec 19 20:26:35 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 19:26:35 -0600 Subject: [SciPy-user] Mailing list was not sending out new mails for awhile. Message-ID: <4769C4CB.7080609@enthought.com> Hi all, The postfix service on the server hosting several of the mailing lists was down since Monday. 
Mails to the list were preserved and archived but were not being distributed to subscribers. We restarted the postfix service and messages should now be going out. Apologies for the problem. -Travis O. From oliphant at enthought.com Wed Dec 19 20:50:20 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 19:50:20 -0600 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: Message-ID: <4769CA5C.70205@enthought.com> Erik Volz wrote: > There are several aspects of scipy that are counterintuitive to > typical users. Tranpose of 1D arrays is a big one. Another example is > z+=z.T # where dimension z > 1 > which does not do what a naive user would think it does (try it). > > I recently introduced very damaging bugs into my code by making both > of these mistakes; it took a long time for me to track down the > problem. Of course, the behavior is very logical from a developer's > perspective, but the veterans of this list have been very > unsympathetic. At a minimum, there should be warnings in the > doc-strings about this behavior. Yes, absolutely, warnings should be there in the docstring. Your perspective is not illogical nor are many of us unsympathetic to it. But, following your rule would be an "exceptional" case to an otherwise easy to explain rule. Our approach thus far has been to educate users coming from other languages that there is such a thing as a 1D array that is neither a "row" nor a "column" vector and so in fact a transpose operation is not really what you meant (as it does nothing to a 1D array). Instead what you wanted to do was convert the 1-D array to a 2-D array with 1 row and then transpose that to a 2D array with 1 column. Doing this "automatically" for you is assuming a particular convention which we at least try not to do that often. The problems it causes, come down to introducing oddities in code that could otherwise work over arbitrary dimensions. 
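The two pitfalls from this thread can be made concrete. (Hedged note: NumPy releases well after this thread detect the in-place aliasing in pitfall 2 and copy automatically, so only the version-independent safe spellings are asserted here.)

```python
import numpy as np

# Pitfall 1: .T on a 1-D array is a no-op -- there is no row/column axis to swap.
a = np.arange(3)
assert a.T.shape == (3,)      # unchanged, not (3, 1)
col = a.reshape(-1, 1)        # an explicit 2-D column, one way to get what was meant
assert col.shape == (3, 1)

# Pitfall 2: z += z.T writes into z while z.T is a live view of the same memory.
z = np.array([[1.0, 2.0], [3.0, 4.0]])
sym = z + z.T                 # out-of-place: always the intended symmetric result
z_safe = z.copy()
z_safe += z.T.copy()          # the explicit copy breaks the aliasing in any version
assert np.array_equal(z_safe, sym)
```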
Finding the right balance between least-surprise and rule-exception headaches is not an easy one (nor will I ever claim that NumPy has made all the right choices). Because NumPy is community developed, there is always the possibility that the behavior will change. Of course, after something has come into use, it needs more of an argument than subjective points about a typical user (although I like hearing those kinds of suggestions when deciding what a *new* or *revamped* thing should do). Thanks for sharing your perspective. I'm very aware that for every piece of "negative" feedback (and yours really wasn't) there are 100 people who feel the same way but don't have the time to tell us. I'm very sorry to hear of the bugs you couldn't find because of this mis-communication in the .T attribute. Best regards, -Travis Oliphant From robert.kern at gmail.com Wed Dec 19 21:01:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Dec 2007 20:01:33 -0600 Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) In-Reply-To: References: Message-ID: <4769CCFD.4080301@gmail.com> William Henney wrote: > Hi list, > > I have been trying to install MayaVi2 on an intel Mac running 10.4.11, but I am > running into a brick wall. Can you join us over on enthought-dev? It's a more appropriate forum. https://mail.enthought.com/mailman/listinfo/enthought-dev > I have been trying to follow the instructions I have found in various threads on > this list and at https://svn.enthought.com/enthought/wiki/IntelMacPython25. I > have managed to install all the dependencies (vtk, pyrex, wxpython, etc) but I > just can't get the enthought.traits package to build. > > I have tried the versions from the SVN repo and also the tar balls from > http://code.enthought.com/downloads/source/ets2.6/ but I always get the error > when building enthought.traits: > > TypeError: swig_sources() takes exactly 3 arguments (2 given) > > An example of a full stack trace is given below. 
It seems that the problem has > something to do with setuptools, and possibly with pyrex. I was initially using > setuptools 0.6c7 and Pyrex 0.9.6.4, but after reading negative comments about > 0.9.6, I downgraded to 0.9.5.1a, although this makes no difference. Can you double-check that the Pyrex that you can import is the version that you think it is? One of the changes in 0.9.6 causes exactly this problem. >>> from Pyrex.Compiler import Version >>> Version.version '0.9.5.1' > I have also tried using the latest development version of setuptools > (0.7a1dev-r58661), but this doesn't work at all due to problems with "Namespace > Packages" (with 0.6 these just give a warning, as seen in the output below). The tarballs you grabbed are experimental ones for people trying to package ETS for Linux distributions. The people making them wanted to remove the namespace package features, but it appears that they did so incompletely. The tarballs that you want are here: http://code.enthought.com/enstaller/eggs/source/ Alternately, we have a script that can check out the SVN for a given ETS package and all of its dependencies. https://svn.enthought.com/svn/enthought/branches/ets_scripts_0.1 Install that like normal ("python setup.py install"). Then you will have a script called "etsco": $ cd ~/src $ etsco enthought.mayavi Initializing list of projects found in svn repositories. Processing repository: https://svn.enthought.com/svn/enthought Finished initializing svn repository projects. Checking out "enthought.util" source from "https://svn.enthought.com/svn/enthought/tags/enthought.util_2.0.1b2" A /Users/rkern/src/enthought.mayavi_2.0.3a1/enthought.util_2.0.1b2/LICENSE.txt A /Users/rkern/src/enthought.mayavi_2.0.3a1/enthought.util_2.0.1b2/extras.map .... A /Users/rkern/src/enthought.mayavi_2.0.3a1/enthought.persistence_2.1.0a1/setup.cfg U /Users/rkern/src/enthought.mayavi_2.0.3a1/enthought.persistence_2.1.0a1 Checked out revision 16353. 
All project source has been checked out to "/Users/rkern/src/enthought.mayavi_2.0.3a1" WARNING: YOUR CHECKOUT INCLUDES PROJECTS FROM SVN TAGS -- PLEASE BE CAREFUL WHEN CHECKING IN CHANGES AS TAGS SHOULD NOT BE CHANGED! You can svn copy a tag to a branch, then svn switch to that, and then check in your changes for a future release. Now, in enthought.mayavi_2.0.3a1/ you have all of the current SVN for enthought.mayavi and everything from ETS that it needs. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Thu Dec 20 03:56:27 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 20 Dec 2007 03:56:27 -0500 Subject: [SciPy-user] Sparse with fast element-wise multiply? In-Reply-To: References: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> <65C83323-C9E0-487A-8CFE-A5238B0A4C57@cs.toronto.edu> Message-ID: On 17-Dec-07, at 11:41 PM, Nathan Bell wrote: > Currently elementwise multiplication is exposed through A**B where A > and B are > csr_matrix or csc_matrix objects. You can expect similar > performance to A+B. Whoa, you're not kidding: In [19]: time multiply_coo(k1,k2) CPU times: user 0.77 s, sys: 0.08 s, total: 0.85 s Wall time: 0.86 Out[19]: <7083x7083 sparse matrix of type '' with 24226 stored elements in COOrdinate format> In [20]: time k1csr ** k2csr CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s Wall time: 0.02 Out[20]: <7083x7083 sparse matrix of type '' with 24226 stored elements in Compressed Sparse Row format> Actually it's about 5 times FASTER than adding the two of them. Probably because in the latter it's the union of the elements that is the result, rather than the (typically sparser) intersection. > I don't know why ** was chosen, it was that way before I started > working on > scipy.sparse. 
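For concreteness, the element-wise product under discussion in this thread, written with the .multiply() method (the spelling later SciPy releases settled on once ** was repurposed to mean matrix power, matching dense matrices):

```python
import numpy as np
from scipy import sparse

a = sparse.csr_matrix(np.array([[1.0, 0.0, 2.0],
                                [0.0, 3.0, 0.0]]))
b = sparse.csr_matrix(np.array([[2.0, 0.0, 0.0],
                                [0.0, 4.0, 5.0]]))

# element-wise (Hadamard) product: only the intersection of the two
# sparsity patterns survives, which is why it can beat A + B on speed
c = a.multiply(b)
assert np.array_equal(c.toarray(), [[2.0, 0.0, 0.0],
                                    [0.0, 12.0, 0.0]])
```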
It seems sensible enough to me; I don't know how often I've had to exponentiate a matrix (much less a sparse one) and if you want to do any serious exponentiation it's usually cheaper (and I'd expect more numerically stable) to diagonalize it once & exponentiate the eigenvalues. Is this ** behaviour documented anywhere? > I've added a .multiply() method to the sparse matrix base class > that goes through csr_matrix: > http://projects.scipy.org/scipy/scipy/changeset/3682 Muchos gracias. DWF From wnbell at gmail.com Thu Dec 20 04:21:21 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 20 Dec 2007 03:21:21 -0600 Subject: [SciPy-user] Sparse with fast element-wise multiply? In-Reply-To: References: <27261D40-C6F1-402B-AFE3-D6B333CDA4FB@cs.toronto.edu> <80c99e790712161103j4f5ae3ecj4b53d6100403dea8@mail.gmail.com> <65C83323-C9E0-487A-8CFE-A5238B0A4C57@cs.toronto.edu> Message-ID: On Dec 20, 2007 2:56 AM, David Warde-Farley wrote: > Is this ** behaviour documented anywhere? I don't believe so. In any case, I've made ** implement exponentiation as it does for dense matrices. You'll get a deprecation warning for (sparse ** sparse) in SciPy 0.7 telling you to use sparse.multiply(sparse) instead. -- Nathan Bell wnbell at gmail.com From Alexander.Dietz at astro.cf.ac.uk Thu Dec 20 04:58:01 2007 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Thu, 20 Dec 2007 09:58:01 +0000 Subject: [SciPy-user] Questions to optimize.leastsq Message-ID: <9cf809a00712200158x1d7348b0hc5a79eeaec2a6878@mail.gmail.com> Hi, I am using optimize.leastsq for the first time, and I seem to have some trouble with it. I am following the example given on the scipy webpage to fit a either 3 or 4 parameter function. The fit itself works fine, and I get reasonable results. My problem is to extract the errors on my 3 or 4 fitting parameters. 
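A hedged sketch of pulling parameter errors out of leastsq on a toy model, relevant to the question in this thread: cov_x returned with full_output is only a relative covariance, so it must be scaled by the residual variance before sqrt(diag(...)) gives error bars, and a near-singular Jacobian (for example redundant parameters) is a common cause of absurdly large entries.

```python
import numpy as np
from scipy import optimize

# toy data: 2-parameter exponential decay plus noise (made-up example model)
rng = np.random.RandomState(2)
xdata = np.linspace(0.0, 4.0, 50)
ydata = 2.5 * np.exp(-1.3 * xdata) + 0.05 * rng.randn(len(xdata))

def errfunc(p, x, y):
    return p[0] * np.exp(-p[1] * x) - y

p, cov_x, info, msg, ier = optimize.leastsq(
    errfunc, [1.0, 1.0], args=(xdata, ydata), full_output=True)

# scale the relative covariance by the residual variance before taking errors
dof = len(xdata) - len(p)
s_sq = (errfunc(p, xdata, ydata) ** 2).sum() / dof
sigma = np.sqrt(np.diag(cov_x * s_sq))
```

With a well-conditioned model like this, sigma comes out a few percent of the parameter values; error bars a million times the parameters usually signal that the data barely constrain those parameters at all.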
Here is what I do exactly: q,cov,w,e, success = optimize.leastsq(errfunc, p[:], args = (xData, yData), full_output=1 ) if success==1: sigma = sqrt(diag(cov)) For the errors on my fitting parameters 'q' I take the square-root in the diagonal elements of the covariance matrix. For one or two parameters I get reasonable errors (i.e. the relative error is in the range of 30%), but for other parameters the errors are a million times larger than the parameter itself! Example: bestfit parameter 0: -1.743351 +- 3226113.032304 bestfit parameter 1: 0.778228 +- 0.145904 bestfit parameter 2: 4.261635 +- 48185218.007348 bestfit parameter 3: 21183.401658 +- 23799.387162 I checked the function I am fitting and when varying one of these parameters everything looks 'ok' (i.e. no sudden change of the values in the order of millions or so). I my sense, these errors (except parameter 1) do not make sense. So how can I fit a function to data and obtain reasonable errors? Maybe I need to set one of the extra options in leastsq? I would appreciate any help in this. Cheers Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Dec 20 05:31:41 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 20 Dec 2007 11:31:41 +0100 Subject: [SciPy-user] arraysetops & string arrays In-Reply-To: <8C1A7051-36CA-4B8E-A1F0-0E4C6834BD7C@cs.toronto.edu> References: <8C1A7051-36CA-4B8E-A1F0-0E4C6834BD7C@cs.toronto.edu> Message-ID: <476A448D.3040206@ntc.zcu.cz> Hi David! David Warde-Farley wrote: > Hi there, > > I noticed that setdiff1d() fails on string arrays. (numpy 1.0.3.1). > There's nothing in the documentation that suggests it's for numeric > dtypes only; should there be, or is this a bug? > > The problem appears to be the line > > tt = nm.concatenate( (zlike( ar1 ), zlike( ar2 ) + 1) ) > > where zlike() is returning an array of empty strings, and thus adding > 1 is causing problems. 
I tried modifying it to cast to int32 without > much success. Fixed in SVN. It was more a missing feature than a bug - I did not realized that it should work for non-numerical arrays, too. If you do not use SVN, I can mail you the fix, just tell me. If other functions have the same problems, let me know, please. r. From dmitrey.kroshko at scipy.org Thu Dec 20 06:30:58 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 20 Dec 2007 13:30:58 +0200 Subject: [SciPy-user] Could anyone send me copy Python Magazine 2007/11? Message-ID: <476A5272.2060100@scipy.org> hi all, could anyone send me copy of the journal Python Magazine 2007/11 (I mean zipped file) mentioned in scipy.org main page? Or provide an url where it could be downloaded. There is only 2007/10 available for free download. Thank you in advance, D. From matthieu.brucher at gmail.com Thu Dec 20 06:57:18 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 20 Dec 2007 12:57:18 +0100 Subject: [SciPy-user] Could anyone send me copy Python Magazine 2007/11? In-Reply-To: <476A5272.2060100@scipy.org> References: <476A5272.2060100@scipy.org> Message-ID: Hi, The issue is password-protected and the mail of the subscriber is included inside the pdf, so I don't think you will get one. Sorry... Matthieu 2007/12/20, dmitrey : > > hi all, > could anyone send me copy of the journal Python Magazine 2007/11 (I mean > zipped file) mentioned in scipy.org main page? Or provide an url where > it could be downloaded. > There is only 2007/10 available for free download. > > Thank you in advance, D. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Thu Dec 20 07:01:01 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 20 Dec 2007 14:01:01 +0200 Subject: [SciPy-user] Could anyone send me copy Python Magazine 2007/11? In-Reply-To: References: <476A5272.2060100@scipy.org> Message-ID: <476A597D.3080709@scipy.org> Hi Matthieu, thank you for the tip, I'll wait till free version will be available. Regards, D. Matthieu Brucher wrote: > Hi, > > The issue is password-protected and the mail of the subscriber is > included inside the pdf, so I don't think you will get one. Sorry... > > Matthieu > > 2007/12/20, dmitrey < dmitrey.kroshko at scipy.org > >: > > hi all, > could anyone send me copy of the journal Python Magazine 2007/11 > (I mean > zipped file) mentioned in scipy.org main page? > Or provide an url where > it could be downloaded. > There is only 2007/10 available for free download. > > Thank you in advance, D. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pgmdevlist at gmail.com Thu Dec 20 11:37:34 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 20 Dec 2007 11:37:34 -0500 Subject: [SciPy-user] flagging no data in timeseries In-Reply-To: References: Message-ID: <200712201137.35658.pgmdevlist@gmail.com> Hello, > Since I am still very new to the package I need some hints. > In my original data nodata values are marked with "-999". > How can I import the data or create the time series and exclude these no > data points from further processing? The following solutions should be equivalent: * use masked_where from maskedarray >>>myvalues_ts_hourly = masked_where(myvalues_ts_hourly , -999) * Use indexing >>>myvalues_ts_hourly[myvalues_ts_hourly==-999] = M.masked From pgmdevlist at gmail.com Thu Dec 20 11:43:27 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 20 Dec 2007 11:43:27 -0500 Subject: [SciPy-user] get result array after timeseries operations In-Reply-To: References: Message-ID: <200712201143.28092.pgmdevlist@gmail.com> On Tuesday 18 December 2007 16:09:05 Tim Michelsen wrote: > Hi, > this is meant as a continuation to > For instance, if I convert data in an hourly frequency to daily averages > how to I read the daily averages into a array for further processing? * possibility #1: use the keyword func while converting. 
avgdata = convert(data,'D', func=maskedarray.mean) * possibility #2: If you don't use the keyword func, you end up with a 2d array, each row being a day, each column an hour. Just use maskedarray.mean on each row avgdata = convert(data,'D').mean(-1) If you only want the values, use the .series attribute, it will give you a view of the array as a MaskedArray. > What I would like is an array with just the values of the daily averages . > Additional a report-like array output with the format > day value > 1 3 > 2 11 > > would be nice. Mmh, contact me offlist, I have no idea what you're trying to do here. > Thanks again for the ice package. Things are looking nice! thanks a lot for using it, that's the best way to improve it ! From fie.pye at atlas.cz Thu Dec 20 13:12:46 2007 From: fie.pye at atlas.cz (Pye Fie) Date: Thu, 20 Dec 2007 18:12:46 GMT Subject: [SciPy-user] MayaVi 2 instalation Message-ID: <2ca156948383499cb604e64687b32ea0@db3381b8eaa54a2faa19d0073d9697e1> Hello. I have got : PC: 2x Dual-core Opteron 275 OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 I would like to prepare a computational environment based on Python and Python modules. I have already compiled and installed Python2.5.1 and modules such as Ipython, SciPy, NumPy, tables, mpi4py, matplotlib, wxpython and so on, libraries such as BLAS, LAPACK, ATLAS, HDF5, MPICH, VTK. Now I would like to continue with compilation and installation of MayaVi 2. According to instructions in https://svn.enthought.com/enthought/wiki/GrabbingAndBuilding I should install: Setuptools Traits TVTK MayaVi 2 As I understand the instructions in Setuptools README, Setuptools will install a new Python and new modules. But I don't need a new Python installation. I have one in /usr/local/python/python2.5.1 directory. I studied SciPy ML discussion http://projects.scipy.org/pipermail/scipy-user/2007-November/014374.html and downloaded files from http://code.enthought.com/downloads/source/ets2.6/ with intention that it helps. 
Unfortunately, it also requires Setuptools. 1. Is there a solution to my problem? or 2. Am I wrong in understanding the installation instructions? Could anybody help me? Best regards. fie pye ------------------------------------------ http://search.atlas.cz/ From robert.kern at gmail.com Thu Dec 20 16:25:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Dec 2007 15:25:47 -0600 Subject: [SciPy-user] MayaVi 2 installation In-Reply-To: <2ca156948383499cb604e64687b32ea0@db3381b8eaa54a2faa19d0073d9697e1> References: <2ca156948383499cb604e64687b32ea0@db3381b8eaa54a2faa19d0073d9697e1> Message-ID: <476ADDDB.30100@gmail.com> Pye Fie wrote: > Hello. > > I have got : > PC: 2x Dual-core Opteron 275 > OS: CentOS 5.0, kernel 2.6.18-8.1.15.el5 > > I would like to prepare a computational environment based on Python and Python modules. I have already compiled and installed Python2.5.1 and modules such as Ipython, SciPy, NumPy, tables, mpi4py, matplotlib, wxpython and so on, libraries such as BLAS, LAPACK, ATLAS, HDF5, MPICH, VTK. Now I would like to continue with compilation and installation of MayaVi 2. Hi, you will want to join us on enthought-dev rather than scipy-user. https://mail.enthought.com/mailman/listinfo/enthought-dev > According to instructions in > https://svn.enthought.com/enthought/wiki/GrabbingAndBuilding > I should install: > Setuptools > Traits > TVTK > MayaVi 2 > > As I understand the instructions in Setuptools README, Setuptools will install a new Python and new modules. This is incorrect. Just follow the instructions here: http://pypi.python.org/pypi/setuptools -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From dwf at cs.toronto.edu Thu Dec 20 17:15:47 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 20 Dec 2007 17:15:47 -0500 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <4769CA5C.70205@enthought.com> References: <4769CA5C.70205@enthought.com> Message-ID: <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> On 19-Dec-07, at 8:50 PM, Travis E. Oliphant wrote: > Yes, absolutely, warnings should be there in the docstring. Your > perspective is not illogical nor are many of us unsympathetic to it. > But, following your rule would be an "exceptional" case to an > otherwise easy to explain rule. > > Our approach thus far has been to educate users coming from other > languages that there is such a thing as a 1D array that is neither a > "row" nor a "column" vector and so in fact a transpose operation is > not > really what you meant (as it does nothing to a 1D array). Instead > what > you wanted to do was convert the 1-D array to a 2-D array with 1 > row and > then transpose that to a 2D array with 1 column. Would it be totally crazy to throw an exception if you try to access .T on a 1D array? Since it's not meaningful, I don't see a compelling reason why it should need to succeed. Also, would more... aggressively encouraging use of the "matrix" type help alleviate the migration pains? David From matthieu.brucher at gmail.com Thu Dec 20 17:21:56 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 20 Dec 2007 23:21:56 +0100 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> References: <4769CA5C.70205@enthought.com> <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> Message-ID: > > Would it be totally crazy to throw an exception if you try to > access .T on a 1D array? Since it's not meaningful, I don't see a > compelling reason why it should need to succeed. > A really bad idea, I think. 
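For readers following the thread, the behaviour in question takes only a few lines to verify; a minimal sketch in plain numpy:

```python
import numpy as np

a = np.ones(3)               # 1-D array: neither a row nor a column vector
assert a.T.shape == (3,)     # .T is a no-op on a 1-D array, and raises no error
col = np.atleast_2d(a).T     # explicit conversion to a 2-D column vector
assert col.shape == (3, 1)
```

The no-op on 1-D input is what lets the same expression run unchanged on arrays of any rank; an explicit np.atleast_2d (or a reshape) is the way to ask for a column vector.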
There are some cases where you might want to do this in generic code, so this will only be slower and cumbersome for people that want to do things correctly. It IS meaningful. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Thu Dec 20 17:39:30 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 20 Dec 2007 17:39:30 -0500 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: <4769CA5C.70205@enthought.com> <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> Message-ID: On 20-Dec-07, at 5:21 PM, Matthieu Brucher wrote: >> >> Would it be totally crazy to throw an exception if you try to >> access .T on a 1D array? Since it's not meaningful, I don't see a >> compelling reason why it should need to succeed. >> I can't think of a situation where code would need to be generic enough to need to acceptably call what's basically a no-op on a 1D array. But you might be right. Still, I think it should be discouraged somehow, for the benefit of new users; perhaps a warn()? David From dwf at cs.toronto.edu Thu Dec 20 17:40:48 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 20 Dec 2007 17:40:48 -0500 Subject: [SciPy-user] arraysetops & string arrays In-Reply-To: <476A448D.3040206@ntc.zcu.cz> References: <8C1A7051-36CA-4B8E-A1F0-0E4C6834BD7C@cs.toronto.edu> <476A448D.3040206@ntc.zcu.cz> Message-ID: <2300E41E-DBF2-485D-9159-EC602A6BCC22@cs.toronto.edu> On 20-Dec-07, at 5:31 AM, Robert Cimrman wrote: > > Fixed in SVN. It was more a missing feature than a bug - I did not > realized that it should work for non-numerical arrays, too. > > If you do not use SVN, I can mail you the fix, just tell me. Wonderful! 
I haven't been using the bleeding edge SVN version of numpy yet, but I can grab just this file, no need to mail it. Thanks very much! David From matthieu.brucher at gmail.com Fri Dec 21 01:56:13 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 21 Dec 2007 07:56:13 +0100 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: <4769CA5C.70205@enthought.com> <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> Message-ID: > > I can't think of a situation where code would need to be generic > enough to need to acceptably call what's basically a no-op on a 1D > array. But you might be right. Still, I think it should be > discouraged somehow, for the benefit of new users; perhaps a warn()? > This remains the same: why do you want to change something that is correct to something that is not? Why do you want to put a warning where people want to write generic code? I do a lot of generic code, and the fact that .T does nothing for a 1D array and does not put a warning is the behaviour I expect from Numpy. The behaviour in Matlab is not the correct one; I can't count the number of times I had to explain to a student why what he does does not give the expected result. Python is about making things simple and clear; let's just not do the opposite in Numpy. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cohen at slac.stanford.edu Fri Dec 21 12:07:30 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 21 Dec 2007 09:07:30 -0800 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: <4769CA5C.70205@enthought.com> <7047671E-184F-4160-9562-027C68C25948@cs.toronto.edu> Message-ID: <476BF2D2.8050406@slac.stanford.edu> IMHO I think that we are reaching noise level on this thread.... Erik and Travis's option to put a warning in the doc string seems reasonable, easy, and efficient to me. numpy.array can but is not meant to do LinAlg matrix computation, acknowledged! That should be CAPITALIZED in tutorials, docstrings, whatever, and that should be enough. cheers, Johann Matthieu Brucher wrote: > > I can't think of a situation where code would need to be generic > enough to need to acceptably call what's basically a no-op on a 1D > array. But you might be right. Still, I think it should be > discouraged somehow, for the benefit of new users; perhaps a warn()? > > > This remains the same : why do you want to change something that is > correct to something that is not ? Why do you want to put a warning > when people want to do generic code ? > I do a lot of generic code, and the fact that .T does nothing for a 1D > array and does not put a warning is the behaviour I expect from Numpy. > The behaviour in Matlab is not the correct one, I can't count the > number of times I had to explain to a student why what he does does > not give the expected result. > Python is about making things simple and clear, let's just not do the > opposite in Numpy.
> > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Fri Dec 21 12:34:00 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 21 Dec 2007 12:34:00 -0500 Subject: [SciPy-user] Mathematica Element-wise Multiplication In-Reply-To: References: Message-ID: On Wed, 19 Dec 2007, Erik Volz apparently wrote: > z+=z.T # where dimension z > 1 > which does not do what a naive user would think it does (try it). You mean like this? :: >>> x=N.ones( (2,2) ) >>> x array([[ 1., 1.], [ 1., 1.]]) >>> x += x.T >>> x array([[ 2., 2.], [ 3., 2.]]) That just shows you that the in-place updating takes place in row major order. You have to have something like this or create temporary arrays, but not creating temporary arrays is a core value of the in-place operators. If you do not like them, do not use them. Or force the creation of a temporary array on the RHS:: >>> x=N.ones( (2,2) ) >>> x += x.T + 0 >>> x array([[ 2., 2.], [ 2., 2.]]) Cheers, Alan Isaac From nikolaskaralis at gmail.com Sun Dec 23 11:37:03 2007 From: nikolaskaralis at gmail.com (Nikolas Karalis) Date: Sun, 23 Dec 2007 18:37:03 +0200 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> Message-ID: <85b2e0230712230837h2356c230h431af5f9c283fbdf@mail.gmail.com> Hello again. I return with another question regarding characteristic polynomials. I am computing the characteristic polynomials for matrices. 
And while (for millions of them) i get the right result, i found some cases, where i get the "wrong". Let me give you the examples, and then i will explain. >>> A=matrix([[290, 324, 323, 364, 340, 341, 365, 336, 342, 326], [324, 290, 322, 338, 366, 341, 365, 336, 344, 324], [323, 322, 286, 337, 337, 366, 364, 361, 320, 323], [345, 327, 326, 315, 343, 344, 342, 363, 343, 312], [329, 347, 325, 343, 319, 345, 343, 364, 327, 327], [329, 329, 347, 344, 345, 323, 344, 342, 348, 328], [347, 347, 346, 342, 343, 344, 321, 341, 326, 311], [325, 325, 343, 363, 364, 342, 341, 313, 324, 309], [342, 344, 320, 362, 338, 366, 338, 335, 286, 307], [340, 339, 338, 330, 361, 362, 333, 329, 316, 265]]) >>> B=matrix([[290, 324, 336, 365, 341, 340, 364, 323, 342, 326], [324, 290, 336, 339, 367, 340, 364, 322, 344, 324], [325, 325, 313, 341, 342, 364, 363, 343, 324, 309], [346, 328, 341, 319, 346, 343, 342, 347, 345, 312], [330, 348, 342, 346, 321, 345, 344, 346, 329, 327], [328, 328, 364, 343, 345, 321, 341, 326, 346, 328], [346, 346, 363, 342, 344, 341, 317, 325, 324, 311], [323, 322, 361, 365, 365, 338, 336, 286, 320, 323], [342, 344, 335, 364, 340, 364, 336, 320, 286, 307], [340, 339, 329, 332, 363, 360, 331, 338, 316, 265]]) >>> poly(A) array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, -1.55679754e+08, -1.10343405e+10, -4.15092661e+11, -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, -2.21111889e+00, -2.95926167e-13]) >>> poly(B) array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, -1.55679754e+08, -1.10343405e+10, -4.15092661e+11, -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, 4.59224482e+00, 3.39308585e-14]) As you can see, the 2 polynomials are the same (up to the last 2 terms). The last term is considered to be "almost" 0, so we can see that the difference is the coefficient of x^1. 
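For scale: on a small, well-conditioned matrix, numpy's poly does reproduce the exact integer coefficients to within rounding. A minimal check (the values here are chosen purely for illustration):

```python
import numpy as np

# The characteristic polynomial of diag(2, 3) is x^2 - 5x + 6.
p = np.poly(np.array([[2.0, 0.0], [0.0, 3.0]]))
assert np.allclose(p, [1.0, -5.0, 6.0])
```

The trouble in the 10x10 case is that the coefficients span roughly 14 orders of magnitude, so the small ones sit below the rounding error of the large ones.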
If we compute the exact same thing with Maple and Sage, we get (for both matrices) : *x^10 - 3008*x^9 - 1106126*x^8 - 155679754*x^7 - 11034340454*x^6 - 415092661064*x^5 - 7895072675601*x^4 - 65163102265268*x^3 - 180794492489124*x^2* so, it is the same, since it doesn't "compute" the x^1 term. This also happens for a few other matrices... Can anybody help me with this? What is the right answer for the char. poly (i guess it is the Sage's and Maple's one, since they agree). Why does this difference occur? Is it "sage" to ignore the last 2 terms? Thank you in advance. Nikolas On Dec 11, 2007 10:05 PM, Fernando Perez wrote: > On Dec 11, 2007 9:45 AM, Matthieu Brucher > wrote: > > You can't expect scipy to find the exact coefficients. sage is a > symbolic > > package, it will find the correct answers, but scipy will find only an > > approximated one, up to the machine precision. This is what you see in > your > > exemple. > > If you have integers, you could expect scipy to return long integers > (exact > > result), but this is not the case as almost everything is converted into > a > > float array before the actual C or Fortran routine is run. > > In this case Sage isn't using anything symbolic though: the issue is > that Sage has from the ground up defined a type system where it knows > what field its inputs are being computed over. So if a matrix is made > up of only integers, it knows that computations are to be performed in > exact arithmetic over Z, and does so accordingly. > > Sage's origins are actually number theory, not symbolic computing, and > its strongest area is precisely in exact arithmetic, with enormous > sophistication for some of its algorithms. 
Initially its (free) > symbolic capabilities were all obtained via calls to Maxima, though > now that Ondrej is actively helping with SymPy integration into Sage, > that is changing and SymPy is natively available as well, which means > that over time there will be more and more native (python) symbolic > support as well. > > I'd agree though that for this kind of calculation, Sage is the tool to > use. > > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Nikolas Karalis Applied Mathematics and Physics Undergraduate National Technical University of Athens, Greece http://users.ntua.gr/ge04042 -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sun Dec 23 13:53:09 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 23 Dec 2007 13:53:09 -0500 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: <85b2e0230712230837h2356c230h431af5f9c283fbdf@mail.gmail.com> References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> <85b2e0230712230837h2356c230h431af5f9c283fbdf@mail.gmail.com> Message-ID: On 23/12/2007, Nikolas Karalis wrote: > >>> A=matrix([[290, 324, 323, 364, 340, 341, 365, 336, 342, 326], > [324, 290, 322, 338, 366, 341, 365, 336, 344, 324], > [323, 322, 286, 337, 337, 366, 364, 361, 320, 323], > [345, 327, 326, 315, 343, 344, 342, 363, 343, 312], > [329, 347, 325, 343, 319, 345, 343, 364, 327, 327], > [329, 329, 347, 344, 345, 323, 344, 342, 348, 328], > [347, 347, 346, 342, 343, 344, 321, 341, 326, 311], > [325, 325, 343, 363, 364, 342, 341, 313, 324, 309], > [342, 344, 320, 362, 338, 366, 338, 335, 286, 307], > [340, 339, 338, 330, 361, 362, 333, 329, 316, 265]]) > > >>> B=matrix([[290, 324, 336, 365, 341, 340, 364, 323, 342, 326], > [324, 290, 
336, 339, 367, 340, 364, 322, 344, 324], > [325, 325, 313, 341, 342, 364, 363, 343, 324, 309], > [346, 328, 341, 319, 346, 343, 342, 347, 345, 312], > [330, 348, 342, 346, 321, 345, 344, 346, 329, 327], > [328, 328, 364, 343, 345, 321, 341, 326, 346, 328], > [346, 346, 363, 342, 344, 341, 317, 325, 324, 311], > [323, 322, 361, 365, 365, 338, 336, 286, 320, 323], > [342, 344, 335, 364, 340, 364, 336, 320, 286, 307], > [340, 339, 329, 332, 363, 360, 331, 338, 316, 265]]) > > >>> poly(A) > array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, > -1.55679754e+08, -1.10343405e+10, -4.15092661e+11, > -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, > -2.21111889e+00, -2.95926167e-13]) > > >>> poly(B) > array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, > -1.55679754e+08 , -1.10343405e+10, -4.15092661e+11, > -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, > 4.59224482e+00, 3.39308585e-14]) > > As you can see, the 2 polynomials are the same (up to the last 2 terms). The > last term is considered to be "almost" 0, so we can see that the difference > is the coefficient of x^1. > > If we compute the exact same thing with Maple and Sage, we get (for both > matrices) : > x^10 - 3008*x^9 - 1106126*x^8 - 155679754*x^7 - 11034340454*x^6 - > 415092661064*x^5 - 7895072675601*x^4 - 65163102265268*x^3 - > 180794492489124*x^2 > > so, it is the same, since it doesn't "compute" the x^1 term. > > This also happens for a few other matrices... > > Can anybody help me with this? What is the right answer for the char. poly > (i guess it is the Sage's and Maple's one, since they agree). > Why does this difference occur? Is it "sage" to ignore the last 2 terms? When you do all but the simplest matrix computations with numpy, it treats the matrices as containing floating-point numbers. Since floating-point numbers are as a rule only approximations to the exact value you would like ot be working with, the answers you get out are only approximations. 
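The loss of precision described here can be seen with two lines of plain Python, no numpy required; a minimal sketch in double-precision floats:

```python
big = 2.0 ** 53                    # above this, consecutive doubles are 2 apart
assert (big + 1.0) - big == 0.0    # the added 1.0 is rounded away entirely
assert (big + 2.0) - big == 2.0    # a term of 2.0 still survives at this scale
```

This is exactly why a coefficient near 2 sitting next to one near 1.8e14 carries essentially no information.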
If you look at the coefficients you're getting from numpy's characteristic polynomial calculation, you will see that the coefficient of x^1 is about 10^-14 times the coefficient of x^2. A good rule of thumb is that you can consider as almost zero any number that is many orders of magnitude less than the biggest number in your answer. So 4, or 2, or whatever, is "close to zero" in this context. More specifically, when you compute a small number as the difference of two very large numbers, that number has far fewer digits of accuracy than the original two numbers. Many matrix computations, and in particular eigenvalue and characteristic polynomial calculations, involve just this kind of delicate cancellation, often many times over. So you have to expect errors to accumulate. Some problems, like finding the roots of a polynomial whose coefficients you know, are just inherently "unstable", that is, if you change the coefficients by one in the last significant figure, the roots move around wildly. These problems are basically hopeless, and you will need to reformulate your problem in a way that doesn't require you to calculate the roots. Matrix inversion can be like this, for example, so rather than solve Ax=b as x=A^-1 b, you should use a specialized solver (of which there are many). For your problem, the answer depends on what you are doing with the characteristic polynomials. If you are (for example) trying to find all the eigenvalues, use numpy's eigenvalue solvers directly. If you only want certain coefficients - the x^0 and x^(n-1) are the easiest - think about whether there's a better formula (e.g. determinant or trace) for them. Your case is a bit special: your input numbers are not floating-point. They are integers, and (I assume) they represent the input values *exactly*. Thus in principle it should be possible to compute the characteristic polynomial exactly. Unfortunately, using floating-point is almost certainly a bad way to do this.
(MAPLE might allow you to use floating-point with (say) several hundred decimal places, which might result in very nearly integral coefficients, but this is going to be *slow*.) Instead, you should use a special-purpose integer linear algebra routine. These integer linear algebra routines are specialized, computationally expensive, and not part of numpy. They are part of SAGE and MAPLE. More generally, this kind of exact computation belongs to the domain of "symbolic computation", which is not something numpy is for. In summary: use SAGE, or think about your problem more. Anne From robert.kern at gmail.com Sun Dec 23 13:42:30 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Dec 2007 12:42:30 -0600 Subject: [SciPy-user] Characteristic polynomial In-Reply-To: <85b2e0230712230837h2356c230h431af5f9c283fbdf@mail.gmail.com> References: <85b2e0230712110434x3b71be92sa2607f75886410ed@mail.gmail.com> <475EAEC1.4020501@enthought.com> <85b2e0230712110838n19e0e229m73f80d1a61fd3cc3@mail.gmail.com> <85b2e0230712230837h2356c230h431af5f9c283fbdf@mail.gmail.com> Message-ID: <476EAC16.1040908@gmail.com> Nikolas Karalis wrote: > Hello again. I return with another question regarding characteristic > polynomials. > > I am computing the characteristic polynomials for matrices. And while > (for millions of them) i get the right result, i found some cases, where > i get the "wrong". > Let me give you the examples, and then i will explain. 
> >>>> A=matrix([[290, 324, 323, 364, 340, 341, 365, 336, 342, 326], > [324, 290, 322, 338, 366, 341, 365, 336, 344, 324], > [323, 322, 286, 337, 337, 366, 364, 361, 320, 323], > [345, 327, 326, 315, 343, 344, 342, 363, 343, 312], > [329, 347, 325, 343, 319, 345, 343, 364, 327, 327], > [329, 329, 347, 344, 345, 323, 344, 342, 348, 328], > [347, 347, 346, 342, 343, 344, 321, 341, 326, 311], > [325, 325, 343, 363, 364, 342, 341, 313, 324, 309], > [342, 344, 320, 362, 338, 366, 338, 335, 286, 307], > [340, 339, 338, 330, 361, 362, 333, 329, 316, 265]]) > >>>> B=matrix([[290, 324, 336, 365, 341, 340, 364, 323, 342, 326], > [324, 290, 336, 339, 367, 340, 364, 322, 344, 324], > [325, 325, 313, 341, 342, 364, 363, 343, 324, 309], > [346, 328, 341, 319, 346, 343, 342, 347, 345, 312], > [330, 348, 342, 346, 321, 345, 344, 346, 329, 327], > [328, 328, 364, 343, 345, 321, 341, 326, 346, 328], > [346, 346, 363, 342, 344, 341, 317, 325, 324, 311], > [323, 322, 361, 365, 365, 338, 336, 286, 320, 323], > [342, 344, 335, 364, 340, 364, 336, 320, 286, 307], > [340, 339, 329, 332, 363, 360, 331, 338, 316, 265]]) > >>>> poly(A) > array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, > -1.55679754e+08, -1.10343405e+10, -4.15092661e+11, > -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, > -2.21111889e+00, -2.95926167e-13]) > >>>> poly(B) > array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, > -1.55679754e+08 , -1.10343405e+10, -4.15092661e+11, > -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, > 4.59224482e+00, 3.39308585e-14]) > > As you can see, the 2 polynomials are the same (up to the last 2 terms). > The last term is considered to be "almost" 0, so we can see that the > difference is the coefficient of x^1. 
> > If we compute the exact same thing with Maple and Sage, we get (for both > matrices) : > > *x^10 - 3008*x^9 - 1106126*x^8 - 155679754*x^7 - 11034340454*x^6 - 415092661064*x^5 - 7895072675601*x^4 - 65163102265268*x^3 - 180794492489124*x^2 > * > > so, it is the same, since it doesn't "compute" the x^1 term. > > This also happens for a few other matrices... > > Can anybody help me with this? What is the right answer for the char. > poly (i guess it is the Sage's and Maple's one, since they agree). Yes, also because they are using exact arithmetic rather than floating point. > Why does this difference occur? Is it "sage" to ignore the last 2 terms? Remember that floating point errors are usually relative, not absolute. A value of 1e-14 is not always "almost" 0, and a value of 1e0 can sometimes be "almost" 0 when the other values are around 1e+14. It should make you suspicious that the exponent steadily increases then suddenly drops to 0. You can do a little bit of cleanup on the floating-point results by looking at the eigenvalues directly. In [14]: ev = linalg.eigvals(A) In [15]: ev Out[15]: array([ 3.35212801e+03 +0.00000000e+00j, -9.14584932e+01 +0.00000000e+00j, -7.67326852e+01 +0.00000000e+00j, -6.72710568e+01 +0.00000000e+00j, -5.92400278e+01 +0.00000000e+00j, -3.36474895e+01 +0.00000000e+00j, -1.01081162e+01 +0.00000000e+00j, -5.67013979e+00 +0.00000000e+00j, 9.77724356e-15 +8.22960451e-15j, 9.77724356e-15 -8.22960451e-15j]) You can see that the last two terms are "almost" 0 in the relative sense. 
Therefore, you can set them to zero before you construct the polynomial: In [24]: aev = absolute(ev) In [25]: ev[(aev / aev.max()) < 1e-16] = 0.0 In [26]: poly(real_if_close(ev)) Out[26]: array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06, -1.55679754e+08, -1.10343405e+10, -4.15092661e+11, -7.89507268e+12, -6.51631023e+13, -1.80794492e+14, 0.00000000e+00, 0.00000000e+00]) But if you want really reliable results for largish integer matrices, you'll want to use SAGE. Your matrices are large enough that the floating point error will be significant even if there are no "almost" 0s to clean up. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ac1201 at gmail.com Sun Dec 23 19:42:21 2007 From: ac1201 at gmail.com (Andrew Charles) Date: Mon, 24 Dec 2007 11:42:21 +1100 Subject: [SciPy-user] reading data into a RecordArray Message-ID: I'm relatively new to python, and new to this list. Hello everyone - I really like what you're doing with scipy. My question relates to code I'm writing to convert my ASCII data to netCDF. I'm trying to: 1. Read in the data file 2. Read in a file containing the column headers 3. Merge the two into a record array I've read the overview (http://www.scipy.org/RecordArrays), and gone back over the introductory material on lists and tuples, but I've stumped myself. The code from scipy import * colfile = open("my_column_spec.txt") columns=[] cols = colfile.readlines() for col in cols: columns = columns + [(col,'f4')] ifile = open("my_ascii_data.txt") data = io.array_import.read_array(ifile) data = array(data,columns) fails on the last line with the error "expected a readable buffer object". In case it's relevant, dt.shape is (1984,22) and len(cols) is 22. Any pointers as to what I'm doing wrong? 
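One way to attempt the merge described in the question is to build a structured dtype from the column names and view the 2-D float array through it. The names, values, and shapes below are stand-ins for the files in the original post; note also that readlines() leaves a trailing newline on each name, so the names likely need a .strip() as well:

```python
import numpy as np

# Stand-ins for the contents of my_column_spec.txt / my_ascii_data.txt.
names = ["time", "temp", "pressure"]           # in the original: [c.strip() for c in cols]
dtype = np.dtype([(n, "f4") for n in names])

data = np.arange(12, dtype="f4").reshape(4, 3)  # 4 rows x 3 columns of floats
records = data.view(dtype).reshape(-1)          # each row becomes one named record
assert records.shape == (4,)
assert records["temp"][2] == 7.0                # column 1 of row 2
```

The key point is that the element dtype must match byte-for-byte (here 3 x f4 per row), which is why the "expected a readable buffer object" path of array(data, columns) fails on a plain 2-D float array.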
Cheers, Andrew Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Mon Dec 24 01:02:48 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 23 Dec 2007 23:02:48 -0700 Subject: [SciPy-user] reading data into a RecordArray In-Reply-To: References: Message-ID: On Dec 23, 2007 5:42 PM, Andrew Charles wrote: > I'm relatively new to python, and new to this list. Hello everyone - I > really like what you're doing with scipy. > > My question relates to code I'm writing to convert my ASCII data to netCDF. > I'm trying to: > > 1. Read in the data file > 2. Read in a file containing the column headers > 3. Merge the two into a record array > > I've read the overview (http://www.scipy.org/RecordArrays ), and gone back > over the introductory material on lists and tuples, but I've stumped myself. > The code > > from scipy import * > > colfile = open("my_column_spec.txt") > columns=[] > cols = colfile.readlines() > for col in cols: > columns = columns + [(col,'f4')] Try at this point in the code: columns = tuple(columns) > > ifile = open("my_ascii_data.txt") > data = io.array_import.read_array(ifile) > > data = array(data,columns) > > fails on the last line with the error "expected a readable buffer object". > In case it's relevant, dt.shape is (1984,22) and len(cols) is 22. Any > pointers as to what I'm doing wrong? And let us know if it helps. Cheers, f From fperez.net at gmail.com Mon Dec 24 16:43:51 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 24 Dec 2007 14:43:51 -0700 Subject: [SciPy-user] From the Sage/Jmol lists: "scholarly activity"? Message-ID: Here's something that came up on the jmol list and that William Stein (SAGE lead) forwarded, but which is also a concern to this list. Sage has also had discussions on the matter and the idea of a Sage journal (http://sagemath.org/jsage/) exists already, in a similar vein to previous discussions here on a 'scipy journal'. 
I don't have any answer to this at the moment, but I'm pretty sure it's something that will recur. I personally know of several people who have been bitten by this very same problem. It's worth noting that the original author of the JMol message is a *tenured* faculty member at U. Wisconsin, so not even tenure seems to be a complete protection against the perception from many academic administrators that contributing to open source scientific projects is "not scholarly work". Cheers, f ---------- Forwarded message ---------- From: William Stein Date: Dec 24, 2007 12:55 PM Subject: [sage-devel] "scholarly activity"? To: sage-devel Hi, I just noticed this email on the jmol developer mailing list. See below. if anybody has any thoughts or ideas -- long or short term -- about how to structure or restructure sage development so the same sort of thing doesn't happen to us, please speak up. I think something like JSAGE (http://sagemath.org/jsage/) -- if it were to take off -- would really help. http://sourceforge.net/mailarchive/forum.php?thread_name=F4FB859D-42D1-4974-B244-E67DE7F3681F%40uwosh.edu&forum_name=jmol-developers " Dear Jmol team: This letter is to notify you that I will not be able to continue participating in Jmol development. I hope this situation will only be temporary. However, my rejoining the project depends on my institution being convinced to give me some kind of credit for these activities. Since my contributions to the Jmol project were not deemed a "form of off-campus, peer reviewed scholarly or artistic product", I cannot afford to put any more time into the project. As it stands, I have been told that my scholarly activity level has not been adequate recently. To help me increase my scholarly output, I will have to teach 3-6 hours more of classes each week, and will be responsible for grading additional work from 20-60 more students. 
Somehow that is supposed to increase my scholarly activity:) Put simply, this means I will not have time to devote to Jmol or most of my other scholarly activities. Since Jmol is the scholarly activity that doesn't presently count, I need to drop that and use any time I can eke out for scholarship of other kinds. If I can convince my department to include contributions to projects like Jmol on a list of creditable activity, I will be able to rejoin the project. This is likely to take a semester. Thus I hope to be able to rejoin the project next spring, about May. I will stay on the lists and try to stay up-to-date. I also plan to spend some of the holiday break bringing the Wiki up-to-date on the export to web function. I think it will help my case if export to web is better documented and thus gets used more. I have found my involvement in the project intellectually stimulating and am sorry that my situation requires that I halt my participation. Sincerely, Jonathan Dr. Jonathan H. Gutow Chemistry Department gutow at uw... UW-Oshkosh Office:920-424-1326 800 Algoma Boulevard FAX:920-424-2042 Oshkosh, WI 54901 http://www.uwosh.edu/faculty_staff/gutow/ " A response: This is quite unfortunate, Jonathan. Please forward my regrets to your Department Chair. Contributions like yours are hard to come by, and, for heaven's sakes, if this isn't peer reviewed, WHAT IS? I think it's just a sign that people have not caught up with the times. Very unfortunate. Bob Hanson Professor of Chemistry St. Olaf College From gael.varoquaux at normalesup.org Mon Dec 24 16:57:01 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 24 Dec 2007 22:57:01 +0100 Subject: [SciPy-user] From the Sage/Jmol lists: "scholarly activity"? 
In-Reply-To: References: Message-ID: <20071224215701.GD8488@clipper.ens.fr> On Mon, Dec 24, 2007 at 02:43:51PM -0700, Fernando Perez wrote: > Here's something that came up on the jmol list and that William Stein > (SAGE lead) forwarded, but which is also a concern to this list. > Sage has also had discussions on the matter and the idea of a Sage > journal (http://sagemath.org/jsage/) exists already, in a similar vein > to previous discussions here on a 'scipy journal'. > I don't have any answer to this at the moment, but I'm pretty sure > it's something that will recur. I personally know of several people > who have been bitten by this very same problem. It's worth noting > that the original author of the JMol message is a *tenured* faculty > member at U. Wisconsin, so not even tenure seems to be a complete > protection against the perception from many academic administrators > that contributing to open source scientific projects is "not scholarly > work". Yes, it is a very important problem. Indeed it would be nice to have a way of getting scholar-like credit for work on open source projects. If this means creating a journal dedicated to released scientific code (let us just avoid putting "open source" in the title, it's too political), then it may be the way to go. I think it should not be restricted to Python and friends; a broader scope could increase its visibility. Any ideas welcome, there is a Damocles sword hanging above many contributions. Gaël From william.ratcliff at gmail.com Mon Dec 24 17:30:24 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 24 Dec 2007 17:30:24 -0500 Subject: [SciPy-user] From the Sage/Jmol lists: "scholarly activity"? In-Reply-To: <20071224215701.GD8488@clipper.ens.fr> References: <20071224215701.GD8488@clipper.ens.fr> Message-ID: <827183970712241430s2b907f79t951205a0b8a4b730@mail.gmail.com> At my workplace, a very successful GUI for use in diffraction called EXPGUI was developed.
They published a paper describing it and asked people who used it to cite it. It is one of the most highly cited papers of our facility. For some utilities such as jmol (or matplotlib), could that help? Another approach that I have seen used is to require a (free, of course!) registration for people who want to download the program, or tracking of download statistics, to show impact to one's supervisors. Another question: how would his department have looked upon an e-journal? Yet another question: could a funding agency such as the NSF offer grants for work on OSS, so that faculty who work on it could get more credit from their departments? Cheers, William On Dec 24, 2007 4:57 PM, Gael Varoquaux wrote: > On Mon, Dec 24, 2007 at 02:43:51PM -0700, Fernando Perez wrote: > > Here's something that came up on the jmol list and that William Stein > > (SAGE lead) forwarded, but which is also a concern to this list. > > > Sage has also had discussions on the matter and the idea of a Sage > > journal (http://sagemath.org/jsage/) exists already, in a similar vein > > to previous discussions here on a 'scipy journal'. > > > I don't have any answer to this at the moment, but I'm pretty sure > > it's something that will recur. I personally know of several people > > who have been bitten by this very same problem. It's worth noting > > that the original author of the JMol message is a *tenured* faculty > > member at U. Wisconsin, so not even tenure seems to be a complete > > protection against the perception from many academic administrators > > that contributing to open source scientific projects is "not scholarly > > work". > > Yes, it is a very important problem. Indeed it would be nice to have a > way of getting scholar-like credit for work on open source projects. If this > means creating a journal dedicated to released scientific code (let us > just avoid putting "open source" in the title, it's too political), then > it may be the way to go.
I think it should not be restricted to Python > and friends; a broader scope could increase its visibility. > > Any ideas welcome, there is a Damocles sword hanging above many > contributions. > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Dec 26 00:42:21 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 26 Dec 2007 05:42:21 +0000 (UTC) Subject: [SciPy-user] arbitrary submatrices of a sparse matrix? References: <9406141D-E419-4B6C-A4B4-033ACC2E1DF2@cs.toronto.edu> Message-ID: David Warde-Farley cs.toronto.edu> writes: > > I've figured out how to do this for full matrices (either x > > [[4,6,2,7],:][:,1,7,3] or by playing with take()) but I'm at a loss > > for how to do it with a sparse matrix, if it's even possible (not > > sure how matlab pulls it off). > > If there's some easy way to permute the rows and columns of a CSR/CSC > matrix then I would be able to get the job done with slices... Is > there? Or is there some way to do this that I'm not seeing? You can use sparse matrix multiplication to apply permutations and slice rows:

In [1]: from scipy import *

In [2]: from scipy.sparse import *

In [3]: A = csr_matrix(arange(12).reshape(4,3))

In [5]: A.todense()
Out[5]:
matrix([[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8],
        [ 9, 10, 11]])

In [7]: P = coo_matrix( ([1,1,1,1],[[0,1,2,3],[3,1,0,2]]) )

In [9]: (P*A).todense()
Out[9]:
matrix([[ 9, 10, 11],
        [ 3,  4,  5],
        [ 0,  1,  2],
        [ 6,  7,  8]])

In [10]: P = coo_matrix( ([1,1,1,1],[[0,1,2,3],[0,0,3,3]]) )

In [11]: (P*A).todense()
Out[11]:
matrix([[ 0,  1,  2],
        [ 0,  1,  2],
        [ 9, 10, 11],
        [ 9, 10, 11]])

The only downside of this approach is that it requires at least O(M) operations for an M,N sparse matrix in CSR format. For permutations this is not a problem, and I believe you'll get near-optimal performance.
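The same multiplication trick also covers column selection: right-multiplying by a selection matrix picks out (and reorders) columns of A. A minimal sketch along the same lines, reusing the A from the session above (the selection matrix Q and the chosen columns are illustrative, not from the original message):

```python
from numpy import arange, ones
from scipy.sparse import coo_matrix, csr_matrix

A = csr_matrix(arange(12).reshape(4, 3))

# Q[j, k] = 1 routes column j of A into column k of the product,
# so A*Q keeps only the chosen columns -- here columns [2, 0].
cols = [2, 0]
Q = coo_matrix((ones(len(cols), dtype=A.dtype),
                (cols, range(len(cols)))), shape=(3, len(cols)))

print((A * Q).todense())
```

Combined with the row trick, the two-sided product P*A*Q then extracts an arbitrary submatrix in one sparse multiplication each way.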
However, if you only want to slice a few rows, then a special purpose approach is needed. I'm working on extending the work Robert has done to the sort of indexing you describe. Keep in mind that the underlying matrix format limits the efficiency of slicing. Slicing rows of a CSR or columns of a CSC is fast. Slicing columns of a CSR is an O( A.nnz ) operation, which is essentially as bad as converting to CSC and then slicing. I believe MATLAB uses CSC to represent sparse matrices, so one would expect A(:,10) to be faster than A(10,:). Here's a simple MATLAB example:

>> A = gallery('poisson',500);
>> tic; S = A(:,10); toc;
Elapsed time is 0.000060 seconds.
>> tic; S = A(10,:); toc;
Elapsed time is 0.063538 seconds.

From dmitrey.kroshko at scipy.org Wed Dec 26 12:29:37 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 26 Dec 2007 19:29:37 +0200 Subject: [SciPy-user] Would anyone connect fortran constrained linear least squares solver to Python? In-Reply-To: References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org> Message-ID: <47728F81.10906@scipy.org> Hi all, I have noticed (from traffic statistics) that lots of people are interested in linear least squares problems (LLSP). However, scipy has only the unconstrained LAPACK dGELSS/sGELSS. Could anyone connect the Fortran-written solver to Python (or connect it to scipy)? http://netlib3.cs.utk.edu/toms/587 (that one can handle linear eq and ineq constraints) Then I would gladly connect it to scikits.openopt. Regards, D. http://scipy.org/scipy/scikits/wiki/Dmitrey From lorenzo.isella at gmail.com Wed Dec 26 19:28:12 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 27 Dec 2007 01:28:12 +0100 Subject: [SciPy-user] Scipy for Cluster Detection Message-ID: <4772F19C.6040308@gmail.com> Dear All, I hope this will not sound too off-topic.
Without going into details, I am simulating the behavior of a set of particles interacting with a short-ranged, strongly binding potential. I start with a number N of particles which undergo collisions with each other and, in doing so, they stick, giving rise to clusters. The code which does that is not written in Python, but it returns the positions of these particles in space. As time goes on, more and more particles coagulate around one of these clusters. Eventually, they will all end up in the same cluster. Two particles are considered to be bounded if their distance falls below a certain threshold d. A cluster is nothing else than a group of particles all directly or indirectly bound together. Having said so, I now need an efficient algorithm to look for clusters, since at the very least I need to be able to count them to study how the number of clusters evolves in time. Is there anything suitable for this already implemented in SciPy? I wonder if people in Astronomy have similar problems if they need e.g. to detect/study clusters of stars. I would simply like not to re-invent the wheel, and it goes without saying that any suggestions here are really appreciated. Many thanks and a nice Xmas break to everybody on this list. Lorenzo From william.ratcliff at gmail.com Wed Dec 26 22:04:02 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Wed, 26 Dec 2007 22:04:02 -0500 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: <4772F19C.6040308@gmail.com> References: <4772F19C.6040308@gmail.com> Message-ID: <827183970712261904l4537914axe54657b7988bf717@mail.gmail.com> Have you tried looking into the hierarchical clustering algorithms that are in the biopython package? http://www.inb.mu-luebeck.de/biosoft/biopython/api/Bio/Tools/Clustering/kMeans.py.html Cheers, William On Dec 26, 2007 7:28 PM, Lorenzo Isella wrote: > Dear All, > I hope this will not sound too off-topic.
> Without going into details, I am simulating the behavior of a set of > particles interacting with a short-ranged, strongly binding potential. I > start with a number N of particles which undergo collisions with each > other and, in doing so, they stick giving rise to clusters. > The code which does that is not written in Python, but it returns the > positions of these particles in space. > As time goes on, more and more particles coagulate around one of these > clusters. Eventually, they will all end up in the same cluster. > Two particles are considered to be bounded if their distance falls below > a certain threshold d. > A cluster is nothing else than a group of particles all directly or > indirectly bound together. > Having said so, I now need an efficient algorithm to look for clusters, > since at the very least I need to be able to count them to study how the > number of clusters evolve in time. > Is there anything suitable for this already implemented in SciPy? I > wonder if people in Astronomy have similar problems if they need e.g. to > detect/study clusters of stars. > Simply I would like not to re-invent the wheel and it goes without > saying that any suggestions here are really appreciated. > Many thanks and a nice Xmas break to everybody on this list. > > Lorenzo > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Thu Dec 27 01:24:26 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 27 Dec 2007 01:24:26 -0500 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: <4772F19C.6040308@gmail.com> References: <4772F19C.6040308@gmail.com> Message-ID: On 26-Dec-07, at 7:28 PM, Lorenzo Isella wrote: > Dear All, > I hope this will not sound too off-topic. 
> Without going into details, I am simulating the behavior of a set of > particles interacting with a short-ranged, strongly binding > potential. I > start with a number N of particles which undergo collisions with each > other and, in doing so, they stick giving rise to clusters. > The code which does that is not written in Python, but it returns the > positions of these particles in space. > As time goes on, more and more particles coagulate around one of these > clusters. Eventually, they will all end up in the same cluster. > Two particles are considered to be bounded if their distance falls > below > a certain threshold d. > A cluster is nothing else than a group of particles all directly or > indirectly bound together. This doesn't sound like what would typically be considered a Do you have code that produces the (N^2 - N)/2 distances between pairs of particles? How large is N? If it's small enough, turning it into a NumPy array should be straightforward, and then thresholding at d is one line of code. Then what you end up with is a boolean matrix where the (i,j) entry denotes that i and j are in a cluster together. This can be thought of as a graph or network where the nodes are particles and connections / links / edges exist between those particles that are in a cluster together. Once you have this, your problem is what graph theorists would call identifying the "connected components", which is a well-studied problem and can be done fairly easily; Google "finding connected components" or pick up any algorithms textbook at your local library, such as the Cormen et al "Introduction to Algorithms". I'm not in a position to comment on whether there's something like this already in SciPy, as I really don't know. However, finding connected components of a graph (i.e.
your clusters) is a very well studied problem in the computer science (namely graph theory) literature, and the algorithms are simple enough that less than 20 lines of Python should do the trick provided the distance matrix is manageable. Regards, David From wnbell at gmail.com Thu Dec 27 01:30:01 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 27 Dec 2007 00:30:01 -0600 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: <4772F19C.6040308@gmail.com> References: <4772F19C.6040308@gmail.com> Message-ID: On Dec 26, 2007 6:28 PM, Lorenzo Isella wrote: > Two particles are considered to be bounded if their distance falls below > a certain threshold d. Assuming d is the same for all pairs of contacts you should probably use spatial hashing. I had a similar problem in the simulation of granular materials and found spatial hashing to be the fastest approach. By spatial hashing I mean that to each point (x,y,z) you associate the integer tuple ( floor(x/d), floor(y/d), floor(z/d) ). When looking for all the neighbors of a point you simply look at all 27 neighboring tuples. You could do this with Python dictionaries, but you'll probably prefer something like the STL hash_map or Google's sparsehash library. If d varies then a k-D tree is probably your best bet. > A cluster is nothing else than a group of particles all directly or > indirectly bound together. > Having said so, I now need an efficient algorithm to look for clusters, > since at the very least I need to be able to count them to study how the > number of clusters evolve in time. > Is there anything suitable for this already implemented in SciPy? I > wonder if people in Astronomy have similar problems if they need e.g. to > detect/study clusters of stars. > Simply I would like not to re-invent the wheel and it goes without > saying that any suggestions here are really appreciated. > Many thanks and a nice Xmas break to everybody on this list.
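The dictionary version of the spatial hash Nathan describes is only a few lines; a rough sketch in pure Python (hypothetical point list and cell size d, for illustration only):

```python
from collections import defaultdict
from itertools import product
from math import floor

def build_grid(points, d):
    """Hash each point index into its integer cell (floor(x/d), floor(y/d), floor(z/d))."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(floor(x / d), floor(y / d), floor(z / d))].append(i)
    return grid

def neighbors_within_d(points, d):
    """Yield index pairs (i, j), i < j, whose Euclidean distance is below d."""
    grid = build_grid(points, d)
    for (cx, cy, cz), members in grid.items():
        # Examine the 27 cells surrounding (and including) this one.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:
                        xi, yi, zi = points[i]
                        xj, yj, zj = points[j]
                        if (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 < d * d:
                            yield (i, j)

pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 5.0)]
# set() guards against any pair being reported from both of its cells.
print(sorted(set(neighbors_within_d(pts, 1.0))))
```

Each point is tested only against points in its 27 surrounding cells, so for roughly uniform densities the cost stays near O(N) instead of O(N^2).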
I don't think SciPy has the functionality you need. The cluster module has k-means stuff, but I don't think that will help you. -- Nathan Bell wnbell at gmail.com From dwf at cs.toronto.edu Thu Dec 27 01:36:53 2007 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 27 Dec 2007 01:36:53 -0500 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: References: <4772F19C.6040308@gmail.com> Message-ID: On 27-Dec-07, at 1:24 AM, David Warde-Farley wrote: > > This doesn't sound like what would typically be considered a This should read, "This doesn't sound like what would typically be considered a clustering problem." What I meant is, "clustering" usually means finding a natural way of grouping the data (either hierarchically or non-hierarchically) where you only have some notion of similarity or distance to work with. In your problem, your similarity metric is essentially binary ("are you less than d away from each other", yes or no) and so it reduces to a rather graph-theoretic problem. That said, if your number of particles is particularly large, Nathan's suggestion of a kd-tree solution sounds like it's on the right track. "Locality sensitive hashing" comes to mind, which is basically a fast way of finding the nearest neighbour of a given point -- if you knew this then you'd be able to tell whether any point at all was within d, since if the nearest neighbour isn't then none of them are. Take a look at http://web.mit.edu/andoni/www/LSH/index.html Cheers, David From matthieu.brucher at gmail.com Thu Dec 27 01:42:01 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 27 Dec 2007 07:42:01 +0100 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: <4772F19C.6040308@gmail.com> References: <4772F19C.6040308@gmail.com> Message-ID: Hi, Sounds like a pseudo correlation clustering problem, isn't it? It's not too hard to create the clusters once you have the similarity matrix.
You can make a loop and, for every point not yet labeled, you cluster around it every point that is similar. This way you get a good approximation of your optimal correlation problem (I don't remember which article demonstrates it). Matthieu 2007/12/27, Lorenzo Isella : > > Dear All, > I hope this will not sound too off-topic. > Without going into details, I am simulating the behavior of a set of > particles interacting with a short-ranged, strongly binding potential. I > start with a number N of particles which undergo collisions with each > other and, in doing so, they stick giving rise to clusters. > The code which does that is not written in Python, but it returns the > positions of these particles in space. > As time goes on, more and more particles coagulate around one of these > clusters. Eventually, they will all end up in the same cluster. > Two particles are considered to be bounded if their distance falls below > a certain threshold d. > A cluster is nothing else than a group of particles all directly or > indirectly bound together. > Having said so, I now need an efficient algorithm to look for clusters, > since at the very least I need to be able to count them to study how the > number of clusters evolve in time. > Is there anything suitable for this already implemented in SciPy? I > wonder if people in Astronomy have similar problems if they need e.g. to > detect/study clusters of stars. > Simply I would like not to re-invent the wheel and it goes without > saying that any suggestions here are really appreciated. > Many thanks and a nice Xmas break to everybody on this list.
> > Lorenzo > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From sgarcia at olfac.univ-lyon1.fr Thu Dec 27 03:54:57 2007 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 27 Dec 2007 09:54:57 +0100 Subject: [SciPy-user] Looking for super-paramagnetic clustering code. Message-ID: <47736861.9020907@olfac.univ-lyon1.fr> Hi list, I am looking for a python implementation of super-paramagnetic clustering (SPC). Here is a link to the article: http://www.weizmann.ac.il/home/fedomany/papers.html Does someone know about that? Is there another name for equivalent methods? Thanks Samuel -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From vvinuv at gmail.com Thu Dec 27 10:11:34 2007 From: vvinuv at gmail.com (Vinu Vikram) Date: Thu, 27 Dec 2007 20:41:34 +0530 Subject: [SciPy-user] installation problem Message-ID: <637563b70712270711j1853a42fm554afd7f805e035d@mail.gmail.com> Hi all, I am facing a problem when trying to install scipy 0.6.0.
The error I am getting is:

building 'scipy.integrate._odepack' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC

compile options: '-DATLAS_INFO="\"3.6.0\"" -I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c'
/usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so
/usr/bin/ld: /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so" failed with exit status 1

Please help me to solve this. Thanks, Vinu V -- VINU VIKRAM http://iucaa.ernet.in/~vvinuv/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cohen at slac.stanford.edu Thu Dec 27 10:16:16 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 27 Dec 2007 07:16:16 -0800 Subject: [SciPy-user] installation problem In-Reply-To: <637563b70712270711j1853a42fm554afd7f805e035d@mail.gmail.com> References: <637563b70712270711j1853a42fm554afd7f805e035d@mail.gmail.com> Message-ID: <4773C1C0.1060601@slac.stanford.edu> hi Vinu, you seem to have an installation of ATLAS which has not been built with the -Fa alg -fPIC flags (forcing position independent code for each compiler). See http://math-atlas.sourceforge.net/atlas_install/atlas_install.html#SECTION00043000000000000000 Johann Vinu Vikram wrote: > Hi all > I am facing problem when trying to install scipy0.6.0. The error I am > getting is > building 'scipy.integrate._odepack' extension > compiling C sources > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC > > compile options: '-DATLAS_INFO="\"3.6.0\"" > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/include/python2.5 -c' > /usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64- > 2.5/scipy/integrate/_odepackmodule.o > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 > -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach > -lptf77blas -lptcblas -latlas -lg2c -o > build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so > /usr/bin/ld: > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > relocation R_X86_64_32 against `a local symbol' can not be used when > making a shared object; recompile with -fPIC > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > read symbols: Bad value > collect2: ld returned 1 exit status > /usr/bin/ld: > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > relocation R_X86_64_32 against `a local symbol' can not be used when > making a shared object; recompile with -fPIC > 
/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > read symbols: Bad value > collect2: ld returned 1 exit status > error: Command "/usr/bin/g77 -g -Wall -shared > build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 -Lbuild/temp.linux-x86_64- > 2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas > -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so" > failed with exit status 1 > > Please help me to solve this. > thanks > Vinu V > > -- > VINU VIKRAM > http://iucaa.ernet.in/~vvinuv/ > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lorenzo.isella at gmail.com Thu Dec 27 11:44:45 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 27 Dec 2007 17:44:45 +0100 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: References: Message-ID: <4773D67D.8010701@gmail.com> Hello, Many thanks for the useful answers. I should have added that I have in mind to deal with systems of at most a few thousand particles, each of them labeled by 3 coordinates. The particle positions are read from a text file, and I can then get the relative distances, e.g. with some Fortran routine imported with f2py if speed is a problem. If this is manageable, then I could really end up calculating directly the relative distances between all the particles and then do as suggested in the following. I may post again early in January, however (I do not have enough time to test all of this straight away).
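For systems of this size the full distance matrix approach David suggested is indeed manageable; a sketch of the whole pipeline in NumPy -- pairwise distances, thresholding at d, and counting clusters -- with hypothetical coordinates standing in for the text file:

```python
import numpy as np

pos = np.array([[0.0, 0.0, 0.0],   # particles 0 and 1 form one cluster,
                [0.4, 0.0, 0.0],   # particle 2 sits alone
                [5.0, 5.0, 5.0]])
d = 1.0

# All pairwise squared distances at once via broadcasting: shape (N, N).
diff = pos[:, np.newaxis, :] - pos[np.newaxis, :, :]
adjacency = (diff ** 2).sum(axis=-1) < d * d   # diagonal is True (dist 0)

# Label connected components by flooding from each unvisited particle.
n = len(pos)
labels = -np.ones(n, dtype=int)
for seed in range(n):
    if labels[seed] < 0:
        members = adjacency[seed].copy()
        grown = True
        while grown:               # expand until the component stops growing
            new = adjacency[members].any(axis=0)
            grown = (new & ~members).any()
            members = new
        labels[members] = seed

print(labels)                       # one cluster label per particle
print(len(np.unique(labels)))       # number of clusters
```

The (N, N) boolean matrix costs O(N^2) memory, which is fine for a few thousand particles; beyond that, the spatial-hashing or k-D tree approaches discussed earlier become preferable.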
Cheers Lorenzo > Message: 3 > Date: Thu, 27 Dec 2007 01:24:26 -0500 > From: David Warde-Farley > Subject: Re: [SciPy-user] Scipy for Cluster Detection > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes > > On 26-Dec-07, at 7:28 PM, Lorenzo Isella wrote: > > >> Dear All, >> I hope this will not sound too off-topic. >> Without going into details, I am simulating the behavior of a set of >> particles interacting with a short-ranged, strongly binding >> potential. I >> start with a number N of particles which undergo collisions with each >> other and, in doing so, they stick giving rise to clusters. >> The code which does that is not written in Python, but it returns the >> positions of these particles in space. >> As time goes on, more and more particles coagulate around one of these >> clusters. Eventually, they will all end up in the same cluster. >> Two particles are considered to be bounded if their distance falls >> below >> a certain threshold d. >> A cluster is nothing else than a group of particles all directly or >> indirectly bound together. >> > > > This doesn't sound like what would typically be considered a > > Do you have code that produces the (N^2 - N)/2 distances between pairs > of particles? How large is N? If it's small enough, turning it into a > NumPy array should be straightforward, and then thresholding at d is > one line of code. Then what you end up with is a boolean matrix that > where the (i,j) entry denotes that i and j are in a cluster together. > This can be thought of as a graph or network where the nodes are > particles and connections / links / edges exist between those > particles that are in a cluster together. 
> > Once you have this your problem is what graph theorists would call > identifying the "connected components", which is a well-studied > problem and can be done fairly easily, Google "finding connected > components" or pick up any algorithms textbook at your local library > such as the Cormen et al "Introduction to Algorithms". > > I'm not in a position to comment on whether there's something like > this already in SciPy, as I really don't know. However, finding > connected components of a graph (i.e. your clusters) is a very well > studied problem in the computer science (namely graph theory) > literature, and the algorithms are simple enough that less than 20 > lines of Python should do the trick provided the distance matrix is > manageable. > > Regards, > > David > > > From timmichelsen at gmx-topmail.de Thu Dec 27 12:45:35 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Thu, 27 Dec 2007 18:45:35 +0100 Subject: [SciPy-user] assignment of hours of day in time series Message-ID: Hello, how do I assign the hours of a day in time series? I have hourly measurements where hour 1 represents the end of the period 0:00-1:00, 2 the end of the period 1:00-2:00, ... , 24 the end of the period 23:00 to 24:00. When I plot these hourly time series from February to November, the curve is continued into December because of this convention. The time series then assumes that the value for hour 0:00 of Dec 01 is 0, which then leads to wrong plotting behaviour. I want hour 24 to be counted as the last measurement period of a day and not as the first measurement of the next day (like 0:00). Please help me out here. Thanks in advance, Timmie From fperez.net at gmail.com Thu Dec 27 16:04:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 14:04:34 -0700 Subject: [SciPy-user] Scipy server down... Message-ID: Any chance it might be brought back up? (I'm trying to commit the weave cleanup work Min did at the sprint...)
Thanks, f From millman at berkeley.edu Thu Dec 27 16:11:37 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 27 Dec 2007 13:11:37 -0800 Subject: [SciPy-user] Scipy server down... In-Reply-To: References: Message-ID: I restarted httpd. It is still slow, but seems to be responsive now. Let me know if it still isn't working. On Dec 27, 2007 1:04 PM, Fernando Perez wrote: > Any chance it might be brought back up? (I'm trying to commit the > weave cleanup work Min did at the sprint...) > > Thanks, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From jeremit0 at gmail.com Thu Dec 27 16:20:17 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Thu, 27 Dec 2007 16:20:17 -0500 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function Message-ID: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> I have a class member function that looks like this: std::vector<double> discretize(...); I would like to return a numpy array instead of a std::vector<double>. I could (of course) rewrite my function to return a pointer to the first element of an array if needed. I am currently using SWIG for wrapping my code for Python. I have searched far and wide for a simple example of how I should do this. Everything I find is *far* too complicated for someone doing this for the first time. Does someone have a simple example they would like to share? I have the "Guide to Numpy"; it has lots of information about the different functions, but it doesn't have any examples. Thanks, Jeremy Conlin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fperez.net at gmail.com Thu Dec 27 16:46:54 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 14:46:54 -0700 Subject: [SciPy-user] Scipy server down... In-Reply-To: References: Message-ID: On Dec 27, 2007 2:11 PM, Jarrod Millman wrote: > I restarted httpd. It is still slow, but seems to be responsive now. > Let me know if it still isn't working. SVN is working again, thanks. Cheers, f From matthieu.brucher at gmail.com Thu Dec 27 17:04:11 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 27 Dec 2007 23:04:11 +0100 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> Message-ID: Hi, The problem is that you will have to copy your data anyway. One of the best solutions is to pass an already created Numpy array (as argument or with a callback function) that you will be able to use (this is what I do with a custom C++ array class that wraps the Python object). Matthieu 2007/12/27, Jeremy Conlin : > > I have a class member function that looks like this: > std::vector<double> discretize(...); > > I would like to return a numpy array instead of a std::vector<double>. I > could (of course) rewrite my function to return a pointer to the first > element of an array if needed. I am currently using SWIG for wrapping my > code for Python. > > I have searched far and wide for a simple example of how I should do this. > Everything I find is *far* too complicated for someone doing this for the > first time. Does someone have a simple example they would like to share? I > have the "Guide to Numpy"; it has lots of information about the different > functions, it doesn't have any examples.
> > Thanks, > Jeremy Conlin > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Thu Dec 27 19:16:04 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 27 Dec 2007 18:16:04 -0600 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> Message-ID: On Dec 27, 2007 3:20 PM, Jeremy Conlin wrote: > I have a class member function that looks like this: > > std::vector discretize(...); > > I would like to return a numpy array instead of a std::vector. I > could (of course) rewrite my function to return a pointer to the first > element of an array if needed. I am currently using SWIG for wrapping my > code for Python. > > I have searched far and wide for a simple example of how I should do this. > Everything I find is *far* too complicated for someone doing this for the > first time. Does someone have a simple example they would like to share? I > have the "Guide to Numpy"; it has lots of information about the different > functions, it doesn't have any examples. I use STL vectors for output in scipy.sparse.
See line 517 of: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/numpy.i For an STL vector named myvec this should work: int length = myvec.size(); PyObject *obj = PyArray_FromDims(1, &length, PyArray_DOUBLE); memcpy(PyArray_DATA(obj),&(myvec[0]),sizeof(double)*length); If you know the size of the output in advance, then you might follow Matthieu's suggestion since a copy is not required. -- Nathan Bell wnbell at gmail.com From bhendrix at enthought.com Thu Dec 27 20:23:49 2007 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 27 Dec 2007 19:23:49 -0600 Subject: [SciPy-user] [Numpy-discussion] Scipy server down... In-Reply-To: References: Message-ID: <47745025.8080909@enthought.com> Looks like the web site & svn are up for me. Bryce Fernando Perez wrote: > Any chance it might be brought back up? (I'm trying to commit the > weave cleanup work Min did at the sprint...) > > Thanks, > > f > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From vvinuv at gmail.com Thu Dec 27 23:14:34 2007 From: vvinuv at gmail.com (Vinu Vikram) Date: Fri, 28 Dec 2007 09:44:34 +0530 Subject: [SciPy-user] installation problem In-Reply-To: <4773C1C0.1060601@slac.stanford.edu> References: <637563b70712270711j1853a42fm554afd7f805e035d@mail.gmail.com> <4773C1C0.1060601@slac.stanford.edu> Message-ID: <637563b70712272014g4401f1a6nca1a399f12f0e8d9@mail.gmail.com> It worked. Thanks!! there was no error during the installation.
But when I tried to import scipy.signal, I got the following error >>> import scipy.signal Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.5/site-packages/scipy/signal/__init__.py", line 10, in from filter_design import * File "/usr/local/lib/python2.5/site-packages/scipy/signal/filter_design.py", line 9, in from scipy import special, optimize File "/usr/local/lib/python2.5/site-packages/scipy/optimize/__init__.py", line 11, in from lbfgsb import fmin_l_bfgs_b File "/usr/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 30, in import _lbfgsb ImportError: /usr/local/lib/python2.5/site-packages/scipy/optimize/_lbfgsb.so: undefined symbol: _gfortran_st_write_done How can I fix it? Thanks Vinu V On Dec 27, 2007 8:46 PM, Johann Cohen-Tanugi wrote: > hi Vinu, > you seem to have an installation of ATLAS which has not been built with > the -Fa alg -fPIC flags (forcing position independent code for each > compiler). See > > http://math-atlas.sourceforge.net/atlas_install/atlas_install.html#SECTION00043000000000000000 > > Johann > > Vinu Vikram wrote: > > Hi all > > I am facing problem when trying to install scipy0.6.0.
The error I am > > getting is > > building 'scipy.integrate._odepack' extension > > compiling C sources > > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > > -Wstrict-prototypes -fPIC > > > > compile options: '-DATLAS_INFO="\"3.6.0\"" > > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > > -I/usr/local/include/python2.5 -c' > > /usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64- > > 2.5/scipy/integrate/_odepackmodule.o > > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 > > -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach > > -lptf77blas -lptcblas -latlas -lg2c -o > > build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so > > /usr/bin/ld: > > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > > relocation R_X86_64_32 against `a local symbol' can not be used when > > making a shared object; recompile with -fPIC > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > > read symbols: Bad value > > collect2: ld returned 1 exit status > > /usr/bin/ld: > > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > > relocation R_X86_64_32 against `a local symbol' can not be used when > > making a shared object; recompile with -fPIC > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > > read symbols: Bad value > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/g77 -g -Wall -shared > > build/temp.linux-x86_64-2.5/scipy/integrate/_odepackmodule.o > > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 -Lbuild/temp.linux-x86_64- > > 2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas > > -lg2c -o build/lib.linux-x86_64-2.5/scipy/integrate/_odepack.so" > > failed with exit status 1 > > > > Please help me to solve this. 
> > thanks > > Vinu V > > > > -- > > VINU VIKRAM > > http://iucaa.ernet.in/~vvinuv/ < > http://iucaa.ernet.in/%7Evvinuv/> > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- VINU VIKRAM http://iucaa.ernet.in/~vvinuv/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Fri Dec 28 00:06:05 2007 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 27 Dec 2007 21:06:05 -0800 Subject: [SciPy-user] installation problem In-Reply-To: <637563b70712272014g4401f1a6nca1a399f12f0e8d9@mail.gmail.com> References: <637563b70712270711j1853a42fm554afd7f805e035d@mail.gmail.com> <4773C1C0.1060601@slac.stanford.edu> <637563b70712272014g4401f1a6nca1a399f12f0e8d9@mail.gmail.com> Message-ID: <4774843D.3020005@slac.stanford.edu> hmm I don't have this symbol in my version of this shared lib, and I am no expert here :) But did you make sure that you compiled ATLAS and scipy with gfortran? I could imagine something like that happening if scipy was built with g77 and ATLAS with gfortran.... You can use the fcompiler option to setup.py to ensure that gfortran is picked up as the fortran compiler. Maybe an expert will kick in here... hth, Johann Vinu Vikram wrote: > I worked . Thanks!! > there was no error during the installation. 
But when I tried to import > scipy.signal, I got the following error > > >>> import scipy.signal > Traceback (most recent call last): > File "", line 1, in > File > "/usr/local/lib/python2.5/site-packages/scipy/signal/__init__.py", > line 10, in > from filter_design import * > File > "/usr/local/lib/python2.5/site-packages/scipy/signal/filter_design.py", > line 9, in > from scipy import special, optimize > File > "/usr/local/lib/python2.5/site-packages/scipy/optimize/__init__.py", > line 11, in > from lbfgsb import fmin_l_bfgs_b > File > "/usr/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", > line 30, in > import _lbfgsb > ImportError: > /usr/local/lib/python2.5/site-packages/scipy/optimize/_lbfgsb.so: > undefined symbol: _gfortran_st_write_done > > How I can fix it? > Thanks > Vinu V > > On Dec 27, 2007 8:46 PM, Johann Cohen-Tanugi > wrote: > > hi Vinu, > you seem to have an installation of ATLAS which has not been built > with > the -Fa alg -fPIC flags (forcing position independent code for each > compiler). See > http://math-atlas.sourceforge.net/atlas_install/atlas_install.html#SECTION00043000000000000000 > > Johann > > Vinu Vikram wrote: > > Hi all > > I am facing problem when trying to install scipy0.6.0. 
The error > I am > > getting is > > building 'scipy.integrate._odepack' extension > > compiling C sources > > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > > -Wstrict-prototypes -fPIC > > > > compile options: '-DATLAS_INFO="\"3.6.0\"" > > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > > -I/usr/local/include/python2.5 -c' > > /usr/bin/g77 -g -Wall -shared build/temp.linux-x86_64- > > 2.5/scipy/integrate/_odepackmodule.o > > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 > > -Lbuild/temp.linux-x86_64-2.5 -lodepack -llinpack_lite -lmach > > -lptf77blas -lptcblas -latlas -lg2c -o > > build/lib.linux-x86_64- 2.5/scipy/integrate/_odepack.so > > /usr/bin/ld: > > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > > relocation R_X86_64_32 against `a local symbol' can not be used > when > > making a shared object; recompile with -fPIC > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > > read symbols: Bad value > > collect2: ld returned 1 exit status > > /usr/bin/ld: > > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a(ATL_F77wrap_dptscal.o): > > relocation R_X86_64_32 against `a local symbol' can not be used when > > making a shared object; recompile with -fPIC > > /home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2/libptf77blas.a: could not > > read symbols: Bad value > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/g77 -g -Wall -shared > > build/temp.linux-x86_64- 2.5/scipy/integrate/_odepackmodule.o > > -L/home/vinu/ATLAS/lib/Linux_UNKNOWNSSE2_2 > -Lbuild/temp.linux-x86_64- > > 2.5 -lodepack -llinpack_lite -lmach -lptf77blas -lptcblas -latlas > > -lg2c -o build/lib.linux-x86_64- 2.5/scipy/integrate/_odepack.so" > > failed with exit status 1 > > > > Please help me to solve this. 
> > thanks > > Vinu V > > > > -- > > VINU VIKRAM > > http://iucaa.ernet.in/~vvinuv/ > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > VINU VIKRAM > http://iucaa.ernet.in/~vvinuv/ > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Fri Dec 28 04:32:10 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 10:32:10 +0100 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> Message-ID: 2007/12/28, Nathan Bell : > > On Dec 27, 2007 3:20 PM, Jeremy Conlin wrote: > > I have a class member function that looks like this: > > > > std::vector discretize(...); > > > > I would like to return a numpy array instead of a > std::vector. I > > could (of course) rewrite my function to return a pointer to the first > > element of an array if needed. I am currently using SWIG for wrapping > my > > code for Python. > > > > I have searched far and wide for a simple example of how I should do > this. > > Everything I find is *far* too complicated for someone doing this for > the > > first time. Does someone have a simple example they would like to > share? I > > have the "Guide to Numpy"; it has lots of information about the > different > > functions, it doesn't have any examples. > > I use STL vectors for ouput in scipy.sparse. 
See line 517 of: > > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/numpy.i > > For a STL vector named myvec this should work: > > int length = myvec.size(); > PyObject *obj = PyArray_FromDims(1, &length, PyArray_DOUBLE); > memcpy(PyArray_DATA(obj),&(myvec[0]),sizeof(double)*length); > > If you know the size of the output in advance, then you might follow > Matthieu's suggestion since a copy is not required. > You can then create a custom typemap with Nathan's code (not tested) : %typemap(out) std::vector %{ int length = $1.size(); $result = PyArray_FromDims(1, &length, PyArray_DOUBLE); memcpy(PyArray_DATA($result),&($1[0]),sizeof(double)*length); %} The $1 indicates the data that will be wrapped and copied and $result is the corresponding Python object. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From timmichelsen at gmx-topmail.de Fri Dec 28 10:34:06 2007 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Fri, 28 Dec 2007 16:34:06 +0100 Subject: [SciPy-user] end_date doesn't work in time series Message-ID: Hello! According to the descriptions in the wiki (http://www.scipy.org/SciPyPackages/TimeSeries#head-352d5481446b38e165dff3993e39f49c0e902a91-2) one should be able to define a time series by giving start /and/ end date. like: mydata_ts_hourly = TS.time_series(mydata, start_date=D_hr_start, end_date=None) But defining a time series this way doesn't work: TypeError: time_series() got an unexpected keyword argument 'end_date' How can I get this work? Is it a bug and where shall I report this? 
Kind regards, Timmie From jeremit0 at gmail.com Fri Dec 28 13:58:23 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Fri, 28 Dec 2007 13:58:23 -0500 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> Message-ID: <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> On Dec 28, 2007 4:32 AM, Matthieu Brucher wrote: > > > 2007/12/28, Nathan Bell : > > > > On Dec 27, 2007 3:20 PM, Jeremy Conlin wrote: > > > I have a class member function that looks like this: > > > > > > std::vector discretize(...); > > > > > > I would like to return a numpy array instead of a > > std::vector. I > > > could (of course) rewrite my function to return a pointer to the first > > > element of an array if needed. I am currently using SWIG for wrapping > > my > > > code for Python. > > > > > > I have searched far and wide for a simple example of how I should do > > this. > > > Everything I find is *far* too complicated for someone doing this for > > the > > > first time. Does someone have a simple example they would like to > > share? I > > > have the "Guide to Numpy"; it has lots of information about the > > different > > > functions, it doesn't have any examples. > > > > I use STL vectors for ouput in scipy.sparse. See line 517 of: > > > > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/numpy.i > > > > For a STL vector named myvec this should work: > > > > int length = myvec.size(); > > PyObject *obj = PyArray_FromDims(1, &length, PyArray_DOUBLE); > > memcpy(PyArray_DATA(obj),&(myvec[0]),sizeof(double)*length); > > > > If you know the size of the output in advance, then you might follow > > Matthieu's suggestion since a copy is not required. 
> > > > You can then create a custom typemap with Nathan's code (not tested) : > %typemap(out) std::vector %{ > int length = $1.size(); > $result = PyArray_FromDims(1, &length, PyArray_DOUBLE); > memcpy(PyArray_DATA($result),&($1[0]),sizeof(double)*length); > %} > Thank you, that is beginning to make more sense to one inexperienced in wrapping. There is one error that I haven't been able to work around. error: no match for 'operator[]' in 'result[0]' This comes from the memcpy line, I guess. It looks like it is getting mixed up between the $1 and the $result. Do I need to apply this typemap to avoid this error or is it automatically applied because the type (double) is specified? Thanks again, Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Dec 28 14:06:24 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 20:06:24 +0100 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> Message-ID: > > Thank you, that is beginning to make more sense to one inexperienced in > wrapping. There is one error that I haven't been able to work around. > > error: no match for 'operator[]' in 'result[0]' > > This comes from the memcpy line, I guess. It looks like it is getting > mixed up between the $1 and the $result. > This is strange... Is there more indications like the type of result ? Do I need to apply this typemap to avoid this error or is it automatically > applied because the type (double) is specified? 
> The fact that you have a compile error shows that it is applied ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Fri Dec 28 14:08:30 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 28 Dec 2007 14:08:30 -0500 Subject: [SciPy-user] end_date doesn't work in time series In-Reply-To: References: Message-ID: <200712281408.37524.pgmdevlist@gmail.com> Tim, > According to the descriptions in the wiki > (http://www.scipy.org/SciPyPackages/TimeSeries#head-352d5481446b38e165dff39 >93e39f49c0e902a91-2) one should be able to define a time series by giving > start /and/ end date. Well, sorry about that: I need to update the doc. Matt and I decided to get rid of the end_date keyword, as it brings more problems than it solves. Basically, you only need to provide a starting date, the end date will be defined according to the size of the input data. So, your command: > mydata_ts_hourly = TS.time_series(mydata, start_date=D_hr_start, > end_date=None) should in fact be mydata_ts_hourly = TS.time_series(mydata, start_date=D_hr_start) From jeremit0 at gmail.com Fri Dec 28 15:44:43 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Fri, 28 Dec 2007 15:44:43 -0500 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> Message-ID: <3db594f70712281244i45baf536y358cb28ee6e79f42@mail.gmail.com> On Dec 28, 2007 2:06 PM, Matthieu Brucher wrote: > Thank you, that is beginning to make more sense to one inexperienced in > > wrapping. There is one error that I haven't been able to work around. 
> > > > error: no match for 'operator[]' in 'result[0]' > > > > This comes from the memcpy line, I guess. It looks like it is getting > > mixed up between the $1 and the $result. > > > > > This is strange... Is there more indications like the type of result ? > > There wasn't much else in the compiler messages. I am just figuring this out before I add it to my real code. As such, the files I use are simple. I have attached them to this email. > Do I need to apply this typemap to avoid this error or is it automatically > > applied because the type (double) is specified? > > > > > The fact that you have a compile error shows that it is applied ;) > Of course! Thanks again, Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_SWIG.i Type: application/octet-stream Size: 678 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_SWIG.h Type: application/octet-stream Size: 527 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_SWIG.cpp Type: application/octet-stream Size: 594 bytes Desc: not available URL: From humufr at yahoo.fr Fri Dec 28 15:46:32 2007 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Fri, 28 Dec 2007 15:46:32 -0500 Subject: [SciPy-user] maskedarray moved? Message-ID: <200712281546.33014.humufr@yahoo.fr> Hi, I would like to know where is maskedarray. It's not anymore inside the sandbox? Where can I find it now? Thanks, N. From millman at berkeley.edu Fri Dec 28 16:03:37 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 28 Dec 2007 13:03:37 -0800 Subject: [SciPy-user] maskedarray moved? 
In-Reply-To: <200712281546.33014.humufr@yahoo.fr> References: <200712281546.33014.humufr@yahoo.fr> Message-ID: It is now in a numpy branch: http://projects.scipy.org/scipy/numpy/browser/branches/maskedarray and will be merged with the numpy trunk soon. You can view the API changes with the existing numpy.ma here: http://svn.scipy.org/svn/numpy/branches/maskedarray/numpy/ma/API_CHANGES.txt Thanks, Jarrod On Dec 28, 2007 12:46 PM, wrote: > Hi, > > I would like to know where is maskedarray. It's not anymore inside the > sandbox? Where can I find it now? > > Thanks, > > N. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Fri Dec 28 17:50:59 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 23:50:59 +0100 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: <3db594f70712281244i45baf536y358cb28ee6e79f42@mail.gmail.com> References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> <3db594f70712281244i45baf536y358cb28ee6e79f42@mail.gmail.com> Message-ID: To make your .i work, you have two things to do: - first initialize numpy: %{ #define SWIG_FILE_WITH_INIT %} %include "numpy.i" %init %{ import_array(); %} Then result (which is $1 in the typemap) is in fact a wrapper around the vector, and to get it, you have to dereference it, so the typemap becomes: %typemap(out) std::vector<double> { int length = $1.size(); $result = PyArray_FromDims(1, &length, PyArray_DOUBLE); memcpy(PyArray_DATA($result),&((*(&$1))[0]),sizeof(double)*length); } I think I will add this one to my blog, if nobody minds.
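The memcpy in that typemap copies the vector's contents into a fresh numpy array by design. The same copy-versus-view distinction can be shown from pure Python with numpy's buffer interface; a small aside using `array.array` as a stand-in for any contiguous C buffer (an illustration only, not part of the SWIG wrapper itself):

```python
import array

import numpy as np

# A writable, contiguous buffer of doubles -- the moral equivalent of
# the std::vector storage on the C++ side.
vec = array.array('d', [1.0, 2.0, 3.0])

view = np.frombuffer(vec)  # shares the buffer: no copy is made
copy = np.array(vec)       # independent storage, like the memcpy above

vec[0] = 99.0
print(view[0])  # 99.0 -- the view sees the mutation
print(copy[0])  # 1.0  -- the copy is unaffected
```

A zero-copy view is only safe while the original buffer stays alive and is never reallocated, which is exactly what makes handing a std::vector's storage to numpy without a copy delicate.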
Matthieu 2007/12/28, Jeremy Conlin : > > > > On Dec 28, 2007 2:06 PM, Matthieu Brucher > wrote: > > > Thank you, that is beginning to make more sense to one inexperienced in > > > wrapping. There is one error that I haven't been able to work around. > > > > > > error: no match for 'operator[]' in 'result[0]' > > > > > > This comes from the memcpy line, I guess. It looks like it is getting > > > mixed up between the $1 and the $result. > > > > > > > > > This is strange... Is there more indications like the type of result ? > > > > > There wasn't much else in the compiler messages. I am just figuring this > out before I add it to my real code. As such, the files I use are simple. > I have attached them to this email. > > > > Do I need to apply this typemap to avoid this error or is it > > > automatically applied because the type (double) is specified? > > > > > > > > > The fact that you have a compile error shows that it is applied ;) > > > Of course! > > Thanks again, > Jeremy > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg.net at gmail.com Fri Dec 28 18:15:43 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 28 Dec 2007 16:15:43 -0700 Subject: [SciPy-user] Banded matrix wrapper Message-ID: <6ce0ac130712281515k11f92714i2fb863e12704d1cc@mail.gmail.com> Hi, I am needing to solve an eigenvalue problem using banded matrices. I see that scipy now has a function for doing this, but it takes the matrix in a particular format (the LAPACK banded matrix format). Does anyone have any utilities for working with this format? 
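For concreteness, the storage scheme in question (the one scipy.linalg's banded eigensolver appears to expect for the upper form with u superdiagonals) is a_band[u + i - j, j] = a[i, j]. Below is a sketch of a small packing helper in pure numpy; the name pack_upper_band is made up and it is not an existing scipy utility:

```python
import numpy as np

def pack_upper_band(a, u):
    """Pack a symmetric matrix into LAPACK upper band storage.

    Only the upper triangle of `a` is read; `u` is the number of
    superdiagonals.  Storage rule: ab[u + i - j, j] = a[i, j].
    """
    n = a.shape[0]
    ab = np.zeros((u + 1, n))
    for j in range(n):
        for i in range(max(0, j - u), j + 1):
            ab[u + i - j, j] = a[i, j]
    return ab

# Tridiagonal example: 2.0 on the diagonal, 1.0 on the superdiagonal.
a = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
ab = pack_upper_band(a, 1)
# ab[1] now holds the main diagonal [2, 2, 2]; ab[0] holds the
# superdiagonal [0, 1, 1], shifted one column to the right.
```

If eig_banded indeed defaults to the upper form, the packed array can be handed to it directly; the lower form instead stores ab[i - j, j].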
I figured I would ask before doing this myself. Thanks Brian From jeremit0 at gmail.com Fri Dec 28 19:07:59 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Fri, 28 Dec 2007 19:07:59 -0500 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> <3db594f70712281244i45baf536y358cb28ee6e79f42@mail.gmail.com> Message-ID: <3db594f70712281607n4f79d6a5w9d946c30f08dcc39@mail.gmail.com> On Dec 28, 2007 5:50 PM, Matthieu Brucher wrote: > To make your .i work, you have two things to do : > - first initialize numpy : > %{ > #define SWIG_FILE_WITH_INIT > %} > %include "numpy.i" > %init %{ > import_array(); > %} > > Then result (which is $1 in the typemap) is in fact a wrapper around the > vector, and to get it, you have to dereference it, so the typemap becomes : > %typemap(out) std::vector { > int length = $1.size(); > $result = PyArray_FromDims(1, &length, PyArray_DOUBLE); > memcpy(PyArray_DATA($result),&((*(&$1))[0]),sizeof(double)*length); > } > YEAH!!! Now it works! Thanks a lot! Now that it works, I wonder how this could be done without copying the data. Since a STL vector can also be accessed and treated like an array, could we exploit this to avoid copying? I tried using the function PyArray_FromDimsAndData and passing a reference to the first element using the same syntax as for memcpy, but I got an error similar to the one I previously reported: error: no match for 'operator*' in '*result' > > I think I will add this one to my blog, if nobody minds. I think it would be great to have a simple example like this available to the community. A page on the wiki would probably be best. I could create such a page, but with my limited experience, I'm not sure I would be the best person. I couldn't explain why all the elements in the *.i file are necessary. 
Thanks, Jeremy > > Matthieu > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Fri Dec 28 19:06:57 2007 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Fri, 28 Dec 2007 19:06:57 -0500 Subject: [SciPy-user] assignment of hours of day in time series In-Reply-To: References: Message-ID: Timmie, I'd start with the Python built-in datetime module and see if that helps. Brandon ________________________________________ From: scipy-user-bounces at scipy.org [scipy-user-bounces at scipy.org] On Behalf Of Tim Michelsen [timmichelsen at gmx-topmail.de] Sent: Thursday, December 27, 2007 12:45 PM To: scipy-user at scipy.org Subject: [SciPy-user] assignment of hours of day in time series Hello, how do I assign the hours of a day in time series? I have hourly measurements where hour 1 represents the end of the period 0:00-1:00, 2 the end of the period 1:00-2:00, ... , 24 the end of the period 23:00 to 24:00. When I plot these hourly time series from February to November the curve is continued into December because of that matter. time series then assumes that the value for hour 0:00 of dec, 01 is 0 which then leads do a wrong plotting behaviour. I want to achieve that hour 24 is accounted as the last measurement period of a day and not as the first measurement of the next day (like 0:00). Please help me out here. 
Thanks in advance, Timmie _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From wnbell at gmail.com Fri Dec 28 19:57:45 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 28 Dec 2007 18:57:45 -0600 Subject: [SciPy-user] Need very simple example showing how to return numpy array from C/C++ function In-Reply-To: <3db594f70712281607n4f79d6a5w9d946c30f08dcc39@mail.gmail.com> References: <3db594f70712271320l18ed1c3fh728ff05c8ceae66d@mail.gmail.com> <3db594f70712281058vb291fb1u1604ff8cbaaca204@mail.gmail.com> <3db594f70712281244i45baf536y358cb28ee6e79f42@mail.gmail.com> <3db594f70712281607n4f79d6a5w9d946c30f08dcc39@mail.gmail.com> Message-ID: On Dec 28, 2007 6:07 PM, Jeremy Conlin wrote: > > Now that it works, I wonder how this could be done without copying the data. > Since a STL vector can also be accessed and treated like an array, could we > exploit this to avoid copying? I tried using the function > PyArray_FromDimsAndData and passing a reference to the first element using > the same syntax as for memcpy, but I got an error similar to the one I > previously reported: Ideally you could pluck out the contiguous memory chunk used by the STL vector and hand that to numpy. However I doubt this would work because the vector probably needs to free its data to a different allocator (I would assume numpy uses malloc and the STL uses some C++ weirdness). > I think it would be great to have a simple example like this available to > the community. A page on the wiki would probably be best. I could create > such a page, but with my limited experience, I'm not sure I would be the > best person. I couldn't explain why all the elements in the *.i file are > necessary. Agreed. I'd like to see if anyone else comes up with better ways of interfacing C++ and scipy. 
Matthieu, if you have time you might also include a pointer to these wrappers for numpy's complex types: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/complex_ops.h -- Nathan Bell wnbell at gmail.com From mani.sabri at gmail.com Sat Dec 29 03:28:51 2007 From: mani.sabri at gmail.com (mani sabri) Date: Sat, 29 Dec 2007 11:58:51 +0330 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? Message-ID: <4776059b.03be100a.54cf.2160@mx.google.com> Hi Happy holidays I'm using python 2.4.4, scipy 0.6.0, numpy 1.0.4 and mingw gcc 3.4.5 on win xp sp2 When I try to run a weave example (e.g. array3d.py) from "C:\PROGRAM FILES\Python24\Lib\site-packages\scipy\weave\examples" I have the following typical output: numpy: [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] Pure Inline: Found executable C:\MINGW\BIN\g++.exe g++.exe: C:\program: No such file or directory g++.exe: files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp: No such file or directory g++.exe: files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No such file or directory g++.exe: no input files And this traceback: CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -I"C:\program files\Python24\lib\site-packages\scipy\weave" -I"C:\program files\Python24\lib\site-packages\scipy\weave\scxx" -I"C:\program files\Python24\lib\site-packages\numpy\core\include" -I"C:\program files\Python24\include" -I"C:\program files\Python24\PC" -c C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\man\locals~1\temp\man\python24_intermediate\compiler_12e837eb1ea3ab5199fbcc0e83015e3f\Release\program files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 1 IMHO When I put quotes around -c argument in above command (-c "C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp") everything is fine. Can anyone help me to hack this around? Cheers Mani.
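At the Python level the general fix is to quote every argument containing whitespace before the command line is assembled. The standard library's subprocess.list2cmdline implements the MS C runtime quoting rules, so a sketch of the workaround (illustrating the quoting itself, not weave's internals) looks like:

```python
import subprocess

# The argument list that would otherwise be joined naively into one string.
args = ['g++', '-c',
        r'C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp',
        '-o', 'weave_imp.o']

# list2cmdline wraps any argument containing whitespace in double quotes,
# following the MS C runtime parsing rules.
cmd = subprocess.list2cmdline(args)
print(cmd)
# g++ -c "C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp" -o weave_imp.o
```

Passing args as a list straight to subprocess.call sidesteps the problem entirely when no shell is involved.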
-------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sat Dec 29 03:36:20 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 29 Dec 2007 09:36:20 +0100 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? In-Reply-To: <4776059b.03be100a.54cf.2160@mx.google.com> References: <4776059b.03be100a.54cf.2160@mx.google.com> Message-ID: Hi, The problem is the space in the name of the folder. It should not be there (Python is usually not installed in Program Files). This problem arises regularly with Linux too for inclusion paths with some tools. Matthieu 2007/12/29, mani sabri : > > Hi > > Happy holidays > > I'm using python 2.4.4 , scipy 0.6.0 , numpy 1.0.4 and mingw gcc 3.4.5 on > win xp sp2 > > When I try to run a weave example (e.x array3d.py) from "C:\PROGRAM > FILES\Python24\Lib\site-packages\scipy\weave\examples" I have the following > typical output: > > > > numpy: > > [[[ 0 1 2 3] > > [ 4 5 6 7] > > [ 8 9 10 11]] > > > > [[12 13 14 15] > > [16 17 18 19] > > [20 21 22 23]]] > > > > Pure Inline: > > > > Found executable C:\MINGW\BIN\g++.exe > > g++.exe: C:\program: No such file or directory > > g++.exe: files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp: > No such file or directory > > g++.exe: files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No > such file or directory > > g++.exe: no input files > > > > And This trace back: > > > > CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -I"C:\program > files\Python24\lib\site-packages\scipy\weave" -I"C:\program > files\Python24\lib\site-packages\scipy\weave\scxx" -I"C:\program > files\Python24\lib\site-packages\numpy\core\include" -I"C:\program > files\Python24\include" -I"C:\program files\Python24\PC" -c C:\program > files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o > c:\docume~1\man\locals~1\temp\man\python24_intermediate\compiler_12e837eb1ea3ab5199fbcc0e83015e3f\Release\program > 
files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with > exit status 1 > > IMHO When I put quotes around the -c argument in the above command (-c > "C:\program > files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp") everything is fine. > > Can anyone help me to hack this around? > > > > Cheers > > Mani. > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From mani.sabri at gmail.com Sat Dec 29 04:08:18 2007 From: mani.sabri at gmail.com (mani sabri) Date: Sat, 29 Dec 2007 12:38:18 +0330 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? In-Reply-To: Message-ID: <47760eda.1a7c100a.4a38.13e6@mx.google.com> Hi Yeah, exactly. Do you know where the code responsible for it is? I wasn't able to find the code for the -c option because it does not exist in "-c " form in any .py file in the weave subdir. Maybe it is in makefile.in. IMHO this will be solved by putting quotes around the file path. p.s: numpy.distutils.tests() runs without this problem from the same path! ("C:\program files\Python24..") _____ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Matthieu Brucher Sent: Saturday, December 29, 2007 12:06 PM To: SciPy Users List Subject: Re: [SciPy-user] weave and mingw on win xp sp2 bug? Hi, The problem is the space in the name of the folder. It should not be there (Python is usually not installed in Program Files). This problem arises regularly on Linux too for inclusion paths with some tools.
Matthieu 2007/12/29, mani sabri : CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -I"C:\program files\Python24\lib\site-packages\scipy\weave" -I"C:\program files\Python24\lib\site-packages\scipy\weave\scxx" -I"C:\program files\Python24\lib\site-packages\numpy\core\include" -I"C:\program files\Python24\include" -I"C:\program files\Python24\PC" -c C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\man\locals~1\temp\man\python24_intermediate\compiler_12e837eb1ea 3ab5199fbcc0e83015e3f\Release\program files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 1 IMHO When I put quotes around -c argument in above command (-c "C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp") every thing is fine. Can anyone help me to hack this around? _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sat Dec 29 04:17:40 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 29 Dec 2007 10:17:40 +0100 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? In-Reply-To: <47760eda.1a7c100a.4a38.13e6@mx.google.com> References: <47760eda.1a7c100a.4a38.13e6@mx.google.com> Message-ID: 2007/12/29, mani sabri : > > > > Hi > > Yeah, exactly. do you know where is the code responsible for it? I wasn't > able to find the code for ?c option because it does not exist in "-c " form > in any .py file in weave subdir. May be it is in makefile.in. imho this > will be solved by putting quotes around the file path. 
> It must be in Python distutils module, so your chances to fix it this way are next to zero :( p.s: numpy.distutils.tests() runs without this problem from the same path.! > ("C:\program files\Python24?.") > That can happen. Your best shot is using the default location for Python, that is c:\Python24\ This way, it will work. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From mani.sabri at gmail.com Sat Dec 29 04:52:32 2007 From: mani.sabri at gmail.com (mani sabri) Date: Sat, 29 Dec 2007 13:22:32 +0330 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? In-Reply-To: Message-ID: <47761935.0ec5100a.5a49.15f2@mx.google.com> >-----Original Message----- >From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On >Behalf Of Matthieu Brucher >Sent: Saturday, December 29, 2007 12:48 PM > >2007/12/29, mani sabri : > Hi > Yeah, exactly. do you know where is the code responsible for it? I >wasn't able to find the code for -c option because it does not exist in "-c >" form in any .py file in weave subdir. May be it is in makefile.in. imho >this will be solved by putting quotes around the file path. > >It must be in Python distutils module, so your chances to fix it this way >are next to zero :( > > p.s: numpy.distutils.tests() runs without this problem from the same >path.! ("C:\program files\Python24..") > >That can happen. >Your best shot is using the default location for Python, that is >c:\Python24\ This way, it will work. Then I should do that. Its time to attack the environment variables once more! 
;) > >Matthieu >-- >French PhD student >Website : http://matthieu-brucher.developpez.com/ >Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 >LinkedIn : http://www.linkedin.com/in/matthieubrucher From grh at mur.at Sat Dec 29 06:05:32 2007 From: grh at mur.at (Georg Holzmann) Date: Sat, 29 Dec 2007 12:05:32 +0100 Subject: [SciPy-user] debugging scipy Message-ID: <477629FC.5020306@mur.at> Hallo! I have a problem using scipy in the python debugger (I do that with the eric4 IDE). When I step through the commands, I get an error (only in debug mode!) at the statement: "import scipy" and the error is: ------8<-------- The file /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/pkg_resources.py could not be opened. ------8<-------- and then: ------8<-------- The debugged program raised the exception OSError "[Errno 2] No such file or directory: '/home/holzi/scipytest/test'" File: /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/pkg_resources.py, Line: 1659 Break here? ------8<-------- But my setuptools installation seems to be OK and the file /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/ is present. (My scipy version is 0.6.0, Python 2.5.1.) Does anyone have a hint about what I can try to resolve this problem? Many thanks, LG Georg From mani.sabri at gmail.com Sat Dec 29 07:04:56 2007 From: mani.sabri at gmail.com (mani sabri) Date: Sat, 29 Dec 2007 15:34:56 +0330 Subject: [SciPy-user] weave.test() win mingw Message-ID: <4776383d.0856100a.45fe.2e65@mx.google.com> Hi Below is the result of weave.test() on win xp sp2. Is it OK? Especially those "error removing"s and "Permission denied". Is it safe to use weave in this condition?
Mani --------------- >>>weave.test() Found 1/1 tests for scipy.weave.ast_tools Found 2/2 tests for scipy.weave.blitz_tools Found 9/9 tests for scipy.weave.build_tools Found 0/0 tests for scipy.weave.c_spec Found 26/26 tests for scipy.weave.catalog building extensions here: c:\docume~1\man\locals~1\temp\man\python24_compiled\m0 Found 1/1 tests for scipy.weave.ext_tools Found 0/0 tests for scipy.weave.inline_tools Found 74/74 tests for scipy.weave.size_check Found 16/16 tests for scipy.weave.slice_handler Found 3/3 tests for scipy.weave.standard_array_spec Found 0/0 tests for __main__ ...warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations .....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ............................error removing c:\docume~1\man\locals~1\temp\tmpcbvq4wcat_test: c:\docume~1\man\locals~1\temp\tmpcbvq4wcat_test\win3224compiled_catalog: Permission denied error removing c:\docume~1\man\locals~1\temp\tmpcbvq4wcat_test: c:\docume~1\man\locals~1\temp\tmpcbvq4wcat_test: Directory not empty ............................................................................ .................... ---------------------------------------------------------------------- Ran 132 tests in 1.250s OK >>> From odonnems at yahoo.com Sat Dec 29 09:34:09 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Sat, 29 Dec 2007 06:34:09 -0800 (PST) Subject: [SciPy-user] weave and mingw on win xp sp2 bug? Message-ID: <32610.62959.qm@web58013.mail.re3.yahoo.com> Hi sorry I did not reply about your inquiry of using weave. It took me quite a bit of time to get this to work and I never got any responses. Unfortunately, I am heading out the door so I will keep this extremely short until I can provide a more complete response tonight or tomorrow morning. I was able to get weave inline to work using python 2.5.1 and mingw 5.0.3. 
The path to this needs to be added to your pythonpath variable and you need to specify 'gcc' as your compiler. I also worked with VS2003 and VS2005 Express. And I worked with python 2.4.1 and 2.4.4. None of these combinations worked. There was also a problem in numpy with finding the python.exe file. I cannot remember if this error was given with VS compiler or both VS and MingW. Anyhow, hopefully this will help you out and you can upgrade. I will provide all my notes on what I tried and what I was able to get to work ASAP (tonight or tomorrow). mike ----- Original Message ---- From: mani sabri To: scipy-user at scipy.org Sent: Saturday, December 29, 2007 1:28:51 AM Subject: [SciPy-user] weave and mingw on win xp sp2 bug? Hi Happy holidays I?m using python 2.4.4 , scipy 0.6.0 , numpy 1.0.4 and mingw gcc 3.4.5 on win xp sp2 When I try to run a weave example (e.x array3d.py) from ?C:\PROGRAM FILES\Python24\Lib\site-packages\scipy\weave\examples? I have the following typical output: numpy: [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] Pure Inline: Found executable C:\MINGW\BIN\g++.exe g++.exe: C:\program: No such file or directory g++.exe: files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp: No such file or directory g++.exe: files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No such file or directory g++.exe: no input files And This trace back: CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -I"C:\program files\Python24\lib\site-packages\scipy\weave" -I"C:\program files\Python24\lib\site-packages\scipy\weave\scxx" -I"C:\program files\Python24\lib\site-packages\numpy\core\include" -I"C:\program files\Python24\include" -I"C:\program files\Python24\PC" -c C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\man\locals~1\temp\man\python24_intermediate\compiler_12e837eb1ea3ab5199fbcc0e83015e3f\Release\program 
files\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 1 IMHO When I put quotes around the -c argument in the above command (-c "C:\program files\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp") everything is fine. Can anyone help me to hack this around? Cheers Mani. From stefan at sun.ac.za Sat Dec 29 11:43:25 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 29 Dec 2007 18:43:25 +0200 Subject: [SciPy-user] Scipy for Cluster Detection In-Reply-To: References: <4772F19C.6040308@gmail.com> Message-ID: <20071229164325.GL9051@mentat.za.net> On Thu, Dec 27, 2007 at 01:24:26AM -0500, David Warde-Farley wrote: > Once you have this your problem is what graph theorists would call > identifying the "connected components", which is a well-studied > problem and can be done fairly easily. Google "finding connected > components" or pick up any algorithms textbook at your local library, > such as the Cormen et al. "Introduction to Algorithms". I have implemented the O(N) algorithm described in Christophe Fiorio and Jens Gustedt, "Two linear time Union-Find strategies for image processing", Theoretical Computer Science 154 (1996), pp. 165-181, and Kensheng Wu, Ekow Otoo and Arie Shoshani, "Optimizing connected component labeling algorithms", Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864. The code can be found at http://mentat.za.net/source/connected_components.tar.bz2 Run 'scons' to compile (I can send a setup.py if necessary).
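For readers who want to see the idea without compiling anything, the same kind of two-pass union-find labeling can be sketched in pure Python/NumPy. This is a slow reference version written for this post — not the optimized C code in the tarball — and `label_equal` is a made-up name; it reproduces the labeling of the "Typical usage" example that follows:

```python
import numpy as np

def label_equal(x):
    """Label 4-connected regions of equal value (slow two-pass union-find)."""
    x = np.asarray(x)
    rows, cols = x.shape
    parent = {}          # union-find forest over provisional labels

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the smaller label as root

    prov = np.empty((rows, cols), dtype=int)
    nxt = 0
    # First pass: assign provisional labels, recording equivalences.
    for r in range(rows):
        for c in range(cols):
            same = []
            if r > 0 and x[r - 1, c] == x[r, c]:
                same.append(prov[r - 1, c])
            if c > 0 and x[r, c - 1] == x[r, c]:
                same.append(prov[r, c - 1])
            if same:
                prov[r, c] = min(same)
                for s in same:
                    union(prov[r, c], s)
            else:
                parent[nxt] = nxt
                prov[r, c] = nxt
                nxt += 1

    # Second pass: resolve each pixel to its root and renumber roots
    # consecutively in raster-scan order of first appearance.
    out = np.empty_like(prov)
    relabel = {}
    for r in range(rows):
        for c in range(cols):
            root = find(prov[r, c])
            out[r, c] = relabel.setdefault(root, len(relabel))
    return out

x = [[0, 0, 3, 2, 1, 9],
     [0, 1, 1, 9, 2, 9],
     [0, 0, 1, 9, 9, 9],
     [3, 1, 1, 5, 3, 0]]
labels = label_equal(x)
print(labels)
```

The two-pass structure (provisional labels plus an equivalence table, then a resolve-and-renumber sweep) is the core of the linear-time algorithms cited above; the papers differ in how cleverly they manage the union-find structure.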
Typical usage:

x = N.array([[0, 0, 3, 2, 1, 9],
             [0, 1, 1, 9, 2, 9],
             [0, 0, 1, 9, 9, 9],
             [3, 1, 1, 5, 3, 0]])

labels = connected_components.linear(x)

Result:

[[0, 0, 1, 2, 3, 4],
 [0, 5, 5, 4, 6, 4],
 [0, 0, 5, 4, 4, 4],
 [7, 5, 5, 8, 9, 10]]

The given source searches for neighbours that have the same values, but this can easily be modified to look for values within a certain difference range. Regards Stéfan From corrada at cs.umass.edu Sat Dec 29 12:37:52 2007 From: corrada at cs.umass.edu (Andres Corrada-Emmanuel) Date: Sat, 29 Dec 2007 12:37:52 -0500 Subject: [SciPy-user] scipy.test segfaulting in Windows Message-ID: <477685F0.1000506@cs.umass.edu> Hello, First off, let me thank all the developers of scipy for creating such an excellent tool for scientific computation. After a week of struggling with some C++ packages and their python bindings, I found that the scipy.sandbox.delaunay package is just what I needed. It was a cinch to compile and install in Windows under Cygwin and natively. I got greedy and wanted to make sure that the installation tests worked and got the segfault on both the native Python and the Cygwin Python sides that others have mentioned on the web. I've tried building it with:

python setup.py build config_fc --noopt

but got the same segfault. Any idea of how this can be fixed? What needs to be tweaked? p.s. Using UMFPACK in Cygwin requires that UMFPACK be linked to UFconfig/xerbla/libcerbla.a to avoid linker errors while building scipy with umfpack support. -- Andres Corrada-Emmanuel Research Fellow Aerial Imaging and Remote Sensing Lab Computer Science Department University of Massachusetts at Amherst Blog: www.corrada.com/blog From humufr at yahoo.fr Sat Dec 29 12:47:02 2007 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Sat, 29 Dec 2007 12:47:02 -0500 Subject: [SciPy-user] maskedarray moved?
In-Reply-To: References: <200712281546.33014.humufr@yahoo.fr> Message-ID: <200712291247.03322.humufr@yahoo.fr> Cool, great news. I have been using only maskedarray for some time now, so if it'll be included in numpy that will be far better. Thanks, N. On Friday 28 December 2007 16:03:37, Jarrod Millman wrote: > It is now in a numpy branch: > http://projects.scipy.org/scipy/numpy/browser/branches/maskedarray > > and will be merged with the numpy trunk soon. You can view the API > changes with the existing numpy.ma here: > http://svn.scipy.org/svn/numpy/branches/maskedarray/numpy/ma/API_CHANGES.txt > > Thanks, > Jarrod > > On Dec 28, 2007 12:46 PM, wrote: > > Hi, > > > > I would like to know where maskedarray is. It's not inside the > > sandbox anymore? Where can I find it now? > > > > Thanks, > > > > N. > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user From lists.steve at arachnedesign.net Sat Dec 29 16:16:43 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sat, 29 Dec 2007 16:16:43 -0500 Subject: [SciPy-user] slicot and numpy/scipy Message-ID: <205B1B5C-DFE5-4676-B3FC-343529818F9A@arachnedesign.net> Hi all, I saw some talk in the mail archives (between Travis and Nils) about interfacing scipy w/ slicot, but never saw anything come out of it. I was just curious if this was ever seen through, or if there were problems that couldn't be worked around (aside from the inability to distribute slicot). Thanks, -steve From robert.kern at gmail.com Sat Dec 29 16:29:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 29 Dec 2007 16:29:11 -0500 Subject: [SciPy-user] debugging scipy In-Reply-To: <477629FC.5020306@mur.at> References: <477629FC.5020306@mur.at> Message-ID: <4776BC27.3050304@gmail.com> Georg Holzmann wrote: > Hallo! > > I have a problem using scipy in the python debugger (I do that with the > eric4 IDE).
> > When I step through the commands, I get an error (only in debug > mode!) at the statement: > "import scipy" > and the error is: > ------8<-------- > The file > /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/pkg_resources.py > could not be opened. > ------8<-------- > and then: > ------8<-------- > The debugged program raised the exception OSError > "[Errno 2] No such file or directory: '/home/holzi/scipytest/test'" > File: > /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/pkg_resources.py, > Line: 1659 > Break here? > ------8<-------- > > But my setuptools installation seems to be ok and the file > /usr/lib/python2.5/site-packages/setuptools-0.6c7-py2.5.egg/ is present. > (my scipy version is 0.6.0, Python 2.5.1) > > Has anyone a hint what I can try to resolve this problem ? Eric apparently does not know anything about zipped eggs. The setuptools egg is zipped and pkg_resources.py is inside of it. In debug mode, I imagine it tries to load the source of every module that gets imported. Remove the lines in scipy/__init__.py:

"""
try:
    import pkg_resources as _pr # activate namespace packages (manipulates __path__)
    del _pr
except ImportError:
    pass
"""

I'm not really sure why we put them there, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From corrada at cs.umass.edu Sat Dec 29 20:26:14 2007 From: corrada at cs.umass.edu (Andres Corrada-Emmanuel) Date: Sat, 29 Dec 2007 20:26:14 -0500 Subject: [SciPy-user] scipy.test segfaulting in Windows In-Reply-To: <477685F0.1000506@cs.umass.edu> References: <477685F0.1000506@cs.umass.edu> Message-ID: <4776F3B6.4050008@cs.umass.edu> The problem is related to the test_histogram tests in test_ndimage.py.
Commenting out the tests results in the test suite completing but two failures occur later on (with no segfault) also related to nd_image: generic filter 1 generic 1d filter 1 Andres Corrada-Emmanuel wrote: > Hello, > > First off, let me thank all the developers of scipy for creating such an > excellent tool for scientific computation. After a week of struggling > with some C++ packages and their python bindings, I found that the > scipy.sandbox.delaunay package is just what I needed. It was a cinch to > compile and install in Windows under Cygwin and natively. > > I got greedy and wanted to make sure that the installation tests worked > and got the segfault on both the native Python and the Cygwin Python > sides that others have mentioned on the web. > > I've tried building it with: > > python setup.py build config_fc --noopt > > but got the same segfault. > > Any idea of how this can be fixed? What needs to be tweaked? > > p.s. > > Using UMFPACK in Cygwin requires that UMFPACK be linked to > UFconfig/xerbla/libcerbla.a to avoid linker errors while building scipy > with umfpack support > -- Andres Corrada-Emmanuel Research Fellow Aerial Imaging and Remote Sensing Lab Computer Science Department University of Massachusetts at Amherst Blog: www.corrada.com/blog From stefan at sun.ac.za Sat Dec 29 21:06:38 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 30 Dec 2007 04:06:38 +0200 Subject: [SciPy-user] scipy.test segfaulting in Windows In-Reply-To: <4776F3B6.4050008@cs.umass.edu> References: <477685F0.1000506@cs.umass.edu> <4776F3B6.4050008@cs.umass.edu> Message-ID: <20071230020638.GA11137@mentat.za.net> Hi Andres On Sat, Dec 29, 2007 at 08:26:14PM -0500, Andres Corrada-Emmanuel wrote: > The problem is related to the test_histogram tests in test_ndimage.py. 
> Commenting out the tests results in the test suite completing but two > failures occur later on (with no segfault) also related to nd_image: > > generic filter 1 > generic 1d filter 1 This has been fixed in SVN for some time, so try that version instead. Regards Stéfan From fperez.net at gmail.com Sat Dec 29 22:08:55 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 29 Dec 2007 20:08:55 -0700 Subject: [SciPy-user] Bundling numpy/scipy with applications In-Reply-To: <473953CC.7010402@ar.media.kyoto-u.ac.jp> References: <0CDABAC6-8EA6-409E-B389-578864A6407E@usgs.gov> <47392C92.8020208@ar.media.kyoto-u.ac.jp> <47393D17.9090000@slac.stanford.edu> <47393FD8.7060400@ar.media.kyoto-u.ac.jp> <47394E44.6010608@slac.stanford.edu> <473953CC.7010402@ar.media.kyoto-u.ac.jp> Message-ID: On Nov 13, 2007 12:35 AM, David Cournapeau wrote: > Johann Cohen-Tanugi wrote: > > > > do make install in platform/numpy fails exactly in the same way : > I forgot to tell you to do make clean before, otherwise, it indeed has > no way to fail in a different way :) > > > [cohen at localhost numpy]$ make install > > [===== NOW BUILDING: numpy-1.0.3.1 =====] > > [fetch] complete for numpy. > > [checksum] complete for numpy. > > [extract] complete for numpy. > > [patch] complete for numpy.
> > # Change default path when looking for libs to fake dir, > > # so we can set everything by env variables > > cd work/main.d/numpy-1.0.3.1 && > > PYTHONPATH=/home/cohen/garnumpyinstall/lib/python2.5/site-packages:/home/cohen/garnumpyinstall/lib/python2.5/site-packages/gtk-2.0 > > /usr/bin/python \ > > setup.py config_fc --fcompiler=gnu config > > Traceback (most recent call last): > > File "setup.py", line 90, in > > setup_package() > > File "setup.py", line 60, in setup_package > > from numpy.distutils.core import setup > > File > > "/data1/sources/python/garnumpy-0.4/platform/numpy/work/main.d/numpy-1.0.3.1/numpy/__init__.py", > > line 39, in > > import core > > File > > "/data1/sources/python/garnumpy-0.4/platform/numpy/work/main.d/numpy-1.0.3.1/numpy/core/__init__.py", > > line 8, in > > import numerictypes as nt > > File > > "/data1/sources/python/garnumpy-0.4/platform/numpy/work/main.d/numpy-1.0.3.1/numpy/core/numerictypes.py", > > line 83, in > > from numpy.core.multiarray import typeinfo, ndarray, array, empty, dtype > > ImportError: No module named multiarray > I really don't understand this error: you are configuring numpy for > compilation, so it should not try to import numpy.core (which has no way > to succeed since it is not built yet). IOW, it is a bootstrap error > (numpy build system tries to import numpy). We finally tracked this down to a system-wide numpy confusing the brittle hack that __init__ was using to detect being run from the source directory. This commit: http://projects.scipy.org/scipy/numpy/changeset/4663 Fixes it and allows you to do an in-place build of numpy via python setup.py build_src --inplace python setup.py build_ext --inplace for development. It's a bit of an ugly hack, but at least it works robustly and avoids the problem above (which bit me today). Please let me know if the approach in the patch causes problems for any other types of usage. 
Cheers, f From corrada at cs.umass.edu Sat Dec 29 23:03:58 2007 From: corrada at cs.umass.edu (Andres Corrada-Emmanuel) Date: Sat, 29 Dec 2007 23:03:58 -0500 Subject: [SciPy-user] scipy.test segfaulting in Windows In-Reply-To: <20071230020638.GA11137@mentat.za.net> References: <477685F0.1000506@cs.umass.edu> <4776F3B6.4050008@cs.umass.edu> <20071230020638.GA11137@mentat.za.net> Message-ID: <477718AE.8050408@cs.umass.edu> Thanks Stefan, using the svn versions of nd_image.[ch] fixed the problem. Stefan van der Walt wrote: > Hi Andres > > On Sat, Dec 29, 2007 at 08:26:14PM -0500, Andres Corrada-Emmanuel wrote: >> The problem is related to the test_histogram tests in test_ndimage.py. >> Commenting out the tests results in the test suite completing but two >> failures occur later on (with no segfault) also related to nd_image: >> >> generic filter 1 >> generic 1d filter 1 > > This has been fixed in SVN for some time, so try that version instead. > > Regards > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Andres Corrada-Emmanuel Research Fellow Aerial Imaging and Remote Sensing Lab Computer Science Department University of Massachusetts at Amherst Blog: www.corrada.com/blog From nwagner at iam.uni-stuttgart.de Sun Dec 30 03:26:05 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 30 Dec 2007 09:26:05 +0100 Subject: [SciPy-user] slicot and numpy/scipy In-Reply-To: <205B1B5C-DFE5-4676-B3FC-343529818F9A@arachnedesign.net> References: <205B1B5C-DFE5-4676-B3FC-343529818F9A@arachnedesign.net> Message-ID: On Sat, 29 Dec 2007 16:16:43 -0500 Steve Lianoglou wrote: > Hi all, > > I saw some talk in the mail archives (between Travis and Nils) about > interfacing scipy w/ slicot, but never saw anything come out of it.
> > I was just curious if this was ever seen through, or if >there were > problems that couldn't be worked around (aside from the >inability to > distribute slicot). > > Thanks, > -steve > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi Steve, IIRC it was a license problem but now we have scikits :-). http://www.slicot.org/ Commercial users need to reach a license agreement before using SLICOT software http://www.slicot.org/index.php?site=access Cheers, Nils From mani.sabri at gmail.com Sun Dec 30 04:34:18 2007 From: mani.sabri at gmail.com (mani sabri) Date: Sun, 30 Dec 2007 13:04:18 +0330 Subject: [SciPy-user] weave and mingw on win xp sp2 bug? In-Reply-To: <32610.62959.qm@web58013.mail.re3.yahoo.com> Message-ID: <4777666e.0ac0100a.3bee.7fff@mx.google.com> >-----Original Message----- >From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On >Behalf Of Michael ODonnell >Sent: Saturday, December 29, 2007 6:04 PM >To: SciPy Users List > >Hi sorry I did not reply about your inquiry of using weave. It took me >quite a bit of time to get this to work and I never got any responses. >Unfortunately, I am heading out the door so I will keep this extremely >short until I can provide a more complete response tonight or tomorrow >morning. I was able to get weave inline to work using python 2.5.1 and >mingw 5.0.3. The path to this needs to be added to your pythonpath variable >and you need to specify 'gcc' as your compiler. I also worked with VS2003 >and VS2005 Express. And I worked with python 2.4.1 and 2.4.4. None of these >combinations worked. There was also a problem in numpy with finding the >python.exe file. I cannot remember if this error was given with VS compiler >or both VS and MingW. > I managed to run/compile my Time Wrapping function with weave and python 2.4.4 an hour ago. 
Although weave.tests continues to show the ugly "Permission Denied" message, I had no problem with my func. (And like most Windows users I don't care about that message! It's Windows! ;) ) As I remember, it is the python24.a file in 2.4.4 that makes the whole difference, because gcc doesn't know anything about Microsoft lib files. Anyhow, here is a brief state of my system:

- scipy 0.6.0
- numpy 1.0.4
- Python in c:\python24 (no spaces in the path! It's important)
- environment variables:
  python_lib = c:\Python24\libs\python24.a
  pythonpath = c:\Python24;C:\PROGRAM FILES\Pyrex-0.9.6.4;
  swig_lib = C:\PROGRAM FILES\swingwin-1.3.27\python
  python_include = C:\Python24\include;
  path = C:\MINGW\BIN;C:\CYGWIN\BIN;C:\PROGRAM FILES\swingwin-1.3.27\python;c:\python24

p.s: I saw some posts about cleaning up weave.tests in scipy-dev these days. I hope they take a look at the weave.example directory. From grh at mur.at Sun Dec 30 08:37:38 2007 From: grh at mur.at (Georg Holzmann) Date: Sun, 30 Dec 2007 14:37:38 +0100 Subject: [SciPy-user] debugging scipy In-Reply-To: <4776BC27.3050304@gmail.com> References: <477629FC.5020306@mur.at> <4776BC27.3050304@gmail.com> Message-ID: <47779F22.7020105@mur.at> Hallo! > Eric apparently does not know anything about zipped eggs. The setuptools egg is > zipped and pkg_resource.py is inside of it. In debug mode, I imagine it tries to > load the source of every module that gets imported. Thanks - yes sorry, this problem is only with eric. I just tried pydev/eclipse and there is no such problem. I wanted to find a good IDE for numpy/scipy scripts - pydev/eclipse works quite well, but there is no interactive console during debugging where you could plot some numarrays ...
Thanks, LG Georg From corrada at cs.umass.edu Sun Dec 30 08:46:46 2007 From: corrada at cs.umass.edu (Andres Corrada-Emmanuel) Date: Sun, 30 Dec 2007 08:46:46 -0500 Subject: [SciPy-user] debugging scipy In-Reply-To: <47779F22.7020105@mur.at> References: <477629FC.5020306@mur.at> <4776BC27.3050304@gmail.com> <47779F22.7020105@mur.at> Message-ID: <4777A146.1040309@cs.umass.edu> Hello, On a different slant on this topic: Is there documentation on how to setup a debugging environment for scipy? I mean by that some sort of step-by-step instructions on how to build a debugging version, what tools are useful (nm, objdump, etc.) and how to step thru the pdb and/or gdb at the same time. I once knew how to do this with extension modules with gdb but have lost the knowledge after years of disuse. This sort of documentation could help people like me who are users of scipy to acquire skills that can help the scipy developers when debugging problems. Georg Holzmann wrote: > Hallo! > >> Eric apparently does not know anything about zipped eggs. The setuptools egg is >> zipped and pkg_resource.py is inside of it. In debug mode, I imagine it tries to >> load the source of every module that gets imported. > > Thanks - yes sorry, this problem is only with eric. I just tried > pydev/eclipse and there is no such problem. > I wanted to find a good IDE for numpy/scipy scripts - pydev/eclipse > works quite good, but there is no interactive console during debugging > where you could plot some numarrays ... 
> > Thanks, > LG > Georg > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- Andres Corrada-Emmanuel Research Fellow Aerial Imaging and Remote Sensing Lab Computer Science Department University of Massachusetts at Amherst Blog: www.corrada.com/blog From grh at mur.at Sun Dec 30 10:25:42 2007 From: grh at mur.at (Georg Holzmann) Date: Sun, 30 Dec 2007 16:25:42 +0100 Subject: [SciPy-user] debugging scipy In-Reply-To: <4777A146.1040309@cs.umass.edu> References: <477629FC.5020306@mur.at> <4776BC27.3050304@gmail.com> <47779F22.7020105@mur.at> <4777A146.1040309@cs.umass.edu> Message-ID: <4777B876.9060102@mur.at> Hallo! > On a different slant on this topic: Is there documentation on how to > setup a debugging environment for scipy? I mean by that some sort of > step-by-step instructions on how to build a debugging version, what > tools are useful (nm, objdump, etc.) and how to step thru the pdb and/or > gdb at the same time. I once knew how to do this with extension modules Yes this would be nice - I am currently also trying to debug python and c++ modules for python - so far I tried this now with eclipse (with CDT and PYDEV plugin) but I am not yet there where I want to be ... ;) ... however, I will try again next year ... LG Georg From robert.kern at gmail.com Sun Dec 30 16:39:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 30 Dec 2007 16:39:50 -0500 Subject: [SciPy-user] slicot and numpy/scipy In-Reply-To: References: <205B1B5C-DFE5-4676-B3FC-343529818F9A@arachnedesign.net> Message-ID: <47781026.2060901@gmail.com> Nils Wagner wrote: > IIRC it was a license problem but now we have scikits :-). We also have a policy that scikits must have an open source license. SLICOT is distinctly not open source. So no, SLICOT wrappers will not be a part of scipy or scikits. You're welcome to write wrappers hosted elsewhere and discuss them here, though. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From odonnems at yahoo.com Sun Dec 30 18:01:22 2007 From: odonnems at yahoo.com (Michael ODonnell) Date: Sun, 30 Dec 2007 15:01:22 -0800 (PST) Subject: [SciPy-user] weave and mingw on win xp sp2 bug? Message-ID: <359774.29886.qm@web58014.mail.re3.yahoo.com> Thanks for the info on this. Sorry I was slow at responding to your email. I did compile my notes as to how I got weave.inline to work. I know that when I was working on this before and trying a million different combinations, I also changed the location of my temp directory in the environment variables to C:/Temp. I have learned that everything tends to work best when there are no spaces in directories and ISO standards are followed. I will go ahead and post my notes on what I did but it sounds like you have everything working so feel free to ignore. Like I mentioned I was only able to get weave.inline to work with python 2.5.1 and mingw 5.0.3. I tried python 2.4.1 and 2.4.4 with visual studio compilers and mingw. I hope this information is of some help. This process was new for me so it may not be the best solution or accurate but it does work for me. OS: Windows XP SP2 professional Processor: x86 I suggest you check that MinGW is installed correctly by following the directions on their website. A summary of these directions include: First install MSYS, say, to directory C:\msys\1.0. Then install MinGW to C:\msys\1.0\mingw directory. If you choose to install MinGW to C:\mingw as the installer suggests by default then you have to create a file C:\msys\1.0\etc\fstab containing c:/mingw /mingw and then restart your computer so that /mingw will be mounted. Once this is done you might test the compiler on something to eliminate this as a source of error. 
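The fstab step just described amounts to a one-line configuration file (the paths assume the default C:\msys\1.0 and C:\mingw locations used above):

```
# C:\msys\1.0\etc\fstab -- mount C:\mingw as /mingw inside the MSYS shell
c:/mingw /mingw
```

After restarting, running `gcc --version` from the MSYS prompt is a quick smoke test that both the mount and the compiler work.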
Then make sure you add the MinGW path to PYTHONPATH environment variable (e.g., % PYTHONPATH%; C:\msys\1.0\bin;C:\msys\1.0\mingw\bin;). I then had to modify the numpy distutils exec_command.py. This is likely found in C:\Python25\Lib\site-packages\numpy\ distutils: I commented out this code starting on line 67 or so: ''' if os.name in ['nt','dos']: fdir,fn = os.path.split(pythonexe) fn = fn.upper().replace('PYTHONW','PYTHON') pythonexe = os.path.join(fdir,fn) assert os.path.isfile(pythonexe), '%r is not a file' % (pythonexe,) ''' And then I replaced with these lines: ###MOD edits #C:\Python24\python.exe import string exe = 'python.exe' for p in string.split(os.environ['Path'],';'): fn = os.path.join(os.path.abspath(p),exe) if os.path.isfile(fn): pythonexe = fn ###End of MOD edits These edits worked for me. I was never successful in using VS2003 toolkit or VS2005 Express. I was able to install these and compile some python modules successfully so they did work as compilers. However, changes to the SDK had to be implemented in order for these to work as well. I am not sure why I could not get weave to work with these compilers but once I got something to work I gave up trying to understand due to my time constraints. The alterations I made to the compilers are included below. Apparently the problem has to do with Microsoft and Express. The full software package of visual studio apparently does not have these problems. Here are the edits I made to get these to compile python modules (weave still did not work however). Source for this info: http://www.codeproject.com/KB/wtl/WTLExpress.aspx 1. cd C:\Program Files\Microsoft Platform SDK\include\atl Change SetChainEntry function at line 1725 of atlwin.h - define "int i" at the first line of the function body. 
BOOL SetChainEntry(DWORD dwChainID, CMessageMap* pObject, DWORD dwMsgMapID = 0) { int i; // first search for an existing entry for(i = 0; i < m_aChainEntry.GetSize(); i++) 2.Change AllocStdCallThunk and FreeStdCallThunk at line 287 of atlbase.h to the new macros /* Comment it PVOID __stdcall __AllocStdCallThunk(VOID); VOID __stdcall __FreeStdCallThunk(PVOID); #define AllocStdCallThunk() __AllocStdCallThunk() #define FreeStdCallThunk(p) __FreeStdCallThunk(p) #pragma comment(lib, "atlthunk.lib") */ #define AllocStdCallThunk() HeapAlloc(GetProcessHeap(), 0, sizeof(_stdcallthunk)) #define FreeStdCallThunk(p) HeapFree(GetProcessHeap(), 0, p) 3. Download and install WTL from SourceForge cd to WTL\AppWiz folder, double click setup80x.js (or appropriate version to install the WTL Wizard into VC Express ----- Original Message ---- From: mani sabri To: SciPy Users List Sent: Sunday, December 30, 2007 2:34:18 AM Subject: Re: [SciPy-user] weave and mingw on win xp sp2 bug? >-----Original Message----- >From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On >Behalf Of Michael ODonnell >Sent: Saturday, December 29, 2007 6:04 PM >To: SciPy Users List > >Hi sorry I did not reply about your inquiry of using weave. It took me >quite a bit of time to get this to work and I never got any responses. >Unfortunately, I am heading out the door so I will keep this extremely >short until I can provide a more complete response tonight or tomorrow >morning. I was able to get weave inline to work using python 2.5.1 and >mingw 5.0.3. The path to this needs to be added to your pythonpath variable >and you need to specify 'gcc' as your compiler. I also worked with VS2003 >and VS2005 Express. And I worked with python 2.4.1 and 2.4.4. None of these >combinations worked. There was also a problem in numpy with finding the >python.exe file. I cannot remember if this error was given with VS compiler >or both VS and MingW. 
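The exec_command.py workaround described in this thread (replacing the hard-coded pythonexe with a scan of the Path environment variable) reduces to a small helper. This is only a sketch; `find_executable` is my own name for it, and on Python 3.3+ `shutil.which` does the same job:

```python
import os

def find_executable(exe):
    # Walk the PATH entries, as the quoted exec_command.py patch does
    # with os.environ['Path'] on Windows; os.pathsep keeps the split
    # portable (';' on Windows, ':' on Unix).
    for p in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(os.path.abspath(p), exe)
        if os.path.isfile(candidate):
            return candidate  # first hit wins
    return None

# On the Windows setup discussed here one would look up "python.exe";
# "sh" is used only as a portable example.
print(find_executable("sh"))
```

Note that the patch quoted in this thread never breaks out of the loop, so it keeps the *last* match on Path; returning on the first hit, as above, is the more common convention.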
>

I managed to run/compile my Time Warping function with weave and python 2.4.4 an hour ago. Although the weave.tests continues to show ugly "Permission Denied" message I had no problem with my func. (and like most windows users I don't care about that message! Its windows! ;) ) As I remember it is the python24.a file in 2.4.4 that makes the whole difference because gcc don't know anything about Microsoft lib files. Anyhow here is a brief state of my system:

- scipy 0.6.0
- numpy 1.0.4
- Python in c:\python24 (no spaces in path! Its important)
- environment variables:
  python_lib = c:\Python24\libs\python24.a
  pythonpath = c:\Python24;C:\PROGRAM FILES\Pyrex-0.9.6.4;
  swig_lib = C:\PROGRAM FILES\swingwin-1.3.27\python
  python_include = C:\Python24\include;
  path = C:\MINGW\BIN;C:\CYGWIN\BIN;C:\PROGRAM FILES\swingwin-1.3.27\python;c:\python24

p.s: I saw some posts about cleaning up weave.tests in scipy-dev these days. I hope they take a look at weave.example directory.

From mani.sabri at gmail.com Mon Dec 31 04:08:47 2007
From: mani.sabri at gmail.com (mani sabri)
Date: Mon, 31 Dec 2007 12:38:47 +0330
Subject: [SciPy-user] weave and mingw on win xp sp2 bug?
In-Reply-To: <359774.29886.qm@web58014.mail.re3.yahoo.com>
Message-ID: <4778b1f7.05a4100a.100a.ffffd2d0@mx.google.com>

I just installed mingw in c:\mingw, moved my python24 from "c:\program files\python24" to "c:\python24", and added those environment variables that I mentioned before. Not everything worked (for example weave.test() and most of the examples, but that's not important because IMHO they are outdated), yet my code compiled successfully.

>-----Original Message-----
>From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On
>Behalf Of Michael ODonnell
>Sent: Monday, December 31, 2007 2:31 AM
>To: SciPy Users List
>Subject: Re: [SciPy-user] weave and mingw on win xp sp2 bug?
>
>I also changed the location of my temp directory in the
>environment variables to C:/Temp. I have learned that everything tends to
>work best when there are no spaces in directories and ISO standards are
>followed.
>
>Like I mentioned I was only able to get weave.inline to work with python
>2.5.1 and mingw 5.0.3. I tried python 2.4.1 and 2.4.4 with visual studio
>compilers and mingw.
>
>OS: Windows XP SP2 professional
>Processor: x86
>
>I suggest you check that MinGW is installed correctly by following the
>directions on their website. A summary of these directions include: First
>install MSYS, say, to directory C:\msys\1.0. Then install MinGW to
>C:\msys\1.0\mingw directory. If you choose to install MinGW to C:\mingw as
>the installer suggests by default then you have to create a file
>C:\msys\1.0\etc\fstab containing c:/mingw /mingw and then restart your
>computer so that /mingw will be mounted. Once this is done you might test
>the compiler on something to eliminate this as a source of error. Then make
>sure you add the MinGW path to PYTHONPATH environment variable (e.g., %
>PYTHONPATH%; C:\msys\1.0\bin;C:\msys\1.0\mingw\bin;).
>
>
>I then had to modify the numpy distutils exec_command.py.
This is likely >found in C:\Python25\Lib\site-packages\numpy\ distutils: > > I commented out this code starting on line 67 or so: > ''' > if os.name in ['nt','dos']: > fdir,fn = os.path.split(pythonexe) > fn = fn.upper().replace('PYTHONW','PYTHON') > pythonexe = os.path.join(fdir,fn) > assert os.path.isfile(pythonexe), '%r is not a file' % (pythonexe,) > ''' > > And then I replaced with these lines: > ###MOD edits > #C:\Python24\python.exe > import string > exe = 'python.exe' > for p in string.split(os.environ['Path'],';'): > fn = os.path.join(os.path.abspath(p),exe) > if os.path.isfile(fn): > pythonexe = fn > ###End of MOD edits > >These edits worked for me. > It's strange. > >I was never successful in using VS2003 toolkit or VS2005 Express. I was >able to install these and compile some python modules successfully so they >did work as compilers. However, changes to the SDK had to be implemented in >order for these to work as well. I am not sure why I could not get weave to >work with these compilers but once I got something to work I gave up trying >to understand due to my time constraints. The alterations I made to the >compilers are included below. Apparently the problem has to do with >Microsoft and Express. The full software package of visual studio >apparently does not have these problems. Here are the edits I made to get >these to compile python modules (weave still did not work however). > >Source for this info: http://www.codeproject.com/KB/wtl/WTLExpress.aspx > >1. cd C:\Program Files\Microsoft Platform SDK\include\atl > >Change SetChainEntry function at line 1725 of atlwin.h - define "int i" at >the first line of the function body. 
> >BOOL SetChainEntry(DWORD dwChainID, CMessageMap* pObject, DWORD > >dwMsgMapID = 0) >{ > int i; > // first search for an existing entry > for(i = 0; i < m_aChainEntry.GetSize(); i++) > >2.Change AllocStdCallThunk and FreeStdCallThunk at line 287 of atlbase.h to >the new macros > >/* Comment it >PVOID __stdcall __AllocStdCallThunk(VOID); >VOID __stdcall __FreeStdCallThunk(PVOID); > >#define AllocStdCallThunk() __AllocStdCallThunk() > >#define FreeStdCallThunk(p) __FreeStdCallThunk(p) > >#pragma comment(lib, "atlthunk.lib") >*/ >#define AllocStdCallThunk() HeapAlloc(GetProcessHeap(), > 0, sizeof(_stdcallthunk)) >#define FreeStdCallThunk(p) HeapFree(GetProcessHeap(), 0, p) > >3. Download and install WTL from SourceForge > >cd to WTL\AppWiz folder, double click setup80x.js (or appropriate version >to install the WTL Wizard into VC Express From lev at columbia.edu Mon Dec 31 13:07:51 2007 From: lev at columbia.edu (Lev Givon) Date: Mon, 31 Dec 2007 13:07:51 -0500 Subject: [SciPy-user] Ann: OpenOpt v 0.15 (free optimization framework) In-Reply-To: <47643642.80808@scipy.org> References: <47643642.80808@scipy.org> Message-ID: <20071231180751.GB26755@localhost.cc.columbia.edu> Received from dmitrey on Sat, Dec 15, 2007 at 03:17:06PM EST: > Hi all, > we are glad to inform you about OpenOpt v 0.15 (release), free (license: > BSD) optimization framework for Python language programmers > > Changes since previous release (September 4): > > * some new classes > * several new solvers written > * some more solvers connected > * NLP/NSP solver ralg can handle constrained problems > * some bugfixes > * some enhancements in graphical output (especially for constrained > problems) > > Regards, > OpenOpt developers > http://scipy.org/scipy/scikits/wiki/OpenOpt > http://openopt.blogspot.com/ Why are there two tarballs available for download on http://scipy.org/scipy/scikits/wiki/OpenOptInstall? They both seem to contain the same version of the software. L.G. 
From millman at berkeley.edu Mon Dec 31 17:43:37 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 31 Dec 2007 14:43:37 -0800
Subject: [SciPy-user] planet.scipy.org
Message-ID: 

Hey,

I just wanted to announce that we now have a NumPy/SciPy blog aggregator thanks to Gaël Varoquaux: http://planet.scipy.org/

Feel free to contact me if you have a blog that you would like included.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/